Sample records for sample size sufficient

  1. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
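
    The "customized sample size" reported above follows from requiring a given relative error at a given reliability degree. A minimal sketch of that generic calculation (not the proprietary Cells Analyzer routine), assuming the relative error is the confidence-interval half-width of the mean divided by the mean:

    ```python
    from math import ceil
    from scipy.stats import norm

    def required_cells(cv, relative_error=0.05, reliability=0.95):
        """Number of cells needed so the confidence-interval half-width for the
        mean cell measurement is within `relative_error` of the mean.
        cv: coefficient of variation of individual cell measurements (assumed
        input; generic relative-error formula, not the Cells Analyzer method)."""
        z = norm.ppf(1 - (1 - reliability) / 2)   # 1.96 for a 95% reliability degree
        return ceil((z * cv / relative_error) ** 2)

    # Example with a hypothetical cell-area coefficient of variation of 0.30
    print(required_cells(cv=0.30))   # -> 139 cells
    ```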

  2. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
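
    A short simulation makes the precision argument concrete: estimates of a test-retest reliability of 0.80 (modeled here simply as a Pearson correlation on bivariate normal data, an assumption for illustration) scatter widely at the median sample sizes reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_r = 0.80  # assumed population test-retest reliability
    cov = [[1.0, true_r], [true_r, 1.0]]

    for n in (36, 64, 182, 400):  # median sample sizes reported, plus a larger one
        est = [np.corrcoef(rng.multivariate_normal([0, 0], cov, n), rowvar=False)[0, 1]
               for _ in range(2000)]
        lo, hi = np.percentile(est, [2.5, 97.5])
        print(f"n={n:4d}  95% of estimates fall in [{lo:.2f}, {hi:.2f}]")
    ```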

  3. Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods

    DTIC Science & Technology

    2016-11-01

    ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample

  4. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size sufficient to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated according to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. To date, a guideline using a p-value less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each analysis were examined. The statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
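
    The audit strategy can be mimicked with a small simulation on a synthetic population: draw subsamples covering increasing fractions of the population and compare the subsample statistics with the known parameter. The population and fractions below are illustrative, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    population = rng.normal(loc=50, scale=10, size=20_000)   # synthetic population
    true_mean = population.mean()                            # the known parameter

    for frac in (0.01, 0.05, 0.15, 0.35):
        n = int(frac * population.size)
        means = [rng.choice(population, size=n, replace=False).mean()
                 for _ in range(500)]
        max_abs_err = np.max(np.abs(np.array(means) - true_mean))
        print(f"{frac:>4.0%} of population (n={n:5d}): worst error {max_abs_err:.3f}")
    ```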

  5. 75 FR 81789 - Third Party Testing for Certain Children's Products; Full-Size Baby Cribs and Non-Full-Size Baby...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... sufficient samples of the product, or samples that are identical in all material respects to the product. The... 1220, Safety Standards for Full-Size Baby Cribs and Non-Full-Size Baby Cribs. A true copy, in English... assessment bodies seeking accredited status must submit to the Commission copies, in English, of their...

  6. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  7. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes required for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
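
    The tables described here follow the standard approach of sizing the study so that sensitivity (or specificity) is estimated to within a desired margin of error, inflated for disease prevalence. A sketch of that widely used calculation is below; the specific numbers are illustrative rather than taken from the paper's tables.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, precision, prevalence, alpha=0.05):
        """Total subjects so the CI half-width for sensitivity is `precision`."""
        z = norm.ppf(1 - alpha / 2)
        n_diseased = (z ** 2) * sens * (1 - sens) / precision ** 2
        return ceil(n_diseased / prevalence)      # inflate for non-diseased subjects

    def n_for_specificity(spec, precision, prevalence, alpha=0.05):
        """Total subjects so the CI half-width for specificity is `precision`."""
        z = norm.ppf(1 - alpha / 2)
        n_healthy = (z ** 2) * spec * (1 - spec) / precision ** 2
        return ceil(n_healthy / (1 - prevalence))

    # Example: expected sensitivity 0.90, ±0.05 margin, 20% disease prevalence
    print(n_for_sensitivity(0.90, 0.05, 0.20))    # -> 692 subjects
    ```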

  8. Analysis of variograms with various sample sizes from a multispectral image

    USDA-ARS?s Scientific Manuscript database

    Variogram plays a crucial role in remote sensing application and geostatistics. It is very important to estimate variogram reliably from sufficient data. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100x100-pixel subset was chosen from ...

  9. Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Kelcey, Ben; Shen, Zuchao

    2017-01-01

    A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…

  10. ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS

    EPA Science Inventory

    Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...

  11. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    PubMed

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be determined in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept of "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  12. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    PubMed Central

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561

  13. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial detecting a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  14. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  15. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
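
    The accuracy-in-parameter-estimation (AIPE) logic is easiest to see for a single parameter: choose the smallest n whose expected confidence interval is no wider than desired. The sketch below applies that logic to a simple mean as a stand-in; the SEM-specific procedures described in the paper are implemented in the authors' MBESS package for R.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_ci_width(sd, desired_width, conf=0.95):
        """Smallest n so the large-sample CI for a mean is no wider than
        `desired_width` (simplified AIPE illustration, not the SEM procedure)."""
        z = norm.ppf(1 - (1 - conf) / 2)
        half = desired_width / 2
        return ceil((z * sd / half) ** 2)

    # Example: SD of 1.0, target full CI width of 0.2
    print(n_for_ci_width(sd=1.0, desired_width=0.2))   # -> 385
    ```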

  16. Annual design-based estimation for the annualized inventories of forest inventory and analysis: sample size determination

    Treesearch

    Hans T. Schreuder; Jin-Mann S. Lin; John Teply

    2000-01-01

    The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...

  17. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate reliably home range area. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
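
    The asymptotic rise of estimated home range area with the number of locations can be reproduced with a toy simulation using a minimum convex polygon estimator; the bivariate normal "utilization distribution" below is an assumption for illustration, not the estimator or movement model used in the paper.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(2)

    def mcp_area(n_locations):
        """Minimum convex polygon area for n locations drawn from a bivariate
        normal 'true' utilization distribution (toy model of space use)."""
        points = rng.normal(size=(n_locations, 2))
        return ConvexHull(points).volume   # in 2-D, .volume is the polygon area

    for n in (10, 25, 50, 100, 200, 400):
        areas = [mcp_area(n) for _ in range(500)]
        print(f"n={n:3d}  mean area={np.mean(areas):5.2f}  SD={np.std(areas):4.2f}")
    ```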

  18. Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?

    ERIC Educational Resources Information Center

    Gregg, Brent A.; Sawyer, Jean

    2015-01-01

    The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…

  19. DESCARTES' RULE OF SIGNS AND THE IDENTIFIABILITY OF POPULATION DEMOGRAPHIC MODELS FROM GENOMIC VARIATION DATA.

    PubMed

    Bhaskar, Anand; Song, Yun S

    2014-01-01

    The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.

  1. Comparison of day snorkeling, night snorkeling, and electrofishing to estimate bull trout abundance and size structure in a second-order Idaho stream

    Treesearch

    Russell F. Thurow; Daniel J. Schill

    1996-01-01

    Biologists lack sufficient information to develop protocols for sampling the abundance and size structure of bull trout Salvelinus confluentus. We compared summer estimates of the abundance and size structure of bull trout in a second-order central Idaho stream, derived by day snorkeling, night snorkeling, and electrofishing. We also examined the influence of water...

  2. Detection of bio-signature by microscopy and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Tulej, M.; Wiesendanger, R.; Neuland, M. B.; Meyer, S.; Wurz, P.; Neubeck, A.; Ivarsson, M.; Riedo, V.; Moreno-Garcia, P.; Riedo, A.; Knopp, G.

    2017-09-01

    We demonstrate the detection of micro-sized fossilized bacteria by means of microscopy and mass spectrometry. The characteristic structures of lifelike forms are visualized with micrometre spatial resolution, and mass spectrometric analyses deliver the elemental and isotope composition of the host and fossilized materials. Our studies show that high selectivity in isolating fossilized material from the host phase can be achieved by applying microscope visualization (location), a laser ablation ion source with a sufficiently small laser spot size, and a depth profiling method. Fossilized features can be well isolated from the host phase, and the mass spectrometric measurements can be conducted with sufficiently high accuracy and precision, yielding the quantitative elemental and isotope composition of micro-sized objects. The current performance of the instrument allows measurement of isotope fractionation at the per mill level and definition of the origin of the investigated species by combining optical visualization of the samples (morphology and texture) with chemical characterization of the host and of the micro-sized structures embedded in it. Our isotope analyses involved the bio-relevant B, C, S, and Ni isotopes, which could be measured with sufficient accuracy to draw conclusions about the nature of the micro-sized objects.

  3. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
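
    A commonly used first-order approximation for the penalty of unequal cluster sizes is the design effect of Eldridge et al. (2006); the sketch below uses that approximation (not the paper's second-order PQL derivation) to show why an inflation on the order of 10-15% of clusters is often sufficient.

    ```python
    def design_effect(mean_cluster_size, cv_cluster_size, icc):
        """Design effect for a cluster randomized trial with unequal cluster
        sizes (first-order approximation attributed to Eldridge et al. 2006;
        assumed here, not the paper's PQL-based formula)."""
        m, cv, rho = mean_cluster_size, cv_cluster_size, icc
        return 1 + ((cv ** 2 + 1) * m - 1) * rho

    deff_equal   = design_effect(20, cv_cluster_size=0.0, icc=0.05)
    deff_unequal = design_effect(20, cv_cluster_size=0.5, icc=0.05)
    print(f"extra clusters needed ≈ {100 * (deff_unequal / deff_equal - 1):.0f}%")  # ≈ 13%
    ```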

  4. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
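
    The optimal information size referred to above is typically computed like the sample size of a single adequately powered trial. A minimal sketch for a binary outcome with a given control-group risk and relative risk reduction, unadjusted for heterogeneity, is shown below; the parameter values are illustrative.

    ```python
    from math import ceil
    from scipy.stats import norm

    def optimal_information_size(control_risk, rrr, alpha=0.05, power=0.90):
        """Required meta-analysis sample size (both groups combined) for a binary
        outcome, computed like a conventional two-group sample size calculation."""
        p1 = control_risk
        p2 = control_risk * (1 - rrr)          # intervention-group risk
        p_bar = (p1 + p2) / 2
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_per_group = (z ** 2) * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
        return 2 * ceil(n_per_group)

    # Example: 10% control-group risk, relative risk reduction of 20%
    print(optimal_information_size(0.10, 0.20))   # ≈ 8,600 patients
    ```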

  5. 10 CFR 431.135 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.135 Units to be tested. For each basic model of automatic commercial ice maker selected for testing, a sample of sufficient size shall be selected...

  6. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulation under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrap method outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
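
    The mediation tests compared in the paper can be illustrated in miniature. Below is a hedged sketch of Sobel's test and a simulation-based power estimate for a simple single-level mediation model; the multilevel longitudinal model actually studied requires mixed-effects machinery beyond this sketch, and the path values are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm, linregress

    rng = np.random.default_rng(3)

    def sobel_p(a, se_a, b, se_b):
        """Sobel test p-value for the mediated effect a*b."""
        se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
        return 2 * norm.sf(abs(a * b) / se_ab)

    def power_by_simulation(n, a=0.3, b=0.3, reps=2000, alpha=0.05):
        """Empirical power of the Sobel test for a single-level mediation model
        X -> M -> Y with standardized paths a and b (illustrative values; the
        direct effect is set to zero, so simple regressions suffice here)."""
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + rng.normal(size=n)
            fit_a = linregress(x, m)   # path a: X -> M
            fit_b = linregress(m, y)   # path b: M -> Y
            hits += sobel_p(fit_a.slope, fit_a.stderr, fit_b.slope, fit_b.stderr) < alpha
        return hits / reps

    print(power_by_simulation(n=100))   # empirical power at n = 100
    ```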

  7. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  8. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
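
    The effect-size bands discussed above translate directly into per-arm sample sizes through the standard two-group formula. A quick worked check at conventional 80% power and two-sided α = 0.05:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(smd, alpha=0.05, power=0.80):
        """Per-arm sample size to detect a standardized mean difference `smd`."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z / smd) ** 2)

    for smd in (0.5, 0.3):
        print(f"SMD {smd}: {n_per_arm(smd)} per arm, {2 * n_per_arm(smd)} total")
    # SMD 0.5 needs about 63 per arm; SMD 0.3 needs about 175 per arm, so a trial
    # with ~153 participants in total is powered only for fairly large effects.
    ```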

  9. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  10. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.

    PubMed

    McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M

    2015-03-01

    Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.

  11. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
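
    The fallacy is easy to demonstrate numerically: with very large groups, a trivially small effect is "highly significant". A minimal sketch:

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(4)
    n = 500_000                       # very large groups
    effect = 0.02                     # trivial standardized effect size
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)

    t, p = ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.2e}, Cohen's d = {d:.3f}")   # p is tiny, yet d ≈ 0.02
    ```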

  12. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision while reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
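
    A sketch of the data-generating step behind such a calculator: serostatus by age is simulated from a reversible catalytic model in which the seroconversion rate (SCR) dropped at a known change point before sampling. The rates, the change point, and the simplification for individuals older than the change point are illustrative assumptions, not the authors' implementation; their calculator fits stable-SCR and reduced-SCR models to data like these to build power curves.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def seroprev(age, scr, srr):
        """Reversible catalytic model: seroprevalence at a given age for
        seroconversion rate `scr` (lambda) and seroreversion rate `srr` (rho)."""
        return scr / (scr + srr) * (1 - np.exp(-(scr + srr) * age))

    def simulate_survey(n, scr_old, scr_new, change_point, srr=0.01, max_age=70):
        """Simulate serostatus for a cross-sectional survey where the SCR dropped
        from scr_old to scr_new `change_point` years before sampling. Individuals
        older than the change point are given the old SCR for their whole lifetime,
        a deliberate simplification for this sketch."""
        ages = rng.integers(1, max_age, size=n)
        p = np.where(ages <= change_point,
                     seroprev(ages, scr_new, srr),
                     seroprev(ages, scr_old, srr))
        return ages, rng.random(n) < p

    ages, seropos = simulate_survey(n=600, scr_old=0.10, scr_new=0.02, change_point=10)
    print(f"observed seroprevalence: {seropos.mean():.2f}")
    ```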

  13. NEVADA ARSENIC STUDY

    EPA Science Inventory

    The effects of exposure to arsenic in U.S. drinking water at low levels are difficult to assess. In particular, studies of sufficient sample size on US populations exposed to arsenic in drinking water are few. Churchill County, NV (population 25000) has arsenic levels in drinki...

  14. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

    The Box-Jenkins - GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10,200 observations), and each sample is tested using different combinations of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size for forecasting the gold price with this framework is 1,250 observations, a 5-year sample. Hence, the empirical results of the model selection criteria and the 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (5 years) of the 10,200 observations is sufficient for the Box-Jenkins - GARCH model, with forecasting performance similar to that obtained using the full 41-year data set.
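
    A hedged sketch of the kind of rolling comparison the framework implies, using common Python libraries (statsmodels for the Box-Jenkins step, the arch package for GARCH). The ARIMA order, the GARCH(1,1) choice, and the file name are placeholders, not the specifications identified in the paper.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from arch import arch_model

    def evaluate_sample_size(series, n, order=(1, 1, 1)):
        """Fit ARIMA on the last n observations, fit GARCH(1,1) on its residuals
        (to model residual volatility), and return the absolute 1-step-ahead
        forecast error on the held-out final point."""
        train, actual = series.iloc[-(n + 1):-1], series.iloc[-1]
        arima = ARIMA(train, order=order).fit()
        garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")
        forecast = arima.forecast(steps=1).iloc[0]   # mean forecast from ARIMA
        return abs(forecast - actual)

    # Hypothetical usage with a daily gold price file (placeholder path/column):
    # gold = pd.read_csv("gold_daily.csv", index_col=0, parse_dates=True)["price"]
    # for n in (250, 1250, 2500, 10200):
    #     print(n, evaluate_sample_size(np.log(gold), n))
    ```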

  15. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  16. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  17. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation by using an interpolation technique for reconstructed image points can be reduced. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large size hologram of NK x NK sample points is synthesized by K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  18. Capsule- and disk-filter procedure

    USGS Publications Warehouse

    Skrobialowski, Stanley C.

    2016-01-01

    Capsule and disk filters are disposable, self-contained units composed of a pleated or woven filter medium encased in a polypropylene or other plastic housing that can be connected inline to a sample-delivery system (such as a submersible or peristaltic pump) that generates sufficient pressure (positive or negative) to force water through the filter. Filter media are available in several pore sizes, but 0.45 µm is the pore size used routinely for most studies at this time. Capsule or disk filters (table 5.2.1.A.1) are required routinely for most studies when filtering samples for trace-element analyses and are recommended when filtering samples for major-ion or other inorganic-constituent analyses.

  19. A simple autocorrelation algorithm for determining grain size from digital images of sediment

    USGS Publications Warehouse

    Rubin, D.M.

    2004-01-01

    Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
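
    The core of the autocorrelation technique can be sketched in a few lines: compute the two-dimensional autocorrelation of a sediment image and track how quickly correlation decays with spatial lag, which scales with grain size. This is a generic sketch of the idea, not Rubin's calibrated look-up-curve implementation.

    ```python
    import numpy as np

    def radial_autocorrelation(image, max_lag=30):
        """Mean autocorrelation of a grayscale image as a function of pixel lag,
        computed via FFT. Coarser sediment decorrelates more slowly, so the lag
        at which correlation drops below a threshold scales with grain size."""
        img = image - image.mean()
        f = np.fft.fft2(img)
        acf2d = np.fft.ifft2(f * np.conj(f)).real
        acf2d /= acf2d.flat[0]                       # normalize so lag 0 == 1
        # average correlation over horizontal and vertical lags
        return [(acf2d[0, k] + acf2d[k, 0]) / 2 for k in range(max_lag)]

    def correlation_length(image, threshold=0.5):
        """First lag (in pixels) at which autocorrelation falls below `threshold`."""
        acf = radial_autocorrelation(image)
        return next((k for k, r in enumerate(acf) if r < threshold), len(acf))

    # Usage (hypothetical image arrays): coarse sand should give a larger
    # correlation length than fine sand when photographed at the same scale.
    ```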

  20. Static versus dynamic sampling for data mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John, G.H.; Langley, P.

    1996-12-31

    As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
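
    A minimal sketch of the dynamic-sampling idea: grow the training sample and stop when the mining tool's held-out accuracy stops improving by more than a tolerance, i.e., when a larger sample is "probably close enough". The classifier, dataset, and stopping tolerance below are placeholders, not the paper's exact criterion.

    ```python
    from sklearn.datasets import load_digits          # stand-in for a large database
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_digits(return_X_y=True)
    X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    def dynamic_sample_size(tolerance=0.005, start=100, growth=2):
        """Increase the sample until the mining tool's accuracy gain falls below
        `tolerance` (a sketch of dynamic sampling, not the published scheme)."""
        n, prev_acc = start, 0.0
        while n <= len(X_pool):
            model = DecisionTreeClassifier(random_state=0).fit(X_pool[:n], y_pool[:n])
            acc = accuracy_score(y_test, model.predict(X_test))
            print(f"n={n:5d}  accuracy={acc:.3f}")
            if acc - prev_acc < tolerance:
                return n
            prev_acc, n = acc, n * growth
        return len(X_pool)

    print("sample judged sufficient at n =", dynamic_sample_size())
    ```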

  1. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  2. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    NASA Astrophysics Data System (ADS)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  3. The Army Communications Objectives Measurement System (ACOMS): Survey Design

    DTIC Science & Technology

    1988-04-01

    monthly basis so that the annual sample includes sufficient Hispanics to detect at the .80 power level: (1) Year-to-year changes of 3% in item...Hispanics. The requirements are listed in terms of power level and must be translated into requisite sample sizes. The requirements are expressed as the...annual samples needed to detect certain differences at the 80% power level. Differences in both directions are to be examined, so that a two-tailed

  4. Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders

    USGS Publications Warehouse

    Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael

    2015-01-01

    Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.

  5. Bony pelvic canal size and shape in relation to body proportionality in humans.

    PubMed

    Kurki, Helen K

    2013-05-01

    Obstetric selection acts on the female pelvic canal to accommodate the human neonate and contributes to pelvic sexual dimorphism. There is a complex relationship between selection for obstetric sufficiency and for overall body size in humans. The relationship between selective pressures may differ among populations of different body sizes and proportions, as pelvic canal dimensions vary among populations. Size and shape of the pelvic canal in relation to body size and shape were examined using nine skeletal samples (total female n = 57; male n = 84) from diverse geographical regions. Pelvic, vertebral, and lower limb bone measurements were collected. Principal component analyses demonstrate pelvic canal size and shape differences among the samples. Male multivariate variance in pelvic shape is greater than female variance for North and South Africans. High-latitude samples have larger and broader bodies, and pelvic canals of larger size and, among females, relatively broader medio-lateral dimensions relative to low-latitude samples, which tend to display relatively expanded inlet antero-posterior (A-P) and posterior canal dimensions. Differences in canal shape exist among samples that are not associated with latitude or body size, suggesting independence of some canal shape characteristics from body size and shape. The South Africans are distinctive with very narrow bodies and small pelvic inlets relative to an elongated lower canal in A-P and posterior lengths. Variation in pelvic canal geometry among populations is consistent with a high degree of evolvability in the human pelvis. Copyright © 2013 Wiley Periodicals, Inc.

  6. Passive injection control for microfluidic systems

    DOEpatents

    Paul, Phillip H.; Arnold, Don W.; Neyer, David W.

    2004-12-21

    Apparatus for eliminating siphoning, "dead" regions, and fluid concentration gradients in microscale analytical devices. In its most basic embodiment, the present invention affords passive injection control for both electric field-driven and pressure-driven systems by providing additional fluid flow channels or auxiliary channels disposed on either side of a sample separation column. The auxiliary channels are sized such that volumetric fluid flow rate through these channels, while sufficient to move the sample away from the sample injection region in a timely fashion, is less than that through the sample separation channel or chromatograph.

  7. Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.

    PubMed

    Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol

    2013-02-01

    A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small-mass (≈ 0.7 g) and 15 large-mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., ²³⁵U and ²³⁸U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different from each other, nor were they significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.

  8. Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data

    PubMed Central

    Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.

    2015-01-01

    An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, correlation of observations across the four regions of the brain and small sample sizes preclude the use of existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
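
    A hedged sketch of the basic internal-pilot idea follows: re-estimate the error variance from interim data (with no interim treatment comparison) and recompute the final per-arm sample size from the usual normal-approximation formula. The numbers are illustrative, and this is not the exact small-sample mixed-model procedure developed in the paper.

```python
# Hedged sketch of the internal-pilot idea only: re-estimate the variance from
# interim data (no interim treatment comparison) and recompute the final
# two-sample size from the usual normal-approximation formula.
import numpy as np
from scipy.stats import norm

def final_n_per_arm(interim_values, delta, alpha=0.05, power=0.90):
    """Recompute n per arm after the internal pilot, for detecting 'delta'."""
    sigma2 = np.var(interim_values, ddof=1)            # variance estimated at interim
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2))

rng = np.random.default_rng(0)
pilot = rng.normal(0.0, 1.3, size=40)                  # interim observations, both arms pooled
print(final_n_per_arm(pilot, delta=0.5))
```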

  9. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    PubMed Central

    Feist, Peter; Hummon, Amanda B.

    2015-01-01

    Proteins regulate many cellular functions, and analyzing the presence and abundance of proteins in biological samples is a central focus of proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860

  10. Clinical decision making and the expected value of information.

    PubMed

    Willan, Andrew R

    2007-01-01

    The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report, the concept of the expected value of information is used to determine if the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size, then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.
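
    The EVSI-versus-cost logic described above can be sketched with a highly simplified Monte Carlo calculation under a normal model for incremental net benefit; every number below (prior, per-patient variability, costs, and patient horizon) is an assumption for illustration and is unrelated to the HOPE data.

```python
# Hedged sketch of the expected-value-of-sample-information logic: if, for some
# proposed sample size, EVSI exceeds the trial cost, more information is worth
# collecting. Normal model for incremental net benefit; all numbers illustrative.
import numpy as np

rng = np.random.default_rng(42)
mu0, sd0 = 150.0, 400.0        # prior mean and sd of incremental net benefit ($/patient)
sigma = 4000.0                 # per-patient sd of net benefit
n_future = 100_000             # future patients affected by the decision
fixed_cost, cost_per_patient = 2e6, 5e3

def evsi(n, draws=20_000):
    theta = rng.normal(mu0, sd0, draws)                        # prior draws
    xbar = rng.normal(theta, sigma / np.sqrt(n))               # simulated trial means
    post_var = 1 / (1 / sd0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / sd0**2 + n * xbar / sigma**2)
    value_with_info = np.mean(np.maximum(post_mean, 0.0)) * n_future
    value_without = max(mu0, 0.0) * n_future
    return value_with_info - value_without

for n in (500, 1000, 2000, 4000):                              # n per arm
    net_gain = evsi(n) - (fixed_cost + cost_per_patient * 2 * n)
    print(n, round(net_gain))    # any positive value => current evidence insufficient
```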

  11. 10 CFR 431.325 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...

  12. Characterizing dispersal patterns in a threatened seabird with limited genetic structure

    Treesearch

    Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery

    2009-01-01

    Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...

  13. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
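
    A hedged sketch of the LDA step with scikit-learn is given below; the feature matrix and labels are synthetic placeholders standing in for the per-sample median colour, shape and size traits and the market-grade and defect labels used in the study.

```python
# Hedged sketch of the LDA classification step with scikit-learn. The feature
# matrix is synthetic (a class-dependent shift is added just so the demo has
# separable classes); in the study each row would hold the median colour,
# shape and size traits extracted from a seed sample's images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n_samples, n_traits = 120, 9                     # e.g. 6 colour bands + shape + size traits
grades = rng.integers(0, 4, size=n_samples)      # placeholder market grades (4 classes)
X = rng.normal(size=(n_samples, n_traits)) + grades[:, None] * 0.8

# Calibrate on one split, validate on the hold-out, as in the paper's design.
split = 90
grade_lda = LinearDiscriminantAnalysis().fit(X[:split], grades[:split])
print("market-grade accuracy:", grade_lda.score(X[split:], grades[split:]))

# A second LDA on the same traits for the defective / non-defective decision.
defective = (grades == 0).astype(int)            # placeholder defect labels
defect_lda = LinearDiscriminantAnalysis().fit(X[:split], defective[:split])
print("defective/non-defective accuracy:", defect_lda.score(X[split:], defective[split:]))
```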

  14. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features

    PubMed Central

    McDonald, Linda S.; Panozzo, Joseph F.; Salisbury, Phillip A.; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective. PMID:27176469

  15. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and levels of biochemical markers in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100,000 clinical trials were simulated to calculate the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day differs by 1 day between the treatment and control arms, the sample size needs to be increased by 54% (77 vs 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
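
    The simulation logic can be sketched in a hedged form: for a candidate per-arm sample size, simulate many two-arm trials with a 25% relative reduction in the endpoint and count how often a two-sample t-test rejects at alpha = 0.05. The means and standard deviations below are placeholders, not the CHILL-MI/MITOCARE distribution estimates.

```python
# Hedged sketch of the trial-simulation logic: per-arm sample size vs power for
# a 25% relative treatment effect, using a two-sample t-test. Endpoint means
# and SDs are placeholders, not the study's fitted distributions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

def power(n_per_arm, mean, sd, effect=0.25, alpha=0.05, n_trials=5000):
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(mean, sd, n_per_arm)
        treated = rng.normal(mean * (1 - effect), sd, n_per_arm)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_trials

# A smaller relative spread (as with MSI) means fewer patients for 90% power.
for label, mean, sd in [("MSI-like", 50.0, 18.0), ("MI-size-like", 17.0, 11.0)]:
    for n in (50, 100, 150):
        print(label, n, round(power(n, mean, sd), 2))
```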

  16. Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys

    PubMed Central

    Brookmeyer, Ron

    2015-01-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040
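
    A hedged sketch of the underlying quantities: the incidence rate is approximated by the count in the biomarker-defined early stage divided by the number uninfected times the mean duration of that stage, and the survey is sized so the relative standard error of the estimate is acceptable. The variance expression and all input values below are simplified illustrations, not the paper's derivation.

```python
# Hedged sketch of a crude sample-size search for cross-sectional incidence:
# incidence ~ (count in early stage) / (number uninfected x mean duration).
# Target precision, prevalence, incidence and duration values are illustrative.
import math

def required_survey_size(incidence, prevalence, mean_dur_years, cv_dur,
                         target_rse=0.25):
    """Smallest total n whose approximate relative SE of incidence <= target."""
    for n in range(1000, 2_000_001, 1000):
        n_uninfected = n * (1 - prevalence)
        expected_early = incidence * mean_dur_years * n_uninfected
        # Poisson variation in the early-stage count plus uncertainty in the
        # assumed mean duration (a simplification of the paper's variance terms).
        rse = math.sqrt(1 / expected_early + cv_dur**2)
        if rse <= target_rse:
            return n
    return None

print(required_survey_size(incidence=0.01, prevalence=0.15,
                           mean_dur_years=0.5, cv_dur=0.10))
```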

  17. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    PubMed

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.

  18. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  19. Evaluating Classified MODIS Satellite Imagery as a Stratification Tool

    Treesearch

    Greg C. Liknes; Mark D. Nelson; Ronald E. McRoberts

    2004-01-01

    The Forest Inventory and Analysis (FIA) program of the USDA Forest Service collects forest attribute data on permanent plots arranged on a hexagonal network across all 50 states and Puerto Rico. Due to budget constraints, sample sizes sufficient to satisfy national FIA precision standards are seldom achieved for most inventory variables unless the estimation process is...

  20. An assessment of re-randomization methods in bark beetle (Scolytidae) trapping bioassays

    Treesearch

    Christopher J. Fettig; Christopher P. Dabney; Stepehen R. McKelvey; Robert R. Borys

    2006-01-01

    Numerous studies have explored the role of semiochemicals in the behavior of bark beetles (Scolytidae). Multiple funnel traps are often used to elucidate these behavioral responses. Sufficient sample sizes are obtained by using large numbers of traps to which treatments are randomly assigned once, or by frequent collection of trap catches and subsequent re-...

  1. Individual Differences in ERPs during Mental Rotation of Characters: Lateralization, and Performance Level

    ERIC Educational Resources Information Center

    Beste, Christian; Heil, Martin; Konrad, Carsten

    2010-01-01

    The cognitive process of imagining an object turning around is called mental rotation. Many studies have analyzed mental rotation by means of event-related potentials (ERPs). ERPs were measured during mental rotation of characters in a sample (N = 82) of sufficient size to detect even small effects. A…

  2. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  3. Prospective Evaluation of Intraprostatic Inflammation and Focal Atrophy as a Predictor of Risk of High-Grade Prostate Cancer and Recurrence after Prostatectomy

    DTIC Science & Technology

    2014-07-01

    the two trials. The expected sample size for this work was 100 cases and 200 controls. Tissue was sufficient for 291 of the men (Task 2 completed in...not collected in SELECT), physical activity (PCPT [not collected in SELECT), cigarette smoking status at randomization (SELECT), use of aspirin

  4. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

    ERIC Educational Resources Information Center

    Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

    2015-01-01

    Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…

  5. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.

  6. Bridging scale gaps between regional maps of forest aboveground biomass and field sampling plots using TanDEM-X data

    NASA Astrophysics Data System (ADS)

    Ni, W.; Zhang, Z.; Sun, G.

    2017-12-01

    Several large-scale maps of forest AGB have been released [1] [2] [3]. However, these existing global or regional datasets are only approximations based on combining land cover type and representative values rather than measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies between existing forest biomass stock maps and in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gap between the size of the field sampling plots used to develop (or validate) estimation models and the pixel size of the maps, together with the scarcity of field sampling plots of sufficient size for verifying these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field sampling plots at sizes matching the resolutions of regional maps, and smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density, and tree species. What is measured in field sampling are the geometrical characteristics of forest stands, including DBH, tree heights, and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset that is capable of direct measurement of forest spatial structure may therefore serve as a ladder to bridge the scale gap between the pixel size of regional forest AGB maps and field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structure [7, 8]. In this study, a forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge. The TanDEM-X InSAR data used in this study and the resulting forest AGB map are shown in Figure 2. Technical details and further analysis will be given in the final report. Acknowledgment: This work was supported in part by the National Basic Research Program of China (Grant Nos. 2013CB733401 and 2013CB733404), and in part by the National Natural Science Foundation of China (Grant Nos. 41471311, 41371357, and 41301395).

  7. Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska

    USGS Publications Warehouse

    Richters, Lindsey K.; Pope, Kevin L.

    2011-01-01

    Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.

  8. Integrating scales of seagrass monitoring to meet conservation needs

    USGS Publications Warehouse

    Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.

    2012-01-01

    We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
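
    A hedged sketch of a resampling check on quadrat effort follows (synthetic percent-cover data; station count, cover distribution and effort levels are placeholders, and this is not the study's actual simulation): subsample k observations per station and compare the bay-wide mean against the full-effort mean.

```python
# Hedged sketch: subsample k quadrat observations per station from synthetic
# percent-cover data, then report the bias of the bay-wide mean relative to the
# full-effort mean and its spread across resamples. All inputs are placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_stations, full_effort = 55, 12
cover = rng.beta(1.2, 4.0, size=(n_stations, full_effort)) * 100   # % cover per quadrat
full_mean = cover.mean()

for k in (1, 2, 4, 8):
    means = []
    for _ in range(2000):
        idx = rng.integers(0, full_effort, size=(n_stations, k))
        sub = np.take_along_axis(cover, idx, axis=1)
        means.append(sub.mean())
    bias = np.mean(means) - full_mean
    spread = np.std(means)
    print(f"{k} obs/station: bias = {bias:+.2f}, SD of bay-wide mean = {spread:.2f}")
```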

  9. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).

  10. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging.

    PubMed

    Evans, P G; Chahine, G; Grifone, R; Jacques, V L R; Spalenka, J W; Schülli, T U

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow its use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  11. Compact ultrahigh vacuum sample environments for x-ray nanobeam diffraction and imaging

    NASA Astrophysics Data System (ADS)

    Evans, P. G.; Chahine, G.; Grifone, R.; Jacques, V. L. R.; Spalenka, J. W.; Schülli, T. U.

    2013-11-01

    X-ray nanobeams present the opportunity to obtain structural insight in materials with small volumes or nanoscale heterogeneity. The effective spatial resolution of the information derived from nanobeam techniques depends on the stability and precision with which the relative position of the x-ray optics and sample can be controlled. Nanobeam techniques include diffraction, imaging, and coherent scattering, with applications throughout materials science and condensed matter physics. Sample positioning is a significant mechanical challenge for x-ray instrumentation providing vacuum or controlled gas environments at elevated temperatures. Such environments often have masses that are too large for nanopositioners capable of the required positional accuracy of the order of a small fraction of the x-ray spot size. Similarly, the need to place x-ray optics as close as 1 cm to the sample places a constraint on the overall size of the sample environment. We illustrate a solution to the mechanical challenge in which compact ion-pumped ultrahigh vacuum chambers with masses of 1-2 kg are integrated with nanopositioners. The overall size of the environment is sufficiently small to allow its use with zone-plate focusing optics. We describe the design of sample environments for elevated-temperature nanobeam diffraction experiments and demonstrate in situ diffraction, reflectivity, and scanning nanobeam imaging of the ripening of Au crystallites on Si substrates.

  12. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
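
    The drop in positive predictive value reported above can be illustrated, in a hedged way that is not the authors' empirical calculation, with the textbook relation PPV = power x prior / (power x prior + alpha x (1 - prior)); the prior probability of a true effect and alpha below are assumptions.

```python
# Hedged illustration (not the paper's empirical PPV calculation): low power at
# small sample sizes drags down the chance that a significant result reflects a
# true effect. Prior probability of a true effect and alpha are assumptions.
def ppv(power, alpha=0.05, prior=0.2):
    return power * prior / (power * prior + alpha * (1 - prior))

for power in (0.02, 0.20, 0.50, 0.80):
    print(f"power={power:.2f}  PPV={ppv(power):.2f}")
```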

  13. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    USGS Publications Warehouse

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution before such a mark-recapture effort is initiated. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
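
    A hedged, much-simplified stand-in for the power simulation is sketched below: annual survival is estimated from binomial counts of marked bats (perfect re-encounter is assumed, unlike the Burnham joint live-encounter/dead-recovery model actually used), Wald 95% confidence intervals are formed for two years, and the fraction of simulations with non-overlapping intervals is reported.

```python
# Hedged, much-simplified stand-in for the power simulation: every marked bat
# is assumed to be re-encountered (capture probability = 1), annual survival is
# estimated from binomial counts, and we report how often two years' Wald 95%
# CIs fail to overlap when survival drops by 'drop'. With realistic, low
# re-encounter probabilities the required numbers of marked bats grow sharply.
import numpy as np

rng = np.random.default_rng(11)

def wald_ci(successes, n):
    p = successes / n
    half = 1.96 * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def detection_rate(n_marked, s1=0.80, drop=0.10, n_sims=2000):
    detected = 0
    for _ in range(n_sims):
        lo1, hi1 = wald_ci(rng.binomial(n_marked, s1), n_marked)
        lo2, hi2 = wald_ci(rng.binomial(n_marked, s1 * (1 - drop)), n_marked)
        if hi2 < lo1 or hi1 < lo2:          # non-overlapping 95% CIs
            detected += 1
    return detected / n_sims

for n in (250, 500, 1000, 2500):
    print(n, "marked bats:", round(detection_rate(n), 2))
```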

  14. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  15. The Effect of Size Fraction in Analyses of Benthic Foraminifera Assemblages: A Case Study Comparing Assemblages from the >125 μm and >150 μm Size Fractions

    NASA Astrophysics Data System (ADS)

    Weinkauf, Manuel F. G.; Milker, Yvonne

    2018-05-01

    Benthic Foraminifera assemblages are employed for past environmental reconstructions, as well as for biomonitoring studies in recent environments. Despite their established status for such applications, and existing protocols for sample treatment, not all studies using benthic Foraminifera employ the same methodology. For instance, there is no broad practical consensus whether to use the >125 µm or >150 µm size fraction for benthic foraminiferal assemblage analyses. Here, we use early Pleistocene material from the Pefka E section on the Island of Rhodes (Greece), which has been counted in both size fractions, to investigate whether a 25 µm difference in the counted fraction is already sufficient to have an impact on ecological studies. We analysed the influence of the difference in size fraction on studies of biodiversity as well as multivariate assemblage analyses of the sample material. We found that for both types of studies, the general trends remain the same regardless of the chosen size fraction, but in detail significant differences emerge which are not consistently distributed between samples. Studies which require a high degree of precision can thus not compare results from analyses that used different size fractions, and the inconsistent distribution of differences makes it impossible to develop corrections for this issue. We therefore advocate the consistent use of the >125 µm size fraction for benthic foraminiferal studies in the future.

  16. Improving tritium exposure reconstructions using accelerator mass spectrometry

    PubMed Central

    Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.

    2010-01-01

    Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274

  17. Effect of ambient humidity on the rate at which blood spots dry and the size of the spot produced.

    PubMed

    Denniff, Philip; Woodford, Lynsey; Spooner, Neil

    2013-08-01

    For shipping and storage, dried blood spot (DBS) samples must be sufficiently dry to protect the integrity of the sample. When the blood is spotted, the humidity has the potential to affect the size of the spot created and the speed at which it dries. The areas of DBS produced on three types of substrates were not affected by the humidity under which they were generated. DBS samples reached a steady moisture content 150 min after spotting, or 90 min at humidities below 60% relative humidity. All packaging materials examined provided some degree of protection from external extreme conditions. However, none of the packaging examined provided a total moisture barrier to extreme environmental conditions. Humidity was shown not to affect the spot area, and DBS samples were ready for shipping and storage 2 h after spotting. The packing solutions examined all provided good protection from external high-humidity conditions.

  18. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267

  19. Cancer classification through filtering progressive transductive support vector machine based on gene expression data

    NASA Astrophysics Data System (ADS)

    Lu, Xinguo; Chen, Dan

    2017-08-01

    Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is addressed that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential for cancer prediction based on gene expression data.
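
    A generic progressive-labeling loop is sketched below as a hedged stand-in for the filtering progressive transductive SVM described above: train on the labeled set, move the most confidently predicted unlabeled samples into the training set each round, and refit. The data are synthetic placeholders and the filtering step is reduced to a simple confidence ranking.

```python
# Hedged sketch of a progressive-labeling loop (a stand-in for the paper's
# filtering progressive transductive SVM, not its exact algorithm): train on
# the labeled set, move the most confidently predicted unlabeled samples into
# the training set each round, and refit. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X_lab = rng.normal(size=(30, 20))                 # small labeled gene-expression-like set
y_lab = rng.integers(0, 2, size=30)
X_unlab = rng.normal(size=(300, 20))              # samples lacking follow-up labels

for _ in range(5):                                # progressive labeling rounds
    clf = SVC(kernel='linear', probability=True).fit(X_lab, y_lab)
    if len(X_unlab) == 0:
        break
    proba = clf.predict_proba(X_unlab)
    confidence = proba.max(axis=1)
    pick = np.argsort(confidence)[-20:]           # filter: keep the 20 most confident
    X_lab = np.vstack([X_lab, X_unlab[pick]])
    y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])
    X_unlab = np.delete(X_unlab, pick, axis=0)

print("final training-set size:", len(y_lab))
```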

  20. The visibility of controls and labels on electronic devices and their suitability for people with impaired vision.

    PubMed

    Tan, Hsuan; Boon, Mei Ying; Dain, Stephen J

    2014-01-01

    People with low vision complain of difficulty operating controls on electronic appliances and equipment, which suggests that the readability of controls and their labels falls short of their visual ability. The aim was to investigate whether electronic appliances available today are designed with controls of sufficient size (at least 6/18 Snellen VA) and contrast (at least 30%) to facilitate identification and use by people with low vision. Controls and labels of electronic appliances for sale in retail stores in Singapore (January, February 2012) and a sample of domestic appliances in Sydney, Australia (October, November of 2011) were evaluated in terms of high- and low-importance in function, size and contrast (luminance and colour difference). Labels and controls of 96 electronic appliances were evaluated. All controls were of sufficient size, but 22% (26/117) of high- and 27% (12/44) of low-importance controls measured had insufficient luminance contrast. 79% (152/192) of high- and 46% (24/52) of low-importance labels were of insufficient size. 17% (26/160) of the high- and 3% (1/33) of low-importance labels had insufficient luminance contrast. Most controls and labels of recently available electronic appliances can cause problems for operability in people with low vision.

  1. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  2. Geoscience Education Research Methods: Thinking About Sample Size

    NASA Astrophysics Data System (ADS)

    Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research

    2011-12-01

    Geoscience education research is at a critical point in which conditions are sufficient to propel our field forward toward meaningful improvements in geosciences education practices. Our field has now reached a point where the outcomes of our research are deemed important to end-users and funding agencies, and where we now have a large number of scientists who are either formally trained in geosciences education research, or who have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules of what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size and statistical power will be discussed, in addition to a defense of the notion that bigger is not always better.
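
    For the quantitative side of "how big should my N be?", a hedged illustration is a per-group sample size calculation for a two-sample comparison; the effect sizes, alpha, and power target below are assumptions, not recommendations from the abstract.

```python
# Hedged illustration: per-group sample size for a two-sample t-test as a
# function of the effect size you hope to detect. Effect sizes, alpha and the
# power target are assumptions chosen for the example.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.2, 0.5, 0.8):        # Cohen's d: small, medium, large
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.80,
                           alternative='two-sided')
    print(f"d={d}: about {round(n)} participants per group")
```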

  3. Robust gene selection methods using weighting schemes for microarray data analysis.

    PubMed

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  4. Infrared reflectance spectra: Effects of particle size, provenance and preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Yin-Fong; Myers, Tanya L.; Brauer, Carolyn S.

    2014-09-22

    We have recently developed methods for making more accurate infrared total and diffuse directional - hemispherical reflectance measurements using an integrating sphere. We have found that reflectance spectra of solids, especially powders, are influenced by a number of factors including the sample preparation method, the particle size and morphology, as well as the sample origin. On a quantitative basis we have investigated some of these parameters and the effects they have on reflectance spectra, particularly in the longwave infrared. In the IR the spectral features may be observed as either maxima or minima: In general, upward-going peaks in the reflectance spectrum result from strong surface scattering, i.e. rays that are reflected from the surface without bulk penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. The light signals reflected from solids usually encompass all such effects, but with strong dependencies on particle size and preparation. This paper measures the reflectance spectra in the 1.3 – 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to observe the effects on the spectral features: Bulk materials were ground with a mortar and pestle and sieved to separate the samples into various size fractions between 5 and 500 microns. The median particle size is demonstrated to have large effects on the reflectance spectra. For certain minerals we also observe significant spectral change depending on the geologic origin of the sample. All three such effects (particle size, preparation and provenance) result in substantial change in the reflectance spectra for solid materials; successful identification algorithms will require sufficient flexibility to account for these parameters.

  5. Infrared reflectance spectra: effects of particle size, provenance and preparation

    NASA Astrophysics Data System (ADS)

    Su, Yin-Fong; Myers, Tanya L.; Brauer, Carolyn S.; Blake, Thomas A.; Forland, Brenda M.; Szecsody, J. E.; Johnson, Timothy J.

    2014-10-01

    We have recently developed methods for making more accurate infrared total and diffuse directional - hemispherical reflectance measurements using an integrating sphere. We have found that reflectance spectra of solids, especially powders, are influenced by a number of factors including the sample preparation method, the particle size and morphology, as well as the sample origin. On a quantitative basis we have investigated some of these parameters and the effects they have on reflectance spectra, particularly in the longwave infrared. In the IR the spectral features may be observed as either maxima or minima: In general, upward-going peaks in the reflectance spectrum result from strong surface scattering, i.e. rays that are reflected from the surface without bulk penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. The light signals reflected from solids usually encompass all such effects, but with strong dependencies on particle size and preparation. This paper measures the reflectance spectra in the 1.3 - 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to observe the effects on the spectral features: Bulk materials were ground with a mortar and pestle and sieved to separate the samples into various size fractions between 5 and 500 microns. The median particle size is demonstrated to have large effects on the reflectance spectra. For certain minerals we also observe significant spectral change depending on the geologic origin of the sample. All three such effects (particle size, preparation and provenance) result in substantial change in the reflectance spectra for solid materials; successful identification algorithms will require sufficient flexibility to account for these parameters.

  6. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
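
    The core of the sufficient-summary-statistic idea is to weight each subject's effect estimate by its precision rather than treating all subject means as equally reliable. Below is a minimal sketch (not the authors' exact procedure) contrasting the naive group-level t-test with a fixed-effect, inverse-variance-weighted z-test; the variable names and simulated data are hypothetical, and the weighted test assumes within-subject variances are known and ignores between-subject variance.

```python
import numpy as np
from scipy import stats

def naive_group_test(subject_means):
    """Group-level one-sample t-test on per-subject means (ignores within-subject variance)."""
    return stats.ttest_1samp(subject_means, popmean=0.0)

def weighted_group_test(subject_means, subject_vars, n_trials):
    """Fixed-effect, inverse-variance-weighted group-level z-test.

    Each subject contributes a mean and the variance of that mean
    (within-subject variance / number of trials); noisier subjects
    are down-weighted.  Between-subject variance is ignored here.
    """
    se2 = np.asarray(subject_vars) / np.asarray(n_trials)  # variance of each subject mean
    w = 1.0 / se2                                           # precision weights
    pooled = np.sum(w * subject_means) / np.sum(w)          # weighted group-level effect
    pooled_se = np.sqrt(1.0 / np.sum(w))                    # standard error of the weighted mean
    z = pooled / pooled_se
    p = 2.0 * stats.norm.sf(abs(z))
    return pooled, z, p

# Hypothetical example: 12 subjects with unequal trial counts.
rng = np.random.default_rng(0)
n_trials = rng.integers(20, 200, size=12)
subject_vars = np.full(12, 1.0)                             # within-subject variance of single trials
subject_means = 0.3 + rng.normal(0.0, np.sqrt(subject_vars / n_trials))

print(naive_group_test(subject_means))
print(weighted_group_test(subject_means, subject_vars, n_trials))
```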

  7. Optimizing cord blood sample cryopreservation.

    PubMed

    Harris, David T

    2012-03-01

    Cord blood (CB) banking is becoming more and more commonplace throughout the medical community, both in the USA and elsewhere. It is now generally recognized that storage of CB samples in multiple aliquots is the preferred approach to banking because it allows the greatest number of uses of the sample. However, it is unclear which are the best methodologies for cryopreservation and storage of the sample aliquots. In the current study we analyzed variables that could affect these processes. CB were processed into mononuclear cells (MNC) and frozen in commercially available human serum albumin (HSA) or autologous CB plasma using cryovials of various sizes and cryobags. The bacteriophage phiX174 was used as a model virus to test for cross-contamination. We observed that cryopreservation of CB in HSA, undiluted autologous human plasma and 50% diluted plasma was equivalent in terms of cell recovery and cell viability. We also found that cryopreservation of CB samples in either cryovials or cryobags displayed equivalent thermal characteristics. Finally, we demonstrated that overwrapping the CB storage container in an impermeable plastic sheathing was sufficient to prevent cross-sample viral contamination during prolonged storage in the liquid phase of liquid nitrogen dewar storage. CB may be cryopreserved in either vials or bags without concern for temperature stability. Sample overwrapping is sufficient to prevent microbiologic contamination of the samples while in liquid-phase liquid nitrogen storage.

  8. Blood platelet counts, morphology and morphometry in lions, Panthera leo.

    PubMed

    Du Plessis, L

    2009-09-01

    Due to logistical problems in obtaining sufficient blood samples from apparently healthy animals in the wild in order to establish normal haematological reference values, only limited information regarding the blood platelet count and morphology of free-living lions (Panthera leo) is available. This study provides information on platelet counts and describes their morphology with particular reference to size in two normal, healthy and free-ranging lion populations. Blood samples were collected from a total of 16 lions. Platelet counts, determined manually, ranged between 218 and 358 x 10(9)/l. Light microscopy showed mostly activated platelets of various sizes with prominent granules. At the ultrastructural level the platelets revealed typical mammalian platelet morphology. However, morphometric analysis revealed a significant difference (P < 0.001) in platelet size between the two groups of animals. Basic haematological information obtained in this study may be helpful in future comparative studies between animals of the same species as well as in other felids.

  9. Size-assortative mating and sexual size dimorphism are predictable from simple mechanics of mate-grasping behavior

    PubMed Central

    2010-01-01

    Background A major challenge in evolutionary biology is to understand the typically complex interactions between diverse counter-balancing factors of Darwinian selection for size-assortative mating and sexual size dimorphism. It appears that a simple mechanism can rarely provide a major explanation of these phenomena. The mechanics of behavior can predict animal morphology, such as adaptations to locomotion in animals from various taxa, but its potential to predict size-assortative mating and its evolutionary consequences has been less explored. Mate-grasping by males, using specialized adaptive morphologies of their forelegs, midlegs or even antennae wrapped around the female body at specific locations, is a general mating strategy of many animals, but the contribution of the mechanics of this widespread behavior to the evolution of mating behavior and sexual size dimorphism has been largely ignored. Results Here, we explore the consequences of a simple, and previously ignored, fact: in a grasping posture the position of the male's grasping appendages relative to the female's body is often a function of the body size difference between the sexes. Using an approach taken from robot mechanics we model coercive grasping of females by water strider Gerris gracilicornis males during mating initiation struggles. We determine that the male optimal size (relative to the female size), which gives the males the highest grasping force, properly predicts the experimentally measured highest mating success. Through field sampling and simulation modeling of a natural population we determine that the simple mechanical model, which ignores most of the other hypothetical counter-balancing selection pressures on body size, is sufficient to account for the size-assortative mating pattern as well as the species-specific sexual dimorphism in body size of G. gracilicornis. Conclusion The results indicate how a simple and previously overlooked physical mechanism common in many taxa is sufficient to account for, or importantly contribute to, size-assortative mating and its consequences for the evolution of sexual size dimorphism. PMID:21092131

  10. Biofouling on buoyant marine plastics: An experimental study into the effect of size on surface longevity.

    PubMed

    Fazey, Francesca M C; Ryan, Peter G

    2016-03-01

    Recent estimates suggest that roughly 100 times more plastic litter enters the sea than is found floating at the sea surface, despite the buoyancy and durability of many plastic polymers. Biofouling by marine biota is one possible mechanism responsible for this discrepancy. Microplastics (<5 mm in diameter) are more scarce than larger size classes, which makes sense because fouling is a function of surface area whereas buoyancy is a function of volume; the smaller an object, the greater its relative surface area. We tested whether plastic items with high surface area to volume ratios sank more rapidly by submerging 15 different sizes of polyethylene samples in False Bay, South Africa, for 12 weeks to determine the time required for samples to sink. All samples became sufficiently fouled to sink within the study period, but small samples lost buoyancy much faster than larger ones. There was a direct relationship between sample volume (buoyancy) and the time to attain a 50% probability of sinking, which ranged from 17 to 66 days of exposure. Our results provide the first estimates of the longevity of different sizes of plastic debris at the ocean surface. Further research is required to determine how fouling rates differ on free floating debris in different regions and in different types of marine environments. Such estimates could be used to improve model predictions of the distribution and abundance of floating plastic debris globally. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. One-step estimation of networked population size: Respondent-driven capture-recapture with anonymity.

    PubMed

    Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk

    2018-01-01

    Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
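
    For orientation, the classical two-sample capture-recapture estimator that such network-based, one-step estimators generalize can be written in a few lines. The sketch below implements Chapman's bias-corrected version of the Lincoln-Petersen estimator, not the respondent-driven estimators proposed in the paper; the example numbers are hypothetical.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-sample capture-recapture estimator.

    n1 -- individuals captured and marked in the first sample
    n2 -- individuals captured in the second sample
    m  -- marked individuals recaptured in the second sample
    Returns the estimated population size and its approximate variance.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var_hat = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var_hat

# Hypothetical example: 400 marked, 500 captured in the second wave, 40 recaptures.
n_hat, var_hat = chapman_estimate(400, 500, 40)
print(round(n_hat), round(var_hat ** 0.5))   # point estimate and rough standard error
```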

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Jaejin; Woo, Jong-Hak; Mulchaey, John S.

    We perform a comprehensive study of X-ray cavities using a large sample of X-ray targets selected from the Chandra archive. The sample is selected to cover a large dynamic range including galaxy clusters, groups, and individual galaxies. Using β-modeling and unsharp masking techniques, we investigate the presence of X-ray cavities for 133 targets that have sufficient X-ray photons for analysis. We detect 148 X-ray cavities from 69 targets and measure their properties, including cavity size, angle, and distance from the center of the diffuse X-ray gas. We confirm the strong correlation between cavity size and distance from the X-ray center, similar to previous studies. We find that the detection rates of X-ray cavities are similar among galaxy clusters, groups and individual galaxies, suggesting that the formation mechanism of X-ray cavities is independent of environment.

  13. The impact of the Sarbanes Oxley Act on auditing fees: An empirical study of the oil and gas industry

    NASA Astrophysics Data System (ADS)

    Ezelle, Ralph Wayne, Jr.

    2011-12-01

    This study examines auditing of energy firms before and after the Sarbanes Oxley Act of 2002. The research explores factors impacting the asset-adjusted audit fee of oil and gas companies and specifically examines the effect of the Sarbanes Oxley Act. This research analyzes multiple years of audit fees for firms engaged in the oil and gas industry. Pooled samples were created to improve statistical power, with sample sizes sufficient to test for medium and large effect sizes. The Sarbanes Oxley Act significantly increased a firm's asset-adjusted audit fees. Additional findings are that part of the variance in audit fees was attributable to the market value of the enterprise, the number of subsidiaries, the receivables and inventory, the debt ratio, non-profitability, and receipt of a going concern report.

  14. Breast Reference Set Application: Chris Li-FHCRC (2014) — EDRN Public Portal

    Cancer.gov

    This application proposes to use Reference Set #1. We request access to serum samples collected at the time of breast biopsy from subjects with IC (n=30) or benign disease without atypia (n=30). Statistical power: With 30 BC cases and 30 normal controls, a 25% difference in mean metabolite levels can be detected between groups with 80% power and α=0.05, assuming coefficients of variation of 30%, consistent with our past studies. These sample sizes appear sufficient to enable detection of changes similar in magnitude to those previously reported in pre-clinical (BC recurrence) specimens (20).
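
    The quoted power statement can be checked with the standard normal-approximation formula for a two-sample comparison of means. The sketch below (assumptions: equal group sizes, two-sided α, variability expressed as a coefficient of variation) gives a minimum detectable difference of roughly 22% of the mean for n = 30 per group, consistent with the 25% figure stated above.

```python
from scipy.stats import norm

def min_detectable_fraction(n_per_group, cv, alpha=0.05, power=0.80):
    """Smallest between-group difference, as a fraction of the mean, detectable
    with a two-sample comparison (normal approximation, equal group sizes)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * cv * (2.0 / n_per_group) ** 0.5

# 30 cases vs. 30 controls, coefficient of variation 30%
print(min_detectable_fraction(30, 0.30))   # ~0.22, i.e. roughly a 22-25% difference in means
```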

  15. Ground truth crop proportion summaries for US segments, 1976-1979

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.

    1981-01-01

    The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that a sample of this size provides sufficient accuracy for use of the data in initial segment screening.

  16. Risk factors for lower extremity injury: a review of the literature

    PubMed Central

    Murphy, D; Connolly, D; Beynnon, B

    2003-01-01

    Prospective studies on risk factors for lower extremity injury are reviewed. Many intrinsic and extrinsic risk factors have been implicated; however, there is little agreement with respect to the findings. Future prospective studies are needed using sufficient sample sizes of males and females, including collection of exposure data, and using established methods for identifying and classifying injury severity to conclusively determine additional risk factors for lower extremity injury. PMID:12547739

  17. [An investigation of the statistical power of the effect size in randomized controlled trials for the treatment of patients with type 2 diabetes mellitus using Chinese medicine].

    PubMed

    Ma, Li-Xin; Liu, Jian-Ping

    2012-01-01

    To investigate whether the power of the effect size was based on adequate sample size in randomized controlled trials (RCTs) for the treatment of patients with type 2 diabetes mellitus (T2DM) using Chinese medicine. The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms like "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. Searches were limited to trials with an intervention course of at least 3 months in order to identify the information on outcome assessment and the sample size. Data collection forms were made according to the checking lists found in the CONSORT statement. Independent double data extractions were performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 in each trial. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36% of the RCTs, respectively, with a sample size > 150. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample size for statistical analysis was distressingly low and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, and choose a scientific, evidence-based study design and outcome measurements. At the same time, the required sample size should be calculated to ensure a precise research conclusion.

  18. Degradation of radiator performance on Mars due to dust

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Perez-Davis, Marla E.; Rutledge, Sharon K.; Forkapa, Mark

    1992-01-01

    An artificial mineral of the approximate elemental composition of Martian soil was manufactured, crushed, and sorted into four different size ranges. Dust particles from three of these size ranges were applied to arc-textured Nb-1 percent Zr and Cu radiator surfaces to assess their effect on radiator performance. Particles larger than 75 microns did not have sufficient adhesive forces to adhere to the samples at angles greater than about 27 deg. Pre-deposited dust layers were largely removed by clear wind velocities greater than 40 m/s, or by dust-laden wind velocities as low as 25 m/s. Smaller dust grains were more difficult to remove. Abrasion was found to be significant only in high velocity winds (89 m/s or greater). Dust-laden winds were found to be more abrasive than clear wind. Initially dusted samples abraded less than initially clear samples in dust laden wind. Smaller dust particles of the simulant proved to be more abrasive than large. This probably indicates that the larger particles were in fact agglomerates.

  19. Purification of complex samples: Implementation of a modular and reconfigurable droplet-based microfluidic platform with cascaded deterministic lateral displacement separation modules

    PubMed Central

    Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie

    2018-01-01

    Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure an optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, simultaneously encapsulating the DLD-sorted particles and balancing the output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490

  20. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever-decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^-11 A m^2 the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.

  1. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
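
    The detection analysis described above amounts to a binary-response (detected / not detected) model with scat and environmental covariates. A hedged sketch of such a logistic regression is shown below; the column names, coefficients and simulated data are hypothetical and only illustrate the modeling approach, not the authors' fitted values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial data: one row per scat pile, with whether an observer found it.
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "pellet_size_mm": rng.uniform(5, 40, n),
    "pellet_count": rng.integers(1, 60, n),
    "ground_cover_pct": rng.uniform(0, 100, n),
    "recent_rain": rng.integers(0, 2, n),
})
# Simulated detection: larger and more numerous pellets are easier to find,
# dense ground cover and recent rain make detection harder.
linpred = (-1.0 + 0.08 * df["pellet_size_mm"] + 0.03 * df["pellet_count"]
           - 0.02 * df["ground_cover_pct"] - 0.8 * df["recent_rain"])
df["detected"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

model = smf.logit(
    "detected ~ pellet_size_mm + pellet_count + ground_cover_pct + recent_rain",
    data=df,
).fit()
print(model.summary())
```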

  2. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  3. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  4. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  5. Spatial studies of planetary nebulae with IRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawkins, G.W.; Zuckerman, B.

    1991-06-01

    The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey-mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.
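
    As a rough illustration of the deconvolution step, the sketch below applies the Richardson-Lucy iteration to a synthetic one-dimensional scan. It is a minimal version only: the actual IRAS processing combined scans from multiple detectors and HCONs, and the profile, detector response and iteration count here are hypothetical.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=100):
    """Richardson-Lucy deconvolution of a 1-D scan with a known point-spread function.

    Each iteration re-blurs the current estimate, compares it with the observed
    profile, and corrects the estimate; non-negativity is preserved throughout.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)       # guard against division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical example: a narrow source blurred by a detector response ~3x wider.
x = np.arange(200)
truth = np.exp(-0.5 * ((x - 100) / 4.0) ** 2)               # "true" 1-D profile
k = np.arange(-30, 31)
psf = np.exp(-0.5 * (k / 12.0) ** 2)                        # detector response
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")
recovered = richardson_lucy_1d(observed, psf)               # sharper than `observed`
```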

  6. Complex disease and phenotype mapping in the domestic dog

    PubMed Central

    Hayward, Jessica J.; Castelhano, Marta G.; Oliveira, Kyle C.; Corey, Elizabeth; Balkman, Cheryl; Baxter, Tara L.; Casal, Margret L.; Center, Sharon A.; Fang, Meiying; Garrison, Susan J.; Kalla, Sara E.; Korniliev, Pavel; Kotlikoff, Michael I.; Moise, N. S.; Shannon, Laura M.; Simpson, Kenneth W.; Sutter, Nathan B.; Todhunter, Rory J.; Boyko, Adam R.

    2016-01-01

    The domestic dog is becoming an increasingly valuable model species in medical genetics, showing particular promise to advance our understanding of cancer and orthopaedic disease. Here we undertake the largest canine genome-wide association study to date, with a panel of over 4,200 dogs genotyped at 180,000 markers, to accelerate mapping efforts. For complex diseases, we identify loci significantly associated with hip dysplasia, elbow dysplasia, idiopathic epilepsy, lymphoma, mast cell tumour and granulomatous colitis; for morphological traits, we report three novel quantitative trait loci that influence body size and one that influences fur length and shedding. Using simulation studies, we show that modestly larger sample sizes and denser marker sets will be sufficient to identify most moderate- to large-effect complex disease loci. This proposed design will enable efficient mapping of canine complex diseases, most of which have human homologues, using far fewer samples than required in human studies. PMID:26795439

  7. Parameter recovery, bias and standard errors in the linear ballistic accumulator model.

    PubMed

    Visser, Ingmar; Poessé, Rens

    2017-05-01

    The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.

  8. Women's health: periodontitis and its relation to hormonal changes, adverse pregnancy outcomes and osteoporosis.

    PubMed

    Krejci, Charlene B; Bissada, Nabil F

    2012-01-01

    To examine the literature with respect to periodontitis and issues specific to women's health, namely, hormonal changes, adverse pregnancy outcomes and osteoporosis. The literature was evaluated to review reported associations between periodontitis and gender-specific issues, namely, hormonal changes, adverse pregnancy outcomes and osteoporosis. Collectively, the literature provided a large body of evidence that supports various associations between periodontitis and hormonal changes, adverse pregnancy outcomes and osteoporosis; however, certain shortcomings were noted with respect to biases involving definitions, sample sizes and confounding variables. Specific cause and effect relationships could not be delineated at this time and neither could definitive treatment interventions. Future research must include randomised controlled trials with consistent definitions, adequate controls and sufficiently large sample sizes in order to clarify specific associations, identify cause and effect relationships, define treatment options and determine treatment interventions which will lessen the untoward effects on the at-risk populations.

  9. Kidney function endpoints in kidney transplant trials: a struggle for power.

    PubMed

    Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A

    2013-03-01

    Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
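
    The 562-participant threshold quoted above follows from the standard two-arm sample size formula for a difference in means. The sketch below (normal approximation, equal allocation, 10% loss to follow-up) returns roughly 560 for a 5 mL/min difference with a 20 mL/min standard deviation; small discrepancies from 562 come from rounding conventions.

```python
import math
from scipy.stats import norm

def total_sample_size(delta, sd, alpha=0.05, power=0.80, dropout=0.10):
    """Total two-arm sample size to detect a mean difference `delta` in GFR
    (normal approximation, equal allocation, inflated for dropout)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = 2.0 * (z * sd / delta) ** 2
    return math.ceil(2.0 * n_per_arm / (1.0 - dropout))

for delta in (5.0, 7.5, 10.0):
    print(delta, total_sample_size(delta, sd=20.0))
# a 5 mL/min difference needs ~560 participants in total, close to the 562 quoted above
```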

  10. THE CHALLENGE OF DETECTING CLASSICAL SWINE FEVER VIRUS CIRCULATION IN WILD BOAR (SUS SCROFA): SIMULATION OF SAMPLING OPTIONS.

    PubMed

    Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph

    2016-10-01

    Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs (Sus scrofa domesticus) and wild boar (Sus scrofa). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.
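
    The difficulty of detecting low prevalences can be illustrated with the standard "freedom from disease" sample size formula, which is not necessarily the exact model used in the simulation above. Assuming a large population and perfect test sensitivity, the number of hunted animals that must be tested to detect at least one positive with 95% confidence grows steeply as the design prevalence falls.

```python
import math

def samples_to_detect(prevalence, confidence=0.95, sensitivity=1.0):
    """Samples needed to detect at least one positive with the given confidence,
    assuming a large population and a test with the given sensitivity."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence * sensitivity))

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"design prevalence {p:.1%}: {samples_to_detect(p)} animals per interval")
# at 0.1% prevalence roughly 3,000 animals must be tested per interval,
# which is rarely feasible outside the main hunting season
```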

  11. High-capacity ice-recrystallization endpoint assay employing superhydrophobic coatings that is equivalent to the 'splat' assay.

    PubMed

    Graham, Laurie A; Agrawal, Prashant; Oleschuk, Richard D; Davies, Peter L

    2018-04-01

    We have developed an ice recrystallization inhibition (IRI) assay system that allows the side-by-side comparison of up to a dozen samples treated in an identical manner. This system is ideal for determining, by serial dilution, the IRI 'endpoint' where the concentration of a sample is reached that can no longer inhibit recrystallization. Samples can be an order of magnitude smaller in volume (<1 μL) than those used for the conventional 'splat' assay. The samples are pipetted into wells cut out of a superhydrophobic coating on sapphire slides that are covered with a second slide and then snap-frozen in liquid nitrogen. Sapphire is greatly superior to glass in its ability to cool quickly without cracking. As a consequence, the samples freeze evenly as a multi-crystalline mass. The ice grain size is slightly larger than that obtained by the 'splat' assay but can be followed sufficiently well to assess IRI activity by changes in mean grain boundary size. The slides can be washed in detergent and reused with no carryover of IRI activity even from the highest protein concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Using random telephone sampling to recruit generalizable samples for family violence studies.

    PubMed

    Slep, Amy M Smith; Heyman, Richard E; Williams, Mathew C; Van Dyke, Cheryl E; O'Leary, Susan G

    2006-12-01

    Convenience sampling methods predominate in recruiting for laboratory-based studies within clinical and family psychology. The authors used random digit dialing (RDD) to determine whether they could feasibly recruit generalizable samples for 2 studies (a parenting study and an intimate partner violence study). RDD screen response rate was 42-45%; demographics matched those in the 2000 U.S. Census, with small- to medium-sized differences on race, age, and income variables. RDD respondents who qualified for, but did not participate in, the laboratory study of parents showed small differences on income, couple conflicts, and corporal punishment. Time and cost are detailed, suggesting that RDD may be a feasible, effective method by which to recruit more generalizable samples for in-laboratory studies of family violence when those studies have sufficient resources. (c) 2006 APA, all rights reserved.

  13. Particle size analysis of amalgam powder and handpiece generated specimens.

    PubMed

    Drummond, J L; Hathorn, R M; Cailas, M D; Karuhn, R

    2001-07-01

    The increasing interest in the elimination of amalgam particles from the dental waste (DW) stream requires efficient devices to remove these particles. The major objective of this project was to perform a comparative evaluation of five basic methods of particle size analysis in terms of the instrument's ability to quantify the size distribution of the various components within the DW stream. The analytical techniques chosen were image analysis via scanning electron microscopy, standard wire mesh sieves, X-ray sedigraphy, laser diffraction, and electrozone analysis. The DW particle stream components were represented by amalgam powders and handpiece/diamond bur generated specimens of enamel, dentin, whole tooth, and condensed amalgam. Each analytical method quantified the examined DW particle stream components. However, X-ray sedigraphy, electrozone, and laser diffraction particle analyses provided similar results for determining particle distributions of DW samples. These three methods were able to more clearly quantify the properties of the examined powder and condensed amalgam samples. Furthermore, these methods indicated that a significant fraction of the DW stream contains particles less than 20 microm. The findings of this study indicated that the electrozone method is likely to be the most effective technique for quantifying the particle size distribution in the DW particle stream. This method required a relatively small volume of sample, was not affected by density, shape factors or optical properties, and measured a sufficient number of particles to provide a reliable representation of the particle size distribution curve.

  14. 3D Diffraction Microscope Provides a First Deep View

    NASA Astrophysics Data System (ADS)

    Miao, Jianwei

    2005-03-01

    When a coherent diffraction pattern is sampled at a spacing sufficiently finer than the Bragg peak frequency (i.e. the inverse of the sample size), the phase information is in principle encoded inside the diffraction pattern, and can be directly retrieved by using an iterative process. By combining this oversampling phasing method with either coherent X-rays or electrons, a novel form of diffraction microscopy has recently been developed to image nanoscale materials and biological structures. In this talk, I will present the principle of the oversampling method, discuss the first experimental demonstration of this microscope, and illustrate some applications in nanoscience and biology.
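
    A minimal sketch of the iterative process referred to above is the error-reduction algorithm: alternate between enforcing the measured (oversampled) Fourier magnitudes and a real-space support constraint. The object, support and parameters below are hypothetical, and real reconstructions typically use more robust variants (e.g. hybrid input-output) and handle trivial ambiguities such as translation and inversion.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval.

    magnitudes -- oversampled Fourier magnitudes of the unknown object
    support    -- boolean mask, True where the object is allowed to be non-zero
    Alternates between enforcing the measured magnitudes in Fourier space and
    the support / non-negativity constraints in real space.
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))   # random starting phases
    g = np.fft.ifft2(magnitudes * phase).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitudes * np.exp(1j * np.angle(G))               # keep phases, impose magnitudes
        g = np.fft.ifft2(G).real
        g[~support] = 0.0                                       # zero outside the support
        g[g < 0.0] = 0.0                                        # enforce non-negativity
    return g

# Hypothetical test: an object occupying a small region of a larger field,
# so its diffraction pattern is sampled finer than the Bragg-peak frequency.
obj = np.zeros((64, 64))
obj[24:40, 20:44] = np.random.default_rng(1).random((16, 24))
support = np.zeros((64, 64), dtype=bool)
support[24:40, 20:44] = True
magnitudes = np.abs(np.fft.fft2(obj))
recovered = error_reduction(magnitudes, support)                # ~obj up to trivial ambiguities
```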

  15. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.

  16. Ultrasonic characterization of single drops of liquids

    DOEpatents

    Sinha, Dipen N.

    1998-01-01

    Ultrasonic characterization of single drops of liquids. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities.

  17. Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2017-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.

  18. Detection of internal structure by scattered light intensity: Application to kidney cell sorting

    NASA Technical Reports Server (NTRS)

    Goolsby, C. L.; Kunze, M. E.

    1985-01-01

    Scattered light measurements in flow cytometry were successfully used to distinguish cells on the basis of differing morphology and internal structure. Differences in scattered light patterns due to changes in internal structure would be expected to occur at large scattering angles. Practically, the results of these calculations suggest that in experimental situations an array of detectors would be useful. Although in general the detection of the scattered light intensity at several intervals within the 10 to 60 degree region would be sufficient, there are many examples where increased sensitivity could be achieved at other angles. The ability to measure at many different angular intervals would allow the experimenter to empirically select the optimum intervals for the varying conditions of cell size, N/C ratio, granule size and internal structure from sample to sample. The feasibility of making scattered light measurements at many different intervals in flow cytometry was demonstrated. The implementation of simplified versions of these techniques in conjunction with independent measurements of cell size could potentially improve the usefulness of flow cytometry in the study of the internal structure of cells.

  19. Dark field imaging system for size characterization of magnetic micromarkers

    NASA Astrophysics Data System (ADS)

    Malec, A.; Haiden, C.; Kokkinis, G.; Keplinger, F.; Giouroudi, I.

    2017-05-01

    In this paper we demonstrate a dark field video imaging system for the detection and size characterization of individual magnetic micromarkers suspended in liquid and the detection of pathogens utilizing magnetically labelled E.coli. The system follows dynamic processes and interactions of moving micro/nano objects close to or below the optical resolution limit, and is especially suitable for small sample volumes ( 10 μl). The developed detection method can be used to obtain clinical information about liquid contents when an additional biological protocol is provided, i.e., binding of microorganisms (e.g. E.coli) to specific magnetic markers. Some of the major advantages of our method are the increased sizing precision in the micro- and nano-range as well as the setup's simplicity making it a perfect candidate for miniaturized devices. Measurements can thus be carried out in a quick, inexpensive, and compact manner. A minor limitation is that the concentration range of micromarkers in a liquid sample needs to be adjusted in such a manner that the number of individual particles in the microscope's field of view is sufficient.

  20. Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes

    NASA Astrophysics Data System (ADS)

    Kožíšek, Z.; Demo, P.

    2017-11-01

    Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. Critical undercooling, when first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches the stationary value after some time delay and decreases with nucleus size. Only a certain maximum size of nuclei is reached in small volumes near critical undercooling. As a model system, we selected recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to experiments) leads to a size distribution of nuclei for which no critical nuclei in the Ni droplet are formed (the number of critical nuclei < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In lower volumes (V ≤ 10^-9 m^3) nuclei reach only a certain maximum size, which quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.

  1. Challenges in collecting clinical samples for research from pregnant women of South Asian origin: evidence from a UK study.

    PubMed

    Neelotpol, Sharmind; Hay, Alastair W M; Jolly, A Jim; Woolridge, Mike W

    2016-08-31

    To recruit South Asian pregnant women, living in the UK, into a clinicoepidemiological study for the collection of lifestyle survey data and antenatal blood and to retain the women for the later collection of cord blood and meconium samples from their babies for biochemical analysis. A longitudinal study recruiting pregnant women of South Asian and Caucasian origin living in the UK. Recruitment of the participants, collection of clinical samples and survey data took place at the 2 sites within a single UK Northern Hospital Trust. Pregnant women of South Asian origin (study group, n=98) and of Caucasian origin (comparison group, n=38) living in Leeds, UK. Among the participants approached, 81% agreed to take part in the study while a 'direct approach' method was followed. The retention rate of the participants was a remarkable 93.4%. The main challenges in recruiting the ethnic minority participants were their cultural and religious conservativeness, language barrier, lack of interest and feeling of extra 'stress' in taking part in research. The chief investigator developed an innovative participant retention method, associated with the women's cultural and religious practices. The method proved useful in retaining the participants for about 5 months and in enabling successful collection of clinical samples from the same mother-baby pairs. The collection of clinical samples and lifestyle data exceeded the calculated sample size required to give the study sufficient power. The numbers of samples obtained were: maternal blood (n=171), cord blood (n=38), meconium (n=176), lifestyle questionnaire data (n=136) and postnatal records (n=136). Recruitment and retention of participants, according to the calculated sample size, ensured sufficient power and success for a clinicoepidemiological study. Results suggest that development of trust and confidence between the participant and the researcher is the key to the success of a clinical and epidemiological study involving ethnic minorities. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  2. Peak-Flux-Density Spectra of Large Solar Radio Bursts and Proton Emission from Flares.

    DTIC Science & Technology

    1985-08-19

    of the microwave peak (≥ 1000 sfu in U-bursts) served as an indicator that the energy release during the impulsive phase was sufficient to produce a... energy or wavelength tends to be prominent in all, and cautions about over-interpreting associations/correlations observed in samples of big flares...Sung, L. S., and McDonald, F. B. (1975) The variation of solar proton energy spectra and size distribution with heliolongitude, Sol. Phys. 41: 189. 28

  3. Whale sharks target dense prey patches of sergestid shrimp off Tanzania

    PubMed Central

    Rohner, Christoph A.; Armstrong, Amelia J.; Pierce, Simon J.; Prebble, Clare E. M.; Cagua, E. Fernando; Cochran, Jesse E. M.; Berumen, Michael L.; Richardson, Anthony J.

    2015-01-01

    Large planktivores require high-density prey patches to make feeding energetically viable. This is a major challenge for species living in tropical and subtropical seas, such as whale sharks Rhincodon typus. Here, we characterize zooplankton biomass, size structure and taxonomic composition from whale shark feeding events and background samples at Mafia Island, Tanzania. The majority of whale sharks were feeding (73%, 380 of 524 observations), with the most common behaviour being active surface feeding (87%). We used 20 samples collected from immediately adjacent to feeding sharks and an additional 202 background samples for comparison to show that plankton biomass was ∼10 times higher in patches where whale sharks were feeding (25 vs. 2.6 mg m−3). Taxonomic analyses of samples showed that the large sergestid Lucifer hanseni (∼10 mm) dominated while sharks were feeding, accounting for ∼50% of identified items, while copepods (<2 mm) dominated background samples. The size structure was skewed towards larger animals representative of L. hanseni in feeding samples. Thus, whale sharks at Mafia Island target patches of dense, large zooplankton dominated by sergestids. Large planktivores, such as whale sharks, which generally inhabit warm oligotrophic waters, aggregate in areas where they can feed on dense prey to obtain sufficient energy. PMID:25814777

  4. Insight into Primordial Solar System Oxygen Reservoirs from Returned Cometary Samples

    NASA Technical Reports Server (NTRS)

    Brownlee, D. E.; Messenger, S.

    2004-01-01

    The recent successful rendezvous of the Stardust spacecraft with comet Wild-2 will be followed by its return of cometary dust to Earth in January 2006. Results from two separate dust impact detectors suggest that the spacecraft collected approximately the nominal fluence of at least 1,000 particles larger than 15 micrometers in size. While constituting only about one microgram total, these samples will be sufficient to answer many outstanding questions about the nature of cometary materials. More than two decades of laboratory studies of stratospherically collected interplanetary dust particles (IDPs) of similar size have established the microparticle handling and analytical techniques necessary to study them. It is likely that some IDPs are in fact derived from comets, although complex orbital histories of individual particles have made these assignments difficult to prove. Analysis of bona fide cometary samples will be essential for answering some fundamental outstanding questions in cosmochemistry, such as (1) the proportion of interstellar and processed materials that comprise comets and (2) whether the Solar System had an O-16-rich reservoir. Abundant silicate stardust grains have recently been discovered in anhydrous IDPs, in far greater abundances (200-5,500 ppm) than those in meteorites (25 ppm). Insight into the more subtle O isotopic variations among chondrites and refractory phases will require significantly higher precision isotopic measurements on micrometer-sized samples than are currently available.

  5. Hand coverage by alcohol-based handrub varies: Volume and hand size matter.

    PubMed

    Zingg, Walter; Haidegger, Tamas; Pittet, Didier

    2016-12-01

    Visitors of an infection prevention and control conference performed hand hygiene with 1, 2, or 3 mL of ultraviolet light-traced alcohol-based handrub. Coverage of palms, dorsums, and fingertips was measured from digital images. Palms of all hand sizes were sufficiently covered when 2 mL was applied, whereas dorsums of medium and large hands were never sufficiently covered. Palmar fingertips were sufficiently covered when 2 or 3 mL was applied, and dorsal fingertips were never sufficiently covered. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  6. Experimental Effects on IR Reflectance Spectra: Particle Size and Morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiswenger, Toya N.; Myers, Tanya L.; Brauer, Carolyn S.

    For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on the species' infrared reflectance spectra. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface and a bulk phenomenon, incorporating both dispersion and absorption effects. The same spectral features can even be observed as either a maximum or a minimum. The complex effects depend on particle size and preparation, as well as the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going peaks in the reflectance spectrum usually result from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are well known, we report seminal measurements of reflectance along with quantified particle sizes of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3-16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to understand the effects on the spectral features as a function of the mean grain size of the sample. We report results for both sodium sulfate (Na2SO4) and ammonium sulfate ((NH4)2SO4); the optical constants have been measured for (NH4)2SO4. To go a step further, from the laboratory to the field, we explore our understanding of particle size effects on reflectance spectra in the field using standoff detection. This has helped identify weaknesses and strengths in detection at standoff distances of up to 160 meters from the target. The studies have shown that particle size has an enormous influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data that include the particle sizes of interest.

  7. A spinner magnetometer for large Apollo lunar samples.

    PubMed

    Uehara, M; Gattacceca, J; Quesnel, Y; Lepaulard, C; Lima, E A; Manfredi, M; Rochette, P

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.
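
    The stacking step described above can be illustrated with a short Python sketch: readings are binned by encoder angle, averaged over many revolutions, and a sinusoid is fitted to recover the dipole moment. The equatorial-dipole geometry factor and all numerical values below are illustrative assumptions, not the instrument's calibration.

        import numpy as np

        MU0 = 4e-7 * np.pi                       # vacuum permeability (T*m/A)

        def stack_and_fit(angles_rad, field_T, r_m=0.15, n_bins=72):
            """Average field readings in encoder-angle bins over many revolutions and
            fit B(theta) = a*cos(theta) + b*sin(theta) + c; the sinusoidal amplitude is
            converted to a moment assuming an equatorial dipole field mu0*m/(4*pi*r^3)."""
            bins = ((angles_rad % (2 * np.pi)) // (2 * np.pi / n_bins)).astype(int)
            stacked = np.array([field_T[bins == k].mean() for k in range(n_bins)])
            theta = (np.arange(n_bins) + 0.5) * 2 * np.pi / n_bins
            design = np.column_stack([np.cos(theta), np.sin(theta), np.ones(n_bins)])
            a, b, _ = np.linalg.lstsq(design, stacked, rcond=None)[0]
            return 4 * np.pi * r_m ** 3 * np.hypot(a, b) / MU0

        # Synthetic check: 20 revolutions, a 5e-6 A*m^2 moment, 0.1 nT sensor noise
        rng = np.random.default_rng(0)
        ang = np.linspace(0.0, 20 * 2 * np.pi, 7200)
        true_m = 5e-6
        clean = MU0 * true_m / (4 * np.pi * 0.15 ** 3) * np.cos(ang)
        noisy = clean + rng.normal(0.0, 1e-10, ang.size)
        print(f"recovered moment ~ {stack_and_fit(ang, noisy):.2e} A*m^2")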

  8. A spinner magnetometer for large Apollo lunar samples

    NASA Astrophysics Data System (ADS)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10-7 Am2 at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  9. Necessary and sufficient conditions for R₀ to be a sum of contributions of fertility loops.

    PubMed

    Rueffler, Claus; Metz, Johan A J

    2013-03-01

    Recently, de-Camino-Beck and Lewis (Bull Math Biol 69:1341-1354, 2007) have presented a method that under certain restricted conditions allows computing the basic reproduction ratio R₀ in a simple manner from life cycle graphs, without, however, giving an explicit indication of these conditions. In this paper, we give various sets of sufficient and generically necessary conditions. To this end, we develop a fully algebraic counterpart of their graph-reduction method which we actually found more useful in concrete applications. Both methods, if they work, give a simple algebraic formula that can be interpreted as the sum of contributions of all fertility loops. This formula can be used in e.g. pest control and conservation biology, where it can complement sensitivity and elasticity analyses. The simplest of the necessary and sufficient conditions is that, for irreducible projection matrices, all paths from birth to reproduction have to pass through a common state. This state may be visible in the state representation for the chosen sampling time, but the passing may also occur in between sampling times, like a seed stage in the case of sampling just before flowering. Note that there may be more than one birth state, like when plants in their first year can already have different sizes at the sampling time. Also the common state may occur only later in life. However, in all cases R₀ allows a simple interpretation as the expected number of new individuals that in the next generation enter the common state deriving from a single individual in this state. We end with pointing to some alternative algebraically simple quantities with properties similar to those of R₀ that may sometimes be used to good effect in cases where no simple formula for R₀ exists.

  10. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
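
    The kind of small-sample simulation study described above can be sketched in a few lines of Python. A naive Wald-type statistic is used here purely as a stand-in for the tests derived in the paper (an assumption); the code estimates the empirical Type-I error under the null hypothesis that both parameters equal a common value p.

        import numpy as np
        from statistics import NormalDist

        def empirical_type1(p=0.3, n_binom=20, alpha=0.05, n_sim=50_000, seed=1):
            """Type-I error of a naive Wald test comparing the success probability
            estimated by negative binomial sampling (trials until the first success)
            with the one estimated by binomial sampling afterwards."""
            rng = np.random.default_rng(seed)
            z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
            rejections = 0
            for _ in range(n_sim):
                trials = rng.geometric(p)                     # negative binomial sampling, r = 1
                p1_hat = 1.0 / trials                         # MLE before the first success
                p2_hat = rng.binomial(n_binom, p) / n_binom   # binomial sampling after it
                var1 = p1_hat ** 2 * (1.0 - p1_hat)           # asymptotic variances; poor in
                var2 = p2_hat * (1.0 - p2_hat) / n_binom      # small samples, hence the paper
                se = np.sqrt(var1 + var2)
                if se > 0.0 and abs(p1_hat - p2_hat) / se > z_crit:
                    rejections += 1
            return rejections / n_sim

        print("empirical Type-I error at nominal 0.05:", empirical_type1())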

  11. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

    Background Even for moderate-size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; then a small number of samples is sufficient for statistical inference. Contribution In this paper, we give a mini-review of the state of the art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling the most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  12. Constructing first-principles phase diagrams of amorphous LixSi using machine-learning-assisted sampling with an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Artrith, Nongnuch; Urban, Alexander; Ceder, Gerbrand

    2018-06-01

    The atomistic modeling of amorphous materials requires structure sizes and sampling statistics that are challenging to achieve with first-principles methods. Here, we propose a methodology to speed up the sampling of amorphous and disordered materials using a combination of a genetic algorithm and a specialized machine-learning potential based on artificial neural networks (ANNs). We show for the example of the amorphous LiSi alloy that around 1000 first-principles calculations are sufficient for the ANN-potential assisted sampling of low-energy atomic configurations in the entire amorphous LixSi phase space. The obtained phase diagram is validated by comparison with the results from an extensive sampling of LixSi configurations using molecular dynamics simulations and a general ANN potential trained to ˜45 000 first-principles calculations. This demonstrates the utility of the approach for the first-principles modeling of amorphous materials.

  13. Size, time, and asynchrony matter: the species-area relationship for parasites of freshwater fishes.

    PubMed

    Zelmer, Derek A

    2014-10-01

    The tendency to attribute species-area relationships to "island biogeography" effectively bypasses the examination of specific mechanisms that act to structure parasite communities. Positive covariation between fish size and infrapopulation richness should not be examined within the typical extinction-based paradigm, but rather should be addressed from the standpoint of differences in colonization potential among individual hosts. Although most mechanisms producing the aforementioned pattern constitute some variation of passive sampling, the deterministic aspects of the accumulation of parasite individuals by fish hosts makes untenable the suggestion that infracommunities of freshwater fishes are stochastic assemblages. At the component community level, application of extinction-dependent mechanisms might be appropriate, given sufficient time for colonization, but these structuring forces likely act indirectly through their effects on the host community to increase the probability of parasite persistence. At all levels, the passive sampling hypothesis is a relevant null model. The tendency for mechanisms that produce species-area relationships to produce nested subset patterns means that for most systems, the passive sampling hypothesis can be addressed through the application of appropriate null models of nested subset structure.
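
    The passive-sampling null model invoked above can be made concrete with a short simulation: parasite individuals are allotted to hosts with probability proportional to host "size" (sampling effort), and infracommunity richness is then tallied per host. The host sizes and species abundances below are purely hypothetical.

        import numpy as np

        def passive_sampling_richness(host_sizes, species_abundances, seed=0):
            """Distribute each parasite species' individuals among hosts in proportion
            to host size and count how many species each host ends up with."""
            rng = np.random.default_rng(seed)
            weights = np.asarray(host_sizes, dtype=float)
            weights /= weights.sum()
            richness = np.zeros(len(weights), dtype=int)
            for abundance in species_abundances:          # one species at a time
                counts = rng.multinomial(abundance, weights)
                richness += counts > 0
            return richness

        fish_sizes = np.linspace(5.0, 50.0, 30)            # hypothetical host sizes
        abundances = [200, 80, 40, 20, 10, 5, 3, 2, 1, 1]  # hypothetical parasite pool
        richness = passive_sampling_richness(fish_sizes, abundances)
        print("size-richness correlation:", round(np.corrcoef(fish_sizes, richness)[0, 1], 2))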

  14. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment, and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at the lowest cost. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
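
    For a purely nested two-arm comparison, the power calculation can be sketched with the familiar design-effect approximation 1 + (m - 1)ρ, as below. This is a simplified stand-in for the article's expressions for the combined nested and crossed design, and all numbers are illustrative.

        from math import sqrt
        from statistics import NormalDist

        def power_nested_two_arm(delta, sigma, icc, k_professionals, m_patients, alpha=0.05):
            """Approximate power for comparing two arms when each arm has k health
            professionals who each treat m patients (normal approximation with
            design effect 1 + (m - 1) * icc)."""
            deff = 1.0 + (m_patients - 1.0) * icc
            var_diff = 2.0 * sigma ** 2 * deff / (k_professionals * m_patients)
            z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
            return NormalDist().cdf(abs(delta) / sqrt(var_diff) - z_alpha)

        # Illustrative: standardized effect 0.5, ICC 0.05, 8 professionals per arm,
        # 12 patients per professional
        print(round(power_nested_two_arm(0.5, 1.0, 0.05, 8, 12), 3))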

  15. Light scattering from an atomic gas under conditions of quantum degeneracy

    NASA Astrophysics Data System (ADS)

    Porozova, V. M.; Gerasimov, L. V.; Havey, M. D.; Kupriyanov, D. V.

    2018-05-01

    Elastic light scattering from a macroscopic atomic sample existing in the Bose-Einstein condensate phase reveals a unique physical configuration of interacting light and matter waves. However, the joint coherent dynamics of the optical excitation induced by an incident photon is influenced by the presence of incoherent scattering channels. For a sample of sufficient length the excitation transports as a polariton wave and the propagation Green's function obeys the scattering equation which we derive. The polariton dynamics could be tracked in the outgoing channel of the scattered photon as we show via numerical solution of the scattering equation for one-dimensional geometry. The results are analyzed and compared with predictions of the conventional macroscopic Maxwell theory for light scattering from a nondegenerate atomic sample of the same density and size.

  16. Mating System and Effective Population Size of the Overexploited Neotropical Tree (Myroxylon peruiferum L.f.) and Their Impact on Seedling Production.

    PubMed

    Silvestre, Ellida de Aguiar; Schwarcz, Kaiser Dias; Grando, Carolina; de Campos, Jaqueline Bueno; Sujii, Patricia Sanae; Tambarussi, Evandro Vagner; Macrini, Camila Menezes Trindade; Pinheiro, José Baldin; Brancalion, Pedro Henrique Santin; Zucchi, Maria Imaculada

    2018-03-16

    The reproductive system of a tree species has substantial impact on genetic diversity and structure within and among natural populations. Such information should be considered when planning tree planting for forest restoration. Here, we describe the mating system and genetic diversity of an overexploited Neotropical tree, Myroxylon peruiferum L.f. (Fabaceae) sampled from a forest remnant (10 seed trees and 200 seeds) and assess whether the effective population size of nursery-grown seedlings (148 seedlings) is sufficient to prevent inbreeding depression in reintroduced populations. Genetic analyses were performed based on 8 microsatellite loci. M. peruiferum presented a mixed mating system with evidence of biparental inbreeding (t̂m - t̂s = 0.118). We found low levels of genetic diversity for M. peruiferum (allelic richness: 1.40 to 4.82; expected heterozygosity: 0.29 to 0.52). Based on Ne(v) within progeny, we suggest a sample size of 47 seed trees to achieve an effective population size of 100. The effective population sizes for the nursery-grown seedlings were much smaller (Ne = 27.54-34.86) than that recommended for short-term population conservation (Ne ≥ 100). Therefore, to obtain a reasonable genetic representation of native tree species and prevent problems associated with inbreeding depression, seedling production for restoration purposes may require a much larger sampling effort than is currently used, a problem that is further complicated by species with a mixed mating system. This study emphasizes the need to integrate species reproductive biology into seedling production programs and connect conservation genetics with ecological restoration.

  17. Oxidation behaviour of Fe-Ni alloy nanoparticles synthesized by thermal plasma route

    NASA Astrophysics Data System (ADS)

    Ghodke, Neha; Kamble, Shalaka; Raut, Suyog; Puranik, Shridhar; Bhoraskar, S. V.; Rayaprol, Sudhindra; Mathe, V. L.

    2018-04-01

    Here we report the synthesis of Fe-Ni nanoparticles using a thermal plasma route, in which gas-phase nucleation and growth occur at sufficiently high temperature. The synthesized Fe-Ni nanoparticles are examined by X-ray diffraction, Raman spectroscopy, vibrating sample magnetometry and thermogravimetric analysis (TGA). The 16-21 nm Fe-Ni nanoparticles, which show surface oxidation, exhibit a maximum magnetization of ~107 emu/g. The sample synthesized at relatively low power (4 kW) shows the presence of carbonaceous species, whereas the high-power (6 kW) synthesis does not. The presence of carbonaceous species significantly protects the nanoparticles from oxidation, as evidenced by the TGA data.

  18. Hydrodynamic Electron Flow and Hall Viscosity

    NASA Astrophysics Data System (ADS)

    Scaffidi, Thomas; Nandi, Nabhanila; Schmidt, Burkhard; Mackenzie, Andrew P.; Moore, Joel E.

    2017-06-01

    In metallic samples of small enough size and sufficiently strong momentum-conserving scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory. Furthermore, breaking time-reversal symmetry leads to the appearance of an odd component to the viscosity called the Hall viscosity, which has attracted considerable attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields.

  19. Hydrodynamic Electron Flow and Hall Viscosity.

    PubMed

    Scaffidi, Thomas; Nandi, Nabhanila; Schmidt, Burkhard; Mackenzie, Andrew P; Moore, Joel E

    2017-06-02

    In metallic samples of small enough size and sufficiently strong momentum-conserving scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory. Furthermore, breaking time-reversal symmetry leads to the appearance of an odd component to the viscosity called the Hall viscosity, which has attracted considerable attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields.

  20. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    PubMed Central

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  1. Targeted On-Demand Team Performance App Development

    DTIC Science & Technology

    2016-10-01

    from three sites; 6) Preliminary analysis indicates a larger-than-estimated effect size and the study is sufficiently powered for generalizable outcomes...statistical analyses, and examine any resulting qualitative data for trends or connections to statistical outcomes. On Schedule 21 Predictive...Preliminary analysis indicates a larger-than-estimated effect size and the study is sufficiently powered for generalizable outcomes. What opportunities for

  2. Mindfulness Meditation for Substance Use Disorders: A Systematic Review

    PubMed Central

    Zgierska, Aleksandra; Rabago, David; Chawla, Neharika; Kushner, Kenneth; Koehler, Robert; Marlatt, Allan

    2009-01-01

    Relapse is common in substance use disorders (SUDs), even among treated individuals. The goal of this article was to systematically review the existing evidence on mindfulness meditation-based interventions (MM) for SUDs. The comprehensive search for and review of literature found over 2,000 abstracts and resulted in 25 eligible manuscripts (22 published, 3 unpublished: 8 RCTs, 7 controlled non-randomized, 6 non-controlled prospective, 2 qualitative studies, 1 case report). When appropriate, methodological quality, absolute risk reduction, number needed to treat, and effect size (ES) were assessed. Overall, although preliminary evidence suggests MM efficacy and safety, conclusive data for MM as a treatment of SUDs are lacking. Significant methodological limitations exist in most studies. Further, it is unclear which persons with SUDs might benefit most from MM. Future trials must be of sufficient sample size to answer a specific clinical question and should target both assessment of effect size and mechanisms of action. PMID:19904664

  3. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of the implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. To evaluate whether the delayed-start design offers real advantages, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.

  4. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach

    PubMed Central

    Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert

    2015-01-01

    Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010

  5. Changing Social Networks Among Homeless Individuals: A Prospective Evaluation of a Job- and Life-Skills Training Program.

    PubMed

    Gray, Heather M; Shaffer, Paige M; Nelson, Sarah E; Shaffer, Howard J

    2016-10-01

    Social networks play important roles in mental and physical health among the general population. Building healthier social networks might contribute to the development of self-sufficiency among people struggling to overcome homelessness and substance use disorders. In this study of homeless adults completing a job- and life-skills program (i.e., the Moving Ahead Program at St. Francis House, Boston), we prospectively examined changes in social network quality, size, and composition. Among the sample of participants (n = 150), we observed positive changes in social network quality over time. However, social network size and composition did not change among the full sample. The subset of participants who reported abstaining from alcohol during the months before starting the program reported healthy changes in their social networks; specifically, while completing the program, they re-structured their social networks such that fewer members of their network used alcohol to intoxication. We discuss practical implications of these findings.

  6. The correlation between the number of eligible patients in routine clinical practice and the low recruitment level in clinical trials: a retrospective study using electronic medical records.

    PubMed

    Sumi, Eriko; Teramukai, Satoshi; Yamamoto, Keiichi; Satoh, Motohiko; Yamanaka, Kenya; Yokode, Masayuki

    2013-12-11

    A number of clinical trials have encountered difficulties enrolling a sufficient number of patients upon initiating the trial. Recently, many screening systems that search clinical data warehouses for patients who are eligible for clinical trials have been developed. We aimed to estimate the number of eligible patients using routine electronic medical records (EMRs) and to predict the difficulty of enrolling sufficient patients prior to beginning a trial. Investigator-initiated clinical trials that were conducted at Kyoto University Hospital between July 2004 and January 2011 were included in this study. We searched the EMRs for eligible patients and calculated the eligible EMR patient index by dividing the number of eligible patients in the EMRs by the target sample size. Additionally, we divided the trial eligibility criteria into corresponding data elements in the EMRs to evaluate the completeness of mapping clinical manifestation in trial eligibility criteria into structured data elements in the EMRs. We evaluated the correlation between the index and the accrual achievement with Spearman's rank correlation coefficient. Thirteen of 19 trials did not achieve their original target sample size. Overall, 55% of the trial eligibility criteria were mapped into data elements in EMRs. The accrual achievement demonstrated a significant positive correlation with the eligible EMR patient index (r = 0.67, 95% confidence interval (CI), 0.42 to 0.92). The receiver operating characteristic analysis revealed an eligible EMR patient index cut-off value of 1.7, with a sensitivity of 69.2% and a specificity of 100.0%. Our study suggests that the eligible EMR patient index remains exploratory but could be a useful component of the feasibility study when planning a clinical trial. Establishing a step to check whether there are likely to be a sufficient number of eligible patients enables sponsors and investigators to concentrate their resources and efforts on more achievable trials.
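
    The index itself is a simple ratio; a minimal sketch with hypothetical trial names and counts, using the 1.7 cut-off reported above, is:

        def eligible_emr_patient_index(n_eligible_in_emr, target_sample_size):
            """Number of eligible patients found in the EMRs divided by the target sample size."""
            return n_eligible_in_emr / target_sample_size

        # Hypothetical trials: (eligible patients found in the EMRs, target sample size)
        trials = {"trial_A": (120, 40), "trial_B": (30, 60)}
        CUTOFF = 1.7   # cut-off reported in the abstract (sensitivity 69.2%, specificity 100%)
        for name, (eligible, target) in trials.items():
            index = eligible_emr_patient_index(eligible, target)
            flag = "accrual likely achievable" if index >= CUTOFF else "accrual at risk"
            print(f"{name}: index = {index:.2f} -> {flag}")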

  7. Re-evaluating the link between brain size and behavioural ecology in primates.

    PubMed

    Powell, Lauren E; Isler, Karin; Barton, Robert A

    2017-10-25

    Comparative studies have identified a wide range of behavioural and ecological correlates of relative brain size, with results differing between taxonomic groups, and even within them. In primates for example, recent studies contradict one another over whether social or ecological factors are critical. A basic assumption of such studies is that with sufficiently large samples and appropriate analysis, robust correlations indicative of selection pressures on cognition will emerge. We carried out a comprehensive re-examination of correlates of primate brain size using two large comparative datasets and phylogenetic comparative methods. We found evidence in both datasets for associations between brain size and ecological variables (home range size, diet and activity period), but little evidence for an effect of social group size, a correlation which has previously formed the empirical basis of the Social Brain Hypothesis. However, reflecting divergent results in the literature, our results exhibited instability across datasets, even when they were matched for species composition and predictor variables. We identify several potential empirical and theoretical difficulties underlying this instability and suggest that these issues raise doubts about inferring cognitive selection pressures from behavioural correlates of brain size. © 2017 The Author(s).

  8. Characterizing and predicting species distributions across environments and scales: Argentine ant occurrences in the eye of the beholder

    USGS Publications Warehouse

    Menke, S.B.; Holway, D.A.; Fisher, R.N.; Jetz, W.

    2009-01-01

    Aim: Species distribution models (SDMs) or, more specifically, ecological niche models (ENMs) are a useful and rapidly proliferating tool in ecology and global change biology. ENMs attempt to capture associations between a species and its environment and are often used to draw biological inferences, to predict potential occurrences in unoccupied regions and to forecast future distributions under environmental change. The accuracy of ENMs, however, hinges critically on the quality of occurrence data. ENMs often use haphazardly collected data rather than data collected across the full spectrum of existing environmental conditions. Moreover, it remains unclear how processes affecting ENM predictions operate at different spatial scales. The scale (i.e. grain size) of analysis may be dictated more by the sampling regime than by biologically meaningful processes. The aim of our study is to jointly quantify how issues relating to region and scale affect ENM predictions using an economically important and ecologically damaging invasive species, the Argentine ant (Linepithema humile). Location: California, USA. Methods: We analysed the relationship between sampling sufficiency, regional differences in environmental parameter space and cell size of analysis and resampling environmental layers using two independently collected sets of presence/absence data. Differences in variable importance were determined using model averaging and logistic regression. Model accuracy was measured with area under the curve (AUC) and Cohen's kappa. Results: We first demonstrate that insufficient sampling of environmental parameter space can cause large errors in predicted distributions and biological interpretation. Models performed best when they were parametrized with data that sufficiently sampled environmental parameter space. Second, we show that altering the spatial grain of analysis changes the relative importance of different environmental variables. These changes apparently result from how environmental constraints and the sampling distributions of environmental variables change with spatial grain. Conclusions: These findings have clear relevance for biological inference. Taken together, our results illustrate potentially general limitations for ENMs, especially when such models are used to predict species occurrences in novel environments. We offer basic methodological and conceptual guidelines for appropriate sampling and scale matching. © 2009 The Authors. Journal compilation © 2009 Blackwell Publishing.

  9. Errors in Measuring Water Potentials of Small Samples Resulting from Water Adsorption by Thermocouple Psychrometer Chambers 1

    PubMed Central

    Bennett, Jerry M.; Cortes, Peter M.

    1985-01-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367

  10. Errors in measuring water potentials of small samples resulting from water adsorption by thermocouple psychrometer chambers.

    PubMed

    Bennett, J M; Cortes, P M

    1985-09-01

    The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.

  11. Socioeconomic Factors Influence Physical Activity and Sport in Quebec Schools.

    PubMed

    Morin, Pascale; Lebel, Alexandre; Robitaille, Éric; Bisset, Sherri

    2016-11-01

    School environments providing a wide selection of physical activities and sufficient facilities are both essential and formative to ensure young people adopt active lifestyles. We describe the association between school opportunities for physical activity and socioeconomic factors measured by low-income cutoff index, school size (number of students), and neighborhood population density. A cross-sectional survey using a 2-stage stratified sampling method built a representative sample of 143 French-speaking public schools in Quebec, Canada. Self-administered questionnaires collected data describing the physical activities offered and schools' sports facilities. Descriptive and bivariate analyses were performed separately for primary and secondary schools. In primary schools, school size was positively associated with more intramural and extracurricular activities, more diverse interior facilities, and activities promoting active transportation. Low-income primary schools were more likely to offer a single gym. Low-income secondary schools offered lower diversity of intramural activities and fewer exterior sporting facilities. High-income secondary schools with a large school size provided a greater number of opportunities, larger infrastructures, and a wider selection of physical activities than smaller low-income schools. Results reveal an overall positive association between school availability of physical and sport activity and socioeconomic factors. © 2016, American School Health Association.

  12. Shape variation in the human pelvis and limb skeleton: Implications for obstetric adaptation.

    PubMed

    Kurki, Helen K; Decrausaz, Sarah-Louise

    2016-04-01

    Under the obstetrical dilemma (OD) hypothesis, selection acts on the human female pelvis to ensure a sufficiently sized obstetric canal for birthing a large-brained, broad shouldered neonate, while bipedal locomotion selects for a narrower and smaller pelvis. Despite this female-specific stabilizing selection, variability of linear dimensions of the pelvic canal and overall size are not reduced in females, suggesting shape may instead be variable among females of a population. Female canal shape has been shown to vary among populations, while male canal shape does not. Within this context, we examine within-population canal shape variation in comparison with that of noncanal aspects of the pelvis and the limbs. Nine skeletal samples (total female n = 101, male n = 117) representing diverse body sizes and shapes were included. Principal components analysis was applied to size-adjusted variables of each skeletal region. A multivariate variance was calculated using the weighted PC scores for all components in each model and F-ratios used to assess differences in within-population variances between sexes and skeletal regions. Within both sexes, multivariate canal shape variance is significantly greater than noncanal pelvis and limb variances, while limb variance is greater than noncanal pelvis variance in some populations. Multivariate shape variation is not consistently different between the sexes in any of the skeletal regions. Diverse selective pressures, including obstetrics, locomotion, load carrying, and others may act on canal shape, as well as genetic drift and plasticity, thus increasing variation in morphospace while protecting obstetric sufficiency. © 2015 Wiley Periodicals, Inc.

  13. Structural difference rule for amorphous alloy formation by ion mixing

    NASA Technical Reports Server (NTRS)

    Liu, B.-X.; Johnson, W. L.; Nicolet, M.A.; Lau, S. S.

    1983-01-01

    A rule is formulated which establishes a sufficient condition that an amorphous binary alloy will be formed by ion mixing of multilayered samples when the two constituent metals are of different crystalline structure, regardless of their atomic sizes and electronegativities. The rule is supported by the experimental results obtained on six selected binary metal systems, as well as by the previous data reported in the literature. The amorphization mechanism is discussed in terms of the competition between two different structures resulting in frustration of the crystallization process.

  14. Ultrasonic characterization of single drops of liquids

    DOEpatents

    Sinha, D.N.

    1998-04-14

    Ultrasonic characterization of single drops of liquids is disclosed. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities. 5 figs.

  15. Ultrasonic characterization of single drops of liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinha, D.N.

    Ultrasonic characterization of single drops of liquids is disclosed. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities. 5 figs.

  16. Comparative analysis of laparoscopic and ultrasound-guided biopsy methods for gene expression analysis in transgenic goats.

    PubMed

    Melo, C H; Sousa, F C; Batista, R I P T; Sanchez, D J D; Souza-Fabjan, J M G; Freitas, V J F; Melo, L M; Teixeira, D I A

    2015-07-31

    The present study aimed to compare laparoscopic (LP) and ultrasound-guided (US) biopsy methods to obtain either liver or splenic tissue samples for ectopic gene expression analysis in transgenic goats. Tissue samples were collected from human granulocyte colony stimulating factor (hG-CSF)-transgenic bucks and submitted to real-time PCR for the endogenous genes (Sp1, Baff, and Gapdh) and the transgene (hG-CSF). Both LP and US biopsy methods were successful in obtaining liver and splenic samples that could be analyzed by PCR (i.e., sufficient sample sizes and RNA yield were obtained). Although the number of attempts made to obtain the tissue samples was similar (P > 0.05), LP procedures took considerably longer than the US method (P = 0.03). Finally, transgene transcripts were not detected in spleen or liver samples. Thus, for the phenotypic characterization of a transgenic goat line, investigation of ectopic gene expression can be made successfully by LP or US biopsy, avoiding the traditional approach of euthanasia.

  17. Drying regimes in homogeneous porous media from macro- to nanoscale

    NASA Astrophysics Data System (ADS)

    Thiery, J.; Rodts, S.; Weitz, D. A.; Coussot, P.

    2017-07-01

    Magnetic resonance imaging visualization down to nanometric liquid films in model porous media with pore sizes from micro- to nanometers enables one to fully characterize the physical mechanisms of drying. For pore size larger than a few tens of nanometers, we identify an initial constant drying rate period, probing homogeneous desaturation, followed by a falling drying rate period. This second period is associated with the development of a gradient in saturation underneath the sample free surface that initiates the inward recession of the contact line. During this latter stage, the drying rate varies in accordance with vapor diffusion through the dry porous region, possibly affected by the Knudsen effect for small pore size. However, we show that for sufficiently small pore size and/or saturation the drying rate is increasingly reduced by the Kelvin effect. Subsequently, we demonstrate that this effect governs the kinetics of evaporation in nanopores as a homogeneous desaturation occurs. Eventually, under our experimental conditions, we show that the saturation unceasingly decreases in a homogeneous manner throughout the wet regions of the medium regardless of pore size or drying regime considered. This finding suggests the existence of continuous liquid flow towards the interface of higher evaporation, down to very low saturation or very small pore size. Paradoxically, even if this net flow is unidirectional and capillary driven, it corresponds to a series of diffused local capillary equilibrations over the full height of the sample, which might explain that a simple Darcy's law model does not predict the effect of scaling of the net flow rate on the pore size observed in our tests.

  18. Retention of Ancestral Genetic Variation Across Life-Stages of an Endangered, Long-Lived Iteroparous Fish.

    PubMed

    Carson, Evan W; Turner, Thomas F; Saltzgiver, Melody J; Adams, Deborah; Kesner, Brian R; Marsh, Paul C; Pilger, Tyler J; Dowling, Thomas E

    2016-11-01

    As with many endangered, long-lived iteroparous fishes, survival of razorback sucker depends on a management strategy that circumvents recruitment failure that results from predation by non-native fishes. In Lake Mohave, AZ-NV, management of razorback sucker centers on capture of larvae spawned in the lake, rearing them in off-channel habitats, and subsequent release ("repatriation") to the lake when adults are sufficiently large to resist predation. The effects of this strategy on genetic diversity, however, remained uncertain. After correction for differences in sample size among groups, metrics of mitochondrial DNA (mtDNA; number of haplotypes, NH, and haplotype diversity, HD) and microsatellite (number of alleles, NA, and expected heterozygosity, HE) diversity did not differ significantly between annual samples of repatriated adults and larval year-classes or among pooled samples of repatriated adults, larvae, and wild fish. These findings indicate that the current management program has thus far maintained historical genetic variation of razorback sucker in the lake. Because effective population size, Ne, is closely tied to the small census population size (Nc = ~1500-3000) of razorback sucker in Lake Mohave, this population will remain at genetic as well as demographic risk of extinction unless Nc is increased substantially. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. A Note on Monotonicity Assumptions for Exact Unconditional Tests in Binary Matched-pairs Designs

    PubMed Central

    Li, Xiaochun; Liu, Mengling; Goldberg, Judith D.

    2011-01-01

    Summary Exact unconditional tests have been widely applied to test the difference between two probabilities for 2×2 matched-pairs binary data with small sample size. In this context, Lloyd (2008, Biometrics 64, 716–723) proposed an E + M p-value, which showed better performance than the existing M and C p-values. However, the analytical calculation of the E + M p-value requires that the Barnard convexity condition be satisfied; this can be challenging to prove theoretically. In this paper, by a simple reformulation, we show that a weaker condition, conditional monotonicity, is sufficient to calculate all three p-values (M, C and E + M) and their corresponding exact sizes. Moreover, this conditional monotonicity condition is applicable to non-inferiority tests. PMID:21466507
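
    A minimal sketch of an M p-value for matched pairs, maximizing the exact tail probability of a McNemar-type statistic over the nuisance discordance probability, is given below. It is a generic illustration of the unconditional approach, not Lloyd's E + M procedure; the statistic and the nuisance-parameter grid are assumptions.

        from math import comb, sqrt

        def mcnemar_stat(x12, x21):
            """Unsigned McNemar statistic |x12 - x21| / sqrt(x12 + x21)."""
            return 0.0 if x12 + x21 == 0 else abs(x12 - x21) / sqrt(x12 + x21)

        def m_p_value(n, x12_obs, x21_obs, grid=400):
            """Exact unconditional (M) p-value for H0: p12 = p21 with n matched pairs,
            maximized over the common discordance probability p = p12 = p21."""
            t_obs = mcnemar_stat(x12_obs, x21_obs)
            p_max = 0.0
            for step in range(1, grid):
                p = 0.5 * step / grid                  # 0 < p < 0.5 so that 2p <= 1
                tail = 0.0
                for x12 in range(n + 1):
                    for x21 in range(n - x12 + 1):
                        if mcnemar_stat(x12, x21) >= t_obs:
                            tail += (comb(n, x12) * comb(n - x12, x21)
                                     * p ** (x12 + x21) * (1 - 2 * p) ** (n - x12 - x21))
                p_max = max(p_max, tail)
            return p_max

        # Example: 20 pairs, 7 discordant in one direction and 2 in the other
        print(round(m_p_value(20, 7, 2), 4))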

  20. Studies on the primary structure of short polysaccharides using SEC MALDI mass spectroscopy.

    PubMed

    Garozzo, D; Spina, E; Cozzolino, R; Cescutti, P; Fett, W F

    2000-01-12

    The introduction of size-exclusion chromatography (SEC) analysis of polysaccharides prior to MALDI mass spectroscopy allows the determination of the molecular mass of the repeating unit when neutral homopolymers are investigated. In the case of natural polysaccharides characterised by more complicated structural features (presence of non-carbohydrate substituents, charged groups, etc.), this mass value is usually in agreement with more than one sugar composition. Therefore, it is not sufficient to give the correct monosaccharidic composition of the polysaccharide investigated. To solve this problem, MALDI spectra were recorded on the permethylated sample and post-source decay experiments were performed on precursor ions. In this way, the composition (in terms of Hex, HexNAc, etc.), size and sequence of the repeating unit were determined.

  1. Concentration and separation of biological organisms by ultrafiltration and dielectrophoresis

    DOEpatents

    Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.

    2010-10-12

    Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.

  2. Production of staphylococcal enterotoxin A in cream-filled cake.

    PubMed

    Anunciaçao, L L; Linardi, W R; do Carmo, L S; Bergdoll, M S

    1995-07-01

    Cakes were baked with normal ingredients and filled with cream that had been inoculated with enterotoxigenic staphylococci at different inoculum sizes. Samples of the cakes were held either at room temperature or in the refrigerator. Samples of cake and filling were taken at different times and analyzed for staphylococcal count and presence of enterotoxin. The smaller the inoculum, the longer the time required for sufficient growth (10^6) to occur for production of detectable enterotoxin. Enterotoxin added to the cake dough before baking (210 degrees C, 45 min) did not survive the baking. The presence of enterotoxin in the contaminated cream filling indicated it as the cause of staphylococcal food poisoning from cream-filled cakes. Refrigeration of the cakes prevented the growth of the staphylococci.

  3. Synthesis of nanoparticles in a flame aerosol reactor with independent and strict control of their size, crystal phase and morphology

    NASA Astrophysics Data System (ADS)

    Jiang, Jingkun; Chen, Da-Ren; Biswas, Pratim

    2007-07-01

    A flame aerosol reactor (FLAR) was developed to synthesize nanoparticles with desired properties (crystal phase and size) that could be independently controlled. The methodology was demonstrated for TiO2 nanoparticles, and this is the first time that large sets of samples with the same size but different crystal phases (six different ratios of anatase to rutile in this work) were synthesized. The degree of TiO2 nanoparticle agglomeration was determined by comparing the primary particle size distribution measured by scanning electron microscopy (SEM) to the mobility-based particle size distribution measured by online scanning mobility particle spectrometry (SMPS). By controlling the flame aerosol reactor conditions, both spherical unagglomerated particles and highly agglomerated particles were produced. To produce monodisperse nanoparticles, a high throughput multi-stage differential mobility analyser (MDMA) was used in series with the flame aerosol reactor. Nearly monodisperse nanoparticles (geometric standard deviation less than 1.05) could be collected in sufficient mass quantities (of the order of 10 mg) in reasonable time (1 h) that could be used in other studies such as determination of functionality or biological effects as a function of size.

  4. Raman microscopy of size-segregated aerosol particles, collected at the Sonnblick Observatory in Austria

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Kasper-Giebl, Anneliese; Kistler, Magdalena; Matzl, Julia; Schauer, Gerhard; Hitzenberger, Regina; Lohninger, Johann; Lendl, Bernhard

    2014-05-01

    Size-classified aerosol samples were collected using low pressure impactors in July 2013 at the high alpine background site Sonnblick. The Sonnblick Observatory is located in the Austrian Alps, at the summit of Sonnblick, 3100 m asl. Sampling was performed in parallel on the platform of the Observatory and after the aerosol inlet. The inlet is constructed as a whole air inlet and is operated at an overall sampling flow of 137 lpm and heated to 30 °C. Size cuts of the eight-stage low pressure impactors were from 0.1 to 12.8 µm a.d. Alumina foils were used as sample substrates for the impactor stages. In addition to the size-classified aerosol sampling, overall aerosol mass (Sharp Monitor 5030, Thermo Scientific) and number concentrations (TSI, CPC 3022a; TCC-3, Klotz) were determined. A Horiba LabRam 800HR Raman microscope was used for vibrational mapping of an area of about 100 µm x 100 µm of the alumina foils at a resolution of about 0.5 µm. The Raman microscope is equipped with a laser with an excitation wavelength of 532 nm and a grating with 300 gr/mm. The optical images and the related chemical images were combined, and a chemometric investigation of the combined images was carried out using the software package Imagelab (Epina Software Labs). Based on the well-known environment, a basic assignment of Raman signals of single particles is possible with sufficient certainty. Main aerosol constituents such as sulfates, black carbon and mineral particles could be identified. First results of the chemical imaging of size-segregated aerosol collected at the Sonnblick Observatory will be discussed with respect to standardized long-term measurements at the sampling station. Further, advantages and disadvantages of chemical imaging with subsequent chemometric investigation of the single images will be discussed and compared to the established methods of aerosol analysis. The chemometric analysis of the dataset is focused on mixing and variation of single compounds at different stages of the impactors.

  5. Challenges to a molecular approach to prey identification in the Burmese python, Python molurus bivittatus

    USGS Publications Warehouse

    Falk, Bryan; Reed, Robert N.

    2015-01-01

    Molecular approaches to prey identification are increasingly useful in elucidating predator–prey relationships, and we aimed to investigate the feasibility of these methods to document the species identities of prey consumed by invasive Burmese pythons in Florida. We were particularly interested in the diet of young snakes, because visual identification of prey from this size class has proven difficult. We successfully extracted DNA from the gastrointestinal contents of 43 young pythons, as well as from several control samples, and attempted amplification of DNA mini-barcodes, a 130-bp region of COX1. Using a PNA clamp to exclude python DNA, we found that prey DNA was not present in sufficient quality for amplification of this locus in 86% of our samples. All samples from the GI tracts of young pythons contained only hair, and the six samples we were able to identify to species were hispid cotton rats. This suggests that young Burmese pythons prey predominantly on small mammals and that prey diversity among snakes of this size class is low. We discuss prolonged gastrointestinal transit times and extreme gastric breakdown as possible causes of the DNA degradation that limits the success of a molecular approach to prey identification in Burmese pythons.

  6. A laser-deposition approach to compositional-spread discovery of materials on conventional sample sizes

    NASA Astrophysics Data System (ADS)

    Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  7. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    PubMed

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

    In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adapting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations using the QM potential for a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, a sampling 100 times longer than that of conventional QM-based samplings. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
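
    The diffusion coefficient mentioned above is conventionally extracted from the slope of the mean squared displacement via the Einstein relation (MSD ≈ 6Dt in three dimensions). The sketch below illustrates that bookkeeping on a synthetic random walk standing in for a QM/MM trajectory; the time step and step size are arbitrary assumptions.

    # Sketch: diffusion coefficient from the MSD slope (Einstein relation).
    # The trajectory is a synthetic 3D random walk, not SCMP QM/MM output.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.001                                        # ns per frame (assumed)
    steps = rng.normal(0.0, 0.05, size=(10000, 3))    # nm displacements per frame
    traj = np.cumsum(steps, axis=0)                   # one ion's trajectory

    lags = np.arange(1, 200)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                    for lag in lags])
    slope, _ = np.polyfit(lags * dt, msd, 1)          # MSD = 6 D t
    print("D ≈", slope / 6, "nm^2/ns")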

  8. Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution

    USGS Publications Warehouse

    Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.

    2005-01-01

    We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
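
    For context, a minimal sketch of the type of calculation that underlies such comparisons: a two-pass removal estimate of abundance (the Seber and Le Cren form, assuming equal catchability across passes) computed alongside the single-pass (first-pass) catch for hypothetical habitat units.

    # Sketch: two-pass removal estimate of abundance compared with the
    # first-pass catch alone, for hypothetical unit-scale counts.
    def removal_estimate(c1, c2):
        """Two-pass removal estimator; requires c1 > c2."""
        if c1 <= c2:
            raise ValueError("removal estimator undefined when c1 <= c2")
        p_hat = (c1 - c2) / c1          # per-pass capture probability
        n_hat = c1 ** 2 / (c1 - c2)     # estimated abundance
        return n_hat, p_hat

    units = [(34, 9), (12, 4), (51, 15), (8, 1)]   # (pass 1, pass 2) catches
    for c1, c2 in units:
        n_hat, p_hat = removal_estimate(c1, c2)
        print(f"pass-1 catch {c1:3d}  ->  N_hat {n_hat:6.1f}  (p_hat {p_hat:.2f})")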

  9. Evaluating manta ray mucus as an alternative DNA source for population genetics study: underwater-sampling, dry-storage and PCR success.

    PubMed

    Kashiwagi, Tom; Maxwell, Elisabeth A; Marshall, Andrea D; Christensen, Ana B

    2015-01-01

    Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on the cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing.

  10. Elemental analysis of size-fractionated particulate matter sampled in Göteborg, Sweden

    NASA Astrophysics Data System (ADS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-12-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Göteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 µm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  11. Evaluating manta ray mucus as an alternative DNA source for population genetics study: underwater-sampling, dry-storage and PCR success

    PubMed Central

    Maxwell, Elisabeth A.; Marshall, Andrea D.; Christensen, Ana B.

    2015-01-01

    Sharks and rays are increasingly being identified as high-risk species for extinction, prompting urgent assessments of their local or regional populations. Advanced genetic analyses can contribute relevant information on effective population size and connectivity among populations, although acquiring sufficient regional sample sizes can be challenging. DNA is typically amplified from tissue samples, which are collected by hand spears with modified biopsy punch tips. This technique is not always popular, due mainly to a perception that invasive sampling might harm the rays, change their behaviour, or have a negative impact on tourism. To explore alternative methods, we evaluated the yields and PCR success of DNA template prepared from manta ray mucus collected underwater and captured and stored on a Whatman FTA™ Elute card. The pilot study demonstrated that mucus can be effectively collected underwater using a toothbrush. DNA stored on the cards was found to be reliable for PCR-based population genetics studies. We successfully amplified mtDNA ND5, nuclear DNA RAG1, and microsatellite loci for all samples and confirmed that the sequences and genotypes were those of the target species. As the yields of DNA with the tested method were low, further improvements are desirable for assays that may require larger amounts of DNA, such as population genomic studies using emerging next-gen sequencing. PMID:26413431

  12. Low energy cyclotron for radiocarbon dating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, J.J.

    1984-12-01

    The measurement of naturally occurring radioisotopes whose half lives are less than a few hundred million years but more than a few years provides information about the temporal behavior of geologic and climatic processes, the temporal history of meteoritic bodies as well as the production mechanisms of these radioisotopes. A new extremely sensitive technique for measuring these radioisotopes at tandem Van de Graaff and cyclotron facilities has been very successful though the high cost and limited availability have been discouraging. We have built and tested a low energy cyclotron for radiocarbon dating similar in size to a conventional mass spectrometer. These tests clearly show that with the addition of a conventional ion source, the low energy cyclotron can perform the extremely high sensitivity 14C measurements that are now done at accelerator facilities. We found that no significant background is present when the cyclotron is tuned to accelerate 14C negative ions and the transmission efficiency is adequate to perform radiocarbon dating on milligram samples of carbon. The internal ion source used did not produce sufficient current to detect 14C directly at modern concentrations. We show how a conventional carbon negative ion source, located outside the cyclotron magnet, would produce sufficient beam and provide for quick sampling to make radiocarbon dating milligram samples with a modest laboratory instrument feasible.

  13. A sampling strategy for promoting and assessing medical student retention of physical examination skills.

    PubMed

    Williams, Reed G; Klamen, Debra L; Mayer, David; Valaski, Maureen; Roberts, Nicole K

    2007-10-01

    Skill acquisition and maintenance requires spaced deliberate practice. Assessing medical students' physical examination performance ability is resource intensive. The authors assessed the nature and size of physical examination performance samples necessary to accurately estimate total physical examination skill. Physical examination assessment data were analyzed from second year students at the University of Illinois College of Medicine at Chicago in 2002, 2003, and 2004 (N = 548). Scores on subgroups of physical exam maneuvers were compared with scores on the total physical exam, to identify sound predictors of total test performance. Five exam subcomponents were sufficiently correlated to overall test performance and provided adequate sensitivity and specificity to serve as a means to prompt continued student review and rehearsal of physical examination technical skills. Selection and administration of samples of the total physical exam provide a resource-saving approach for promoting and estimating overall physical examination skills retention.

  14. Nanomachining by rubbing at ultrasonic frequency under controlled shear force.

    PubMed

    Muraoka, Mikio

    2011-03-01

    This study proposes a new method of proximal-probe machining that uses a rubbing process by introducing concentrated-mass (CM) cantilevers. At the second resonance of the CM cantilever vibration, the tip site of the cantilever becomes a node of the standing deflection wave because of the sufficient inertia of the attached concentrated mass. The tip makes a cyclic motion that is tangential to the sample surface, not vertical to it, as in a tapping motion. This lateral tip motion that is selectively excited by CM cantilevers was effective for the material modification of a sample due to the friction between the tip and the sample. Imaging and nanomachining under controlled shear force were demonstrated by means of the modified cantilever and a normal atomic force microscope. We were able to write a micron-sized letter "Z" having a line width of 30-100 nm on a resin surface.

  15. Pediatric anthropometrics are inconsistent with current guidelines for assessing rider fit on all-terrain vehicles.

    PubMed

    Bernard, Andrew C; Mullineaux, David R; Auxier, James T; Forman, Jennifer L; Shapiro, Robert; Pienkowski, David

    2010-07-01

    This study sought to establish objective anthropometric measures of fit or misfit for young riders on adult and youth-sized all-terrain vehicles and use these metrics to test the unproved historical reasoning that age alone is a sufficient measure of rider-ATV fit. Male children (6-11 years, n=8; and 12-15 years, n=11) were selected by convenience sampling. Rider-ATV fit was quantified by five measures adapted from published recommendations: (1) standing-seat clearance, (2) hand size, (3) foot vs. foot-brake position, (4) elbow angle, and (5) handlebar-to-knee distance. Youths aged 12-15 years fit the adult-sized ATV better than the ATV Safety Institute recommended age-appropriate youth model (63% of subjects fit all 5 measures on adult-sized ATV vs. 20% on youth-sized ATV). Youths aged 6-11 years fit poorly on ATVs of both sizes (0% fit all 5 parameters on the adult-sized ATV vs 12% on the youth-sized ATV). The ATV Safety Institute recommends rider-ATV fit according to age and engine displacement, but no objective data linking age or anthropometrics with ATV engine or frame size has been previously published. Age alone is a poor predictor of rider-ATV fit; the five metrics used offer an improvement compared to current recommendations. Copyright 2010 Elsevier Ltd. All rights reserved.

  16. Improved silicon nitride for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Yeh, H. C.; Wimmer, J. M.; Huang, H. H.; Rorabaugh, M. E.; Schienle, J.; Styhr, K. H.

    1985-01-01

    The AiResearch Casting Company baseline silicon nitride (92 percent GTE SN-502 Si3N4 plus 6 percent Y2O3 plus 2 percent Al2O3) was characterized with methods that included chemical analysis, oxygen content determination, electrophoresis, particle size distribution analysis, surface area determination, and analysis of the degree of agglomeration and maximum particle size of elutriated powder. Test bars were injection molded and processed through sintering at 0.68 MPa (100 psi) of nitrogen. The as-sintered test bars were evaluated by X-ray phase analysis, room and elevated temperature modulus of rupture strength, Weibull modulus, stress rupture, strength after oxidation, fracture origins, microstructure, and density, from quantities of samples sufficiently large to generate statistically valid results. A series of small test matrices was conducted to study the effects and interactions of processing parameters, which included raw materials, binder systems, binder removal cycles, injection molding temperatures, particle size distribution, sintering additives, and sintering cycle parameters.

  17. Many Molecular Properties from One Kernel in Chemical Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Corresponding molecular reference properties provided, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112 kilo organic molecules of similar size. Resulting models are discussed, as well as the kernels' use for generating and using other property models.
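
    A minimal sketch of the single-kernel idea described above: one kernel matrix is built and solved once, and the same regularized solve is reused for every property column (kernel ridge regression). The random descriptors and property values below are placeholders for real molecular representations and reference data.

    # Sketch: one kernel, many properties. A single Gaussian kernel matrix is
    # factorized once and reused for every property column.
    import numpy as np

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(200, 30))     # molecular descriptors (placeholder)
    Y_train = rng.normal(size=(200, 5))      # 5 properties per molecule (placeholder)
    X_test = rng.normal(size=(20, 30))

    def gaussian_kernel(A, B, sigma=5.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    lam = 1e-6
    K = gaussian_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_train)   # one solve
    Y_pred = gaussian_kernel(X_test, X_train) @ alpha                  # all properties
    print(Y_pred.shape)   # (20, 5): every property from the same kernel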

  18. Copper Nanoparticles: Synthesis and Biological Activity

    NASA Astrophysics Data System (ADS)

    Satyvaldiev, A. S.; Zhasnakunov, Z. K.; Omurzak, E.; Doolotkeldieva, T. D.; Bobusheva, S. T.; Orozmatova, G. T.; Kelgenbaeva, Z.

    2018-01-01

    By means of XRD and FESEM analysis, it is established that copper nanoparticles with sizes of less than 10 nm are formed during chemical reduction and aggregate into mainly spherical clusters. The presence of gelatin during the chemical reduction of copper induced the formation of nanoparticles with a smaller size distribution than that of nanoparticles synthesized without gelatin, which can be related to the formation of a protective layer. The synthesized Cu nano-powders have sufficiently high activity against the bacterium Erwinia amylovora, and the inhibition of bacterial growth depends on the Cu nanoparticle concentration. At a concentration of 5 mg/ml of Cu nanoparticles, the pathogen growth inhibition zone reaches its maximum value within 72 hours and the lysis zone is 20 mm, whereas at a concentration of 1 mg/ml this value is 16 mm, which also indicates significant antibacterial activity of this sample.

  19. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving reduced kernel-based SLFNs. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications in the contexts of binary classification, multi-class problems and regression are then reported to show that RKELM can achieve a level of generalization performance competitive with the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
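
    A minimal sketch of the reduced-kernel idea as described above: a random subset of training samples serves as mapping points, the kernel is evaluated only against that subset, and the output weights come from a single regularized least-squares solve. The toy data, RBF kernel width, and regularization constant are arbitrary assumptions, not settings from the paper.

    # Sketch: kernel features computed only against a random subset of the
    # training samples, followed by one regularized least-squares solve.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)   # toy binary target

    def rbf(A, B, gamma=0.1):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    m = 50                                            # number of mapping samples
    support = X[rng.choice(len(X), size=m, replace=False)]
    K = rbf(X, support)                               # (n, m) reduced kernel matrix
    C = 100.0                                         # regularization constant
    beta = np.linalg.solve(K.T @ K + np.eye(m) / C, K.T @ y)   # output weights
    pred = (rbf(X, support) @ beta > 0.5).astype(float)
    print("training accuracy:", (pred == y).mean())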

  20. Global preamplification simplifies targeted mRNA quantification

    PubMed Central

    Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders

    2017-01-01

    The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609

  1. Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ±25 percent of the mean could be achieved with five counts per factor level. The sample size sufficient to detect actual differences for the Wood Thrush (Hylocichla mustelina) was >200 counts, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
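
    For orientation, the kind of calculation such minimum sample sizes rest on is the standard normal-approximation formula for comparing two means with common variance (here taken as the reported MSE values). The detectable differences in the sketch below are arbitrary illustrative choices, not values from the paper.

    # Sketch: per-group sample size to detect a difference delta between two
    # means with common variance MSE (two-sided test, normal approximation).
    from scipy.stats import norm

    def n_per_group(mse, delta, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * z ** 2 * mse / delta ** 2

    for mse, label in [(9.28, "total individuals"), (3.79, "species")]:
        for delta in (1.0, 2.0, 3.0):
            print(f"{label:17s} delta={delta:.1f}: "
                  f"n ≈ {n_per_group(mse, delta):5.1f} counts per level")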

  2. Method for concentration and separation of biological organisms by ultrafiltration and dielectrophoresis

    DOEpatents

    Simmons, Blake A.; Hill, Vincent R.; Fintschenko, Yolanda; Cummings, Eric B.

    2012-09-04

    Disclosed is a method for monitoring sources of public water supply for a variety of pathogens by using a combination of ultrafiltration techniques together with dielectrophoretic separation techniques. Because water-borne pathogens, whether present due to "natural" contamination or intentional introduction, would likely be present in drinking water at low concentrations when samples are collected for monitoring or outbreak investigations, an approach is needed to quickly and efficiently concentrate and separate particles such as viruses, bacteria, and parasites in large volumes of water (e.g., 100 L or more) while simultaneously reducing the sample volume to levels sufficient for detecting low concentrations of microbes (e.g., <10 mL). The technique is also designed to screen the separated microbes based on specific conductivity and size.

  3. Hydrodynamic Electron Flow and Hall Viscosity

    NASA Astrophysics Data System (ADS)

    Scaffidi, Thomas; Moll, Philip; Kushwaha, Pallavi; Nandi, Nabhanila; Schmidt, Burkhard; MacKenzie, Andrew; Moore, Joel

    In metallic samples of sufficiently small size and with sufficiently strong electron-electron scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory, for which bounds on diffusion were conjectured based on a holographic correspondence. Furthermore, breaking time-reversal symmetry can lead to the appearance of an odd component of the viscosity, called the Hall viscosity, which has attracted a lot of attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields. Gordon and Betty Moore Foundation.

  4. Study of Evaporation Rate of Water in Hydrophobic Confinement using Forward Flux Sampling

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Debenedetti, Pablo G.

    2012-02-01

    Drying of hydrophobic cavities is of interest in understanding biological self assembly, protein stability and opening and closing of ion channels. Liquid-to-vapor transition of water in confinement is associated with large kinetic barriers which preclude its study using conventional simulation techniques. Using forward flux sampling to study the kinetics of the transition between two hydrophobic surfaces, we show that a) the free energy barriers to evaporation scale linearly with the distance between the two surfaces, d; b) the evaporation rates increase as the lateral size of the surfaces, L increases, and c) the transition state to evaporation for sufficiently large L is a cylindrical vapor cavity connecting the two hydrophobic surfaces. Finally, we decouple the effects of confinement geometry and surface chemistry on the evaporation rates.

  5. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    PubMed

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this method is important for health workforce planners to know, for example if they want to apply the method to target groups that are hard to reach or if fewer resources are available. For this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data obtained from the GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs strongly varied according to the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon different circumstances, such as the target group and the budget available.
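
    A minimal simulation sketch of the two sources of fluctuation discussed above, between-GP variation and within-GP measurement variation, and of how the CI half-width for mean weekly hours shrinks with more GPs and more SMS measurements per GP. The true mean and both variance components are invented numbers, not estimates from the study.

    # Sketch: CI half-width for mean weekly hours as a function of the number
    # of GPs and the number of time-sampling measurements per GP.
    import numpy as np

    rng = np.random.default_rng(42)
    true_mean, sd_between, sd_within = 45.0, 8.0, 12.0   # hours (assumed)

    def ci_halfwidth(n_gps, n_measurements, reps=500):
        means = []
        for _ in range(reps):
            gp_means = rng.normal(true_mean, sd_between, size=n_gps)
            obs = rng.normal(gp_means[:, None], sd_within,
                             size=(n_gps, n_measurements))
            means.append(obs.mean())
        return 1.96 * np.std(means)

    for n_gps in (10, 50, 100, 300):
        for n_meas in (56, 168):   # ~one SMS per 3 h vs one per hour over a week
            print(n_gps, n_meas, round(ci_halfwidth(n_gps, n_meas), 2))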

  6. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world population, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
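
    A minimal simulation sketch of the discrimination argument: genotypes at 30 biallelic SNPs with allele frequencies near 0.5 (mimicking maximally informative, "optimized" SNPs under Hardy-Weinberg proportions), pairwise Hamming distances over a random subset of pairs, and the frequency of identical profiles. The population size and allele frequencies are illustrative assumptions.

    # Sketch: discrimination from k biallelic SNPs. Genotypes are coded 0/1/2
    # under Hardy-Weinberg proportions; distances are computed over a random
    # subset of pairs rather than all ~5e9 pairs.
    import numpy as np

    rng = np.random.default_rng(7)
    n_individuals, k_snps = 100_000, 30
    maf = np.full(k_snps, 0.5)                      # assumed "optimized" frequencies
    geno = rng.binomial(2, maf, size=(n_individuals, k_snps))

    pairs = rng.integers(0, n_individuals, size=(200_000, 2))
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    hamming = (geno[pairs[:, 0]] != geno[pairs[:, 1]]).sum(axis=1)

    print("mean Hamming distance:", hamming.mean())
    print("fraction of identical genotype profiles:", (hamming == 0).mean())

    Under these assumptions, the expected pairwise distance is 30 × 0.625 ≈ 18.75, in line with the value reported above.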

  7. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).

  8. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Keane; Xiao-Chun Shi; Tong-man Ong

    The project staff partnered with Costas Sioutas from the University of Southern California to apply the VACES (Versatile Aerosol Concentration Enhancement System) to a diesel engine test facility at West Virginia University Department of Mechanical Engineering and later the NIOSH Lake Lynn Mine facility. The VACES system was able to allow diesel exhaust particulate matter (DPM) to grow to sufficient particle size to be efficiently collected with the SKC Biosampler impinger device, directly into a suspension of simulated pulmonary surfactant. At the WVU-MAE facility, the concentration of the aerosol was too high to allow efficient use of the VACES concentration enhancement, although aerosol collection was successful. Collection at the LLL was excellent with the diluted exhaust stream. In excess of 50 samples were collected at the LLL facility, along with matching filter samples, at multiple engine speed and load conditions. Replicate samples were combined and concentration increased using a centrifugal concentrator. Bioassays were negative for all tested samples, but this is believed to be due to insufficient concentration in the final assay suspensions.

  10. State-of-the-art practices in farmland biodiversity monitoring for North America and Europe.

    PubMed

    Herzog, Felix; Franklin, Janet

    2016-12-01

    Policy makers and farmers need to know the status of farmland biodiversity in order to meet conservation goals and evaluate management options. Based on a review of 11 monitoring programs in Europe and North America and on related literature, we identify the design choices or attributes of a program that balance monitoring costs and usefulness for stakeholders. A useful program monitors habitats, vascular plants, and possibly faunal groups (ecosystem service providers, charismatic species) using a stratified random sample of the agricultural landscape, including marginal and intensive regions. The size of landscape samples varies with the grain of the agricultural landscape; for example, samples are smaller in Europe and larger in North America. Raw data are collected in a rolling survey, which distributes sampling over several years. Sufficient practical experience is now available to implement broad monitoring schemes on both continents. Technological developments in remote sensing, metagenomics, and social media may offer new opportunities for affordable farmland biodiversity monitoring and help to lower the overall costs of monitoring programs.

  11. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.

  12. 30 CFR 75.513-1 - Electric conductor; size.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Electric conductor; size. 75.513-1 Section 75.513-1 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... Electric conductor; size. An electric conductor is not of sufficient size to have adequate carrying...

  13. 30 CFR 75.513-1 - Electric conductor; size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Electric conductor; size. 75.513-1 Section 75.513-1 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... Electric conductor; size. An electric conductor is not of sufficient size to have adequate carrying...

  14. 30 CFR 75.513-1 - Electric conductor; size.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Electric conductor; size. 75.513-1 Section 75... AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Electrical Equipment-General § 75.513-1 Electric conductor; size. An electric conductor is not of sufficient size to have adequate carrying...

  15. 30 CFR 75.513-1 - Electric conductor; size.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Electric conductor; size. 75.513-1 Section 75... AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Electrical Equipment-General § 75.513-1 Electric conductor; size. An electric conductor is not of sufficient size to have adequate carrying...

  16. 30 CFR 75.513-1 - Electric conductor; size.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Electric conductor; size. 75.513-1 Section 75... AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Electrical Equipment-General § 75.513-1 Electric conductor; size. An electric conductor is not of sufficient size to have adequate carrying...

  17. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.

    1977-01-01

    Necessary and sufficient conditions were developed for a surjective bounded linear operator T from a Banach space X to a Banach space Y to be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
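
    As a reminder of the classical fact invoked in the last sentence, the factorization argument for why the sample mean is sufficient when sampling independently from a normal population with known variance can be written as follows (a standard textbook result, restated here for reference):

    % Fisher--Neyman factorization for X_1,...,X_n iid N(theta, sigma^2), sigma^2 known
    \[
      \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}
        \exp\!\Big(-\frac{(x_i-\theta)^2}{2\sigma^2}\Big)
      = \underbrace{(2\pi\sigma^2)^{-n/2}
        \exp\!\Big(-\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{2\sigma^2}\Big)}_{h(x),\ \text{free of }\theta}
      \;
      \underbrace{\exp\!\Big(-\frac{n(\bar{x}-\theta)^2}{2\sigma^2}\Big)}_{g_\theta(\bar{x})},
    \]

    so the joint density factors through the sample mean alone and, by the Fisher–Neyman factorization theorem, \(\bar{x}\) is sufficient for \(\theta\).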

  18. Flex-rigid pleuroscopic biopsy with the SB knife Jr is a novel technique for diagnosis of malignant or benign fibrothorax.

    PubMed

    Wang, Xiao-Bo; Yin, Yan; Miao, Yuan; Eberhardt, Ralf; Hou, Gang; Herth, Felix J; Kang, Jian

    2016-11-01

    Diagnosing pleural effusion is challenging, especially in patients with malignant or benign fibrothorax, which is difficult to sample using standard flexible forceps (SFF) via flex-rigid pleuroscopy. An adequate sample is crucial for the differential diagnosis of malignant fibrothorax (malignant pleural mesothelioma, metastatic lung carcinoma, etc.) from benign fibrothorax (benign asbestos pleural disease, tuberculous pleuritis, etc.). Novel biopsy techniques are required in flex-rigid pleuroscopy to improve sample size and quality. The SB knife Jr, a scissor-type forceps that uses a monopolar high-frequency current, was developed to allow convenient and accurate resection of larger lesions during endoscopic submucosal dissection (ESD). Herein, we report two patients with fibrothorax who underwent a pleural biopsy using an SB knife Jr to investigate the potential use of this tool in flex-rigid pleuroscopy when pleural lesions are difficult to biopsy via SFF. The biopsies were successful, with sufficient size and quality for a definitive diagnosis. We also successfully performed adhesiolysis with the SB knife Jr in one case, and adequate biopsies were conducted. No complications were observed. Electrosurgical biopsy with the SB knife Jr during flex-rigid pleuroscopy allowed us to obtain adequate samples for the diagnosis of malignant versus benign fibrothorax, which is usually not possible with SFF. The SB knife Jr also demonstrated potential use for pleuropulmonary adhesions.

  19. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

    Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate reliably optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
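
    A minimal numpy sketch of the weighting idea discussed above: given a vector of expected component changes (delta) and the covariance of those changes (Sigma), the weights that maximize the composite's mean-to-SD (signal-to-noise) ratio are proportional to Sigma^{-1} delta, whereas the standard composite weights z-scored components equally. The delta and Sigma values below are invented, not estimates from the ADCS cohort.

    # Sketch: optimal vs equal weights for a 3-component composite.
    import numpy as np

    delta = np.array([0.30, 0.25, 0.15])           # expected component changes (assumed)
    Sigma = np.array([[1.00, 0.40, 0.30],
                      [0.40, 1.00, 0.35],
                      [0.30, 0.35, 1.00]])          # covariance of changes (assumed)

    def snr(w):
        """Mean-to-SD (signal-to-noise) ratio of the weighted composite."""
        return (w @ delta) / np.sqrt(w @ Sigma @ w)

    w_opt = np.linalg.solve(Sigma, delta)          # proportional to Sigma^-1 delta
    w_eq = np.ones(3)                              # standard equal-weight composite
    print("equal weights SNR  :", round(snr(w_eq), 3))
    print("optimal weights SNR:", round(snr(w_opt), 3))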

  20. Normative spleen size in tall healthy athletes: implications for safe return to contact sports after infectious mononucleosis.

    PubMed

    McCorkle, Ryan; Thomas, Brittany; Suffaletto, Heidi; Jehle, Dietrich

    2010-11-01

    To establish normative parameters of the spleen by ultrasonography in tall athletes. Prospective cohort observational study. University of Buffalo, Erie County Community College, University of Texas at Tyler, and Austin College. Sixty-six athletes enrolled and finished the study. Height requirements were at least 6 feet 2 inches for men and at least 5 feet 7 inches for women. Measurement of spleen size in tall athletes. Ultrasound measurements of spleen size in tall athletes were compared with "normal-sized" controls from the literature. Mean, SD, and variance determined the sample distribution, and a one-sample t test compared measurements in tall athletes with historical measurements in the average-height population. Statistical significance was defined as P < 0.05. Mean height was 192.26 cm (SD, ± 6.52) for men and 176.54 cm (SD, ± 5.19) for women. Mean splenic measurements for all subjects were 12.19 cm (SD, ± 1.45) for spleen length, 8.88 cm (SD, ± 0.96) for spleen width, and 5.55 cm (SD, ± 0.76) for spleen thickness. The study mean for spleen length was 12.192 cm (95% confidence interval, 11.835-12.549) and the population mean was 8.94 cm (two-tailed t test, P < 0.01). In this population of tall athletes, normal spleen size was significantly larger than the normal spleen size of an average individual. In the clinical arena, it can be difficult to know when tall athletes with splenomegaly from infectious mononucleosis can safely return to contact sports. Previously, there has not been a sufficient "norm" for this population, but this study helps to establish baseline values.
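
    A minimal sketch of the one-sample t test described above, comparing spleen lengths against the historical population mean of 8.94 cm; the sample values are simulated from the reported study mean and SD rather than the actual measurements.

    # Sketch: one-sample t test of spleen length against the historical
    # population mean (8.94 cm). Values are simulated, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    spleen_length = rng.normal(12.19, 1.45, size=66)   # cm, simulated

    t, p = stats.ttest_1samp(spleen_length, popmean=8.94)
    ci = stats.t.interval(0.95, df=len(spleen_length) - 1,
                          loc=spleen_length.mean(),
                          scale=stats.sem(spleen_length))
    print(f"mean = {spleen_length.mean():.2f} cm, t = {t:.1f}, p = {p:.2g}")
    print(f"95% CI: {ci[0]:.2f} to {ci[1]:.2f} cm")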

  1. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    NASA Technical Reports Server (NTRS)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the stratification pattern well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. Fewer sampling points are needed when stratified sampling is applied than with random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations, but differences between crop models must be considered, as the choice of a specific model can have a larger effect on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
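
    A minimal sketch of proportional-allocation stratified sampling of weather grid points, of the kind compared with random sampling above. The grid size matches the abstract (34,078 points), but the stratum labels and allocation rule are hypothetical assumptions, not the study's actual stratification.

    ```python
    import numpy as np
    import pandas as pd

    def stratified_sample(points, stratum_col, n_total, seed=0):
        """Proportional-allocation stratified sample of grid points."""
        parts = []
        for _, stratum in points.groupby(stratum_col):
            n = max(1, round(n_total * len(stratum) / len(points)))
            parts.append(stratum.sample(n=min(n, len(stratum)), random_state=seed))
        return pd.concat(parts)

    # Hypothetical grid of 34,078 weather points with a made-up zone label
    rng = np.random.default_rng(2)
    grid = pd.DataFrame({"point_id": np.arange(34078),
                         "zone": rng.integers(0, 5, size=34078)})

    for n_total in (10, 100, 1000):
        sample = stratified_sample(grid, "zone", n_total)
        print(n_total, sample["zone"].value_counts().to_dict())
    ```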

  2. Digital Archiving of People Flow by Recycling Large-Scale Social Survey Data of Developing Cities

    NASA Astrophysics Data System (ADS)

    Sekimoto, Y.; Watanabe, A.; Nakamura, T.; Horanont, T.

    2012-07-01

    Data on people flow have become increasingly important in business, including marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it remains a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical Person Trip Survey data collected by the public sector record only fragmentary spatio-temporal positions, such data are attractive because the sample size is sufficiently large to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) PT data pertaining to developing cities around the world, and we propose some correction methods to resolve the difficulties of applying it stably to many cities and to infrastructure data.

  3. How many stakes are required to measure the mass balance of a glacier?

    USGS Publications Warehouse

    Fountain, A.G.; Vecchia, A.

    1999-01-01

    Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible, and it provides an objective error estimate for the resulting value. Regression methods yielded mass balance values equivalent to contouring methods. The effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic and a piecewise linear spline, and comparison of the results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude for alpine glaciers compared with transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers, and five to ten stakes are sufficient.
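
    A minimal sketch of the regression approach described above: point mass balances at a few stakes are regressed on altitude (here with a quadratic), and the glacier-wide balance is taken as the area-weighted mean of the fitted curve over the glacier's hypsometry. All stake values, altitudes, and area fractions are hypothetical.

    ```python
    import numpy as np

    # Hypothetical stake data: altitude (m) and point mass balance (m w.e.)
    altitude = np.array([1650.0, 1750.0, 1850.0, 1950.0, 2050.0])
    balance = np.array([-2.1, -1.2, -0.3, 0.4, 0.9])

    # Quadratic regression of mass balance on altitude
    b_of_z = np.poly1d(np.polyfit(altitude, balance, deg=2))

    # Glacier-wide balance: area-weighted mean of the fitted curve over the
    # hypsometry (fractional area per 50 m altitude band, hypothetical values)
    band_altitude = np.arange(1600.0, 2101.0, 50.0)             # 11 band midpoints
    band_area = np.array([0.02, 0.05, 0.08, 0.12, 0.15,
                          0.18, 0.16, 0.12, 0.07, 0.04, 0.01])  # sums to 1
    glacier_balance = np.average(b_of_z(band_altitude), weights=band_area)
    print(f"Glacier-wide mass balance: {glacier_balance:.2f} m w.e.")
    ```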

  4. Freeway travel speed calculation model based on ETC transaction data.

    PubMed

    Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang

    2014-01-01

    Real-time traffic flow conditions on a freeway have gradually become critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different sample size levels. In order to ensure a sufficient sample size, ETC data from entry-exit toll plaza pairs spanning more than one road segment were used to calculate the travel speed of every road segment. The reduction coefficient α and reliability weight θ for sampled vehicle speeds were introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated that the average relative error was about 6.5%, indicating that the proposed model can estimate freeway travel speed accurately. The proposed model helps improve freeway operation monitoring and management, and provides useful information for freeway travelers.
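
    A minimal sketch of deriving a segment travel speed from entry-exit ETC transactions, assuming known plaza-pair distances. The reduction coefficient α and reliability weight θ of the paper are represented only by placeholder constants; their actual estimation is not shown.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        entry_plaza: str
        exit_plaza: str
        entry_time: float  # seconds since midnight
        exit_time: float

    PAIR_DISTANCE_KM = {("A", "C"): 42.0}  # hypothetical plaza-pair distance

    ALPHA = 0.95  # placeholder for the paper's reduction coefficient
    THETA = 0.80  # placeholder for the paper's reliability weight

    def pair_speed(transactions, pair):
        """Reliability-weighted mean travel speed (km/h) for one plaza pair."""
        weighted_sum, weight_total = 0.0, 0.0
        for t in transactions:
            if (t.entry_plaza, t.exit_plaza) != pair:
                continue
            hours = (t.exit_time - t.entry_time) / 3600.0
            if hours <= 0:
                continue
            weighted_sum += THETA * ALPHA * PAIR_DISTANCE_KM[pair] / hours
            weight_total += THETA
        return weighted_sum / weight_total if weight_total else None

    trx = [Transaction("A", "C", 3600, 5400), Transaction("A", "C", 3700, 5600)]
    print(pair_speed(trx, ("A", "C")))
    ```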

  5. Performance Analysis of Motion-Sensor Behavior for User Authentication on Smartphones

    PubMed Central

    Shen, Chao; Yu, Tianwen; Yuan, Sheng; Li, Yunpeng; Guan, Xiaohong

    2016-01-01

    The growing trend of using smartphones as personal computing platforms to access and store private information has stressed the demand for secure and usable authentication mechanisms. This paper investigates the feasibility and applicability of using motion-sensor behavior data for user authentication on smartphones. For each sample of the passcode, sensory data from motion sensors are analyzed to extract descriptive and intensive features for accurate and fine-grained characterization of users’ passcode-input actions. One-class learning methods are applied to the feature space for performing user authentication. Analyses are conducted using data from 48 participants with 129,621 passcode samples across various operational scenarios and different types of smartphones. Extensive experiments are included to examine the efficacy of the proposed approach, which achieves a false-rejection rate of 6.85% and a false-acceptance rate of 5.01%. Additional experiments on usability with respect to passcode length, sensitivity with respect to training sample size, scalability with respect to number of users, and flexibility with respect to screen size were provided to further explore the effectiveness and practicability. The results suggest that sensory data could provide useful authentication information, and this level of performance approaches sufficiency for two-factor authentication on smartphones. Our dataset is publicly available to facilitate future research. PMID:27005626
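
    A minimal sketch of one-class learning for this kind of authentication task, using a one-class SVM trained only on a genuine user's feature vectors and evaluated on held-out genuine and impostor samples. The features here are random placeholders; the paper's actual feature extraction and classifier choices may differ.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(3)

    # Placeholder feature vectors from one user's passcode-input sensor traces
    genuine_train = rng.normal(0.0, 1.0, size=(200, 24))
    genuine_test = rng.normal(0.0, 1.0, size=(50, 24))
    impostor_test = rng.normal(1.5, 1.0, size=(50, 24))  # different "behavior"

    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(genuine_train)

    frr = np.mean(clf.predict(genuine_test) == -1)  # false-rejection rate
    far = np.mean(clf.predict(impostor_test) == 1)  # false-acceptance rate
    print(f"FRR = {frr:.2%}, FAR = {far:.2%}")
    ```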

  6. Performance Analysis of Motion-Sensor Behavior for User Authentication on Smartphones.

    PubMed

    Shen, Chao; Yu, Tianwen; Yuan, Sheng; Li, Yunpeng; Guan, Xiaohong

    2016-03-09

    The growing trend of using smartphones as personal computing platforms to access and store private information has stressed the demand for secure and usable authentication mechanisms. This paper investigates the feasibility and applicability of using motion-sensor behavior data for user authentication on smartphones. For each sample of the passcode, sensory data from motion sensors are analyzed to extract descriptive and intensive features for accurate and fine-grained characterization of users' passcode-input actions. One-class learning methods are applied to the feature space for performing user authentication. Analyses are conducted using data from 48 participants with 129,621 passcode samples across various operational scenarios and different types of smartphones. Extensive experiments are included to examine the efficacy of the proposed approach, which achieves a false-rejection rate of 6.85% and a false-acceptance rate of 5.01%. Additional experiments on usability with respect to passcode length, sensitivity with respect to training sample size, scalability with respect to number of users, and flexibility with respect to screen size were provided to further explore the effectiveness and practicability. The results suggest that sensory data could provide useful authentication information, and this level of performance approaches sufficiency for two-factor authentication on smartphones. Our dataset is publicly available to facilitate future research.

  7. Evaluation of errors in quantitative determination of asbestos in rock

    NASA Astrophysics Data System (ADS)

    Baietto, Oliviero; Marini, Paola; Vitaliti, Martina

    2016-04-01

    The quantitative determination of the asbestos content in rock matrices is a complex operation which is susceptible to substantial errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including better representativeness of the analyzed sample, more effective recognition of chrysotile, and lower cost. The DIATI LAA internal methodology for PCOM analysis is based on mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm, and a subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of the liquids of known refractive index in which the analyzed particles are immersed. Error evaluation in the analysis of rock samples, unlike the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution, which theoretically defines the variation in the fiber count resulting from the observation of analysis fields chosen randomly on the filter, can be applied. The analysis of rock matrices instead cannot rely on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and fiber bundles observed, and the resulting ratio between the weight of the fibrous component and that of the granular component. The error estimates generally provided by public and private institutions vary between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. Error assessments must be made by repeating the same analysis on the same sample, in order to estimate the error related to the representativeness of the sample and the error related to the sensitivity of the operator, and thereby provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to define the error of the methodology more specifically.

  8. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A) subaqueous or hydrothermal sediments; (1B) hydrothermally altered rocks or low-temperature fluid-altered rocks (equal priority); (2) unaltered igneous rocks; (3) regolith, including airfall dust; and (4) present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere. Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.

  9. Results of a comprehensive atmospheric aerosol-radiation experiment in the southwestern United States. I - Size distribution, extinction optical depth and vertical profiles of aerosols suspended in the atmosphere. II - Radiation flux measurements and

    NASA Technical Reports Server (NTRS)

    Deluisi, J. J.; Furukawa, F. M.; Gillette, D. A.; Schuster, B. G.; Charlson, R. J.; Porch, W. M.; Fegley, R. W.; Herman, B. M.; Rabinoff, R. A.; Twitty, J. T.

    1976-01-01

    Results are reported for a field test that was aimed at acquiring a sufficient set of measurements of aerosol properties required as input for radiative-transfer calculations relevant to the earth's radiation balance. These measurements include aerosol extinction and size distributions, vertical profiles of aerosols, and radiation fluxes. Physically consistent, vertically inhomogeneous models of the aerosol characteristics of a turbid atmosphere over a desert and an agricultural region are constructed by using direct and indirect sampling techniques. These results are applied for a theoretical interpretation of airborne radiation-flux measurements. The absorption term of the complex refractive index of aerosols is estimated, a regional variation in the refractive index is noted, and the magnitude of solar-radiation absorption by aerosols and atmospheric molecules is determined.

  10. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.

  11. PKMζ is necessary and sufficient for synaptic clustering of PSD-95.

    PubMed

    Shao, Charles Y; Sondhi, Rachna; van de Nes, Paula S; Sacktor, Todd Charlton

    2012-07-01

    The persistent activity of protein kinase Mzeta (PKMζ), a brain-specific, constitutively active protein kinase C isoform, maintains synaptic long-term potentiation (LTP). Structural remodeling of the postsynaptic density is believed to contribute to the expression of LTP. We therefore examined the role of PKMζ in reconfiguring PSD-95, the major postsynaptic scaffolding protein at excitatory synapses. In primary cultures of hippocampal neurons, PKMζ activity was critical for increasing the size of PSD-95 clusters during chemical LTP (cLTP). Increasing PKMζ activity by overexpressing the kinase in hippocampal neurons was sufficient to increase PSD-95 cluster size, spine size, and postsynaptic AMPAR subunit GluA2. Overexpression of an inactive mutant of PKMζ did not increase PSD-95 clustering, and applications of the ζ-pseudosubstrate inhibitor ZIP reversed the PKMζ-mediated increases in PSD-95 clustering, indicating that the activity of PKMζ is necessary to induce and maintain the increased size of PSD-95 clusters. Thus the persistent activity of PKMζ is both necessary and sufficient for maintaining increases of PSD-95 clusters, providing a unified mechanism for long-term functional and structural modifications of synapses. Copyright © 2011 Wiley Periodicals, Inc.

  12. Precipitation Model Validation in 3rd Generation Aeroturbine Disc Alloys

    NASA Technical Reports Server (NTRS)

    Olson, G. B.; Jou, H.-J.; Jung, J.; Sebastian, J. T.; Misra, A.; Locci, I.; Hull, D.

    2008-01-01

    In support of application of the DARPA-AIM methodology to the accelerated hybrid thermal process optimization of 3rd generation aeroturbine disc alloys with quantified uncertainty, equilibrium and diffusion couple experiments have identified available fundamental thermodynamic and mobility databases of sufficient accuracy. Using coherent interfacial energies quantified by Single-Sensor DTA nucleation undercooling measurements, PrecipiCalc(TM) simulations of nonisothermal precipitation in both supersolvus and subsolvus treated samples show good agreement with measured gamma particle sizes and compositions. Observed longterm isothermal coarsening behavior defines requirements for further refinement of elastic misfit energy and treatment of the parallel evolution of incoherent precipitation at grain boundaries.

  13. Sol-gel derived sorbents

    DOEpatents

    Sigman, Michael E.; Dindal, Amy B.

    2003-11-11

    Described is a method for producing copolymerized sol-gel derived sorbent particles for the production of copolymerized sol-gel derived sorbent material. The method for producing copolymerized sol-gel derived sorbent particles comprises adding a basic solution to an aqueous metal alkoxide mixture to a pH ≤ 8 to hydrolyze the metal alkoxides. The mixture is then allowed to react at room temperature for a precalculated period of time, during which it undergoes an increase in viscosity, to obtain the desired pore size and surface area. The copolymerized mixture is then added to an immiscible, nonpolar solvent that has been heated to a sufficient temperature, whereupon the copolymerized mixture forms a solid upon the addition. The solid is recovered from the mixture, and is ready for use in an active sampling trap or activated for use in a passive sampling trap.

  14. A non-invasive technique to bleed incubating birds without trapping: A blood-sucking bug in a hollow egg

    USGS Publications Warehouse

    Becker, P.H.; Voigt, C.C.; Arnold, J.M.; Nagel, R.

    2006-01-01

    We describe a non-invasive technique to obtain blood samples from incubating birds without trapping and handling. A larval instar of the blood-sucking bug Dipetalogaster maximus (Heteroptera) was put in a hollowed artificial egg which was placed in a common tern (Sterna hirundo) nest. A gauze-covered hole in the egg allowed the bug to draw blood from the brood patch of breeding adults. We successfully collected 68 blood samples of sufficient amount (median = 187 µl). The daily success rate was highest during the early breeding season and averaged 34% for all trials. We could not detect any visible response by the incubating bird to the sting of the bug. This technique allows for non-invasive blood collection from bird species of various sizes without disturbance. © Dt. Ornithologen-Gesellschaft e.V. 2005.

  15. Discovery sequence and the nature of low permeability gas accumulations

    USGS Publications Warehouse

    Attanasi, E.D.

    2005-01-01

    There is an ongoing discussion regarding the geologic nature of accumulations that host gas in low-permeability sandstone environments. This note examines the discovery sequence of the accumulations in low permeability sandstone plays that were classified as continuous-type by the U.S. Geological Survey for the 1995 National Oil and Gas Assessment. It compares the statistical character of historical discovery sequences of accumulations associated with continuous-type sandstone gas plays to those of conventional plays. The seven sandstone plays with sufficient data exhibit declining size with sequence order, on average, and in three of the seven the trend is statistically significant. Simulation experiments show that both a skewed endowment size distribution and a discovery process that mimics sampling proportional to size are necessary to generate a discovery sequence that consistently produces a statistically significant negative size order relationship. The empirical findings suggest that discovery sequence could be used to constrain assessed gas in untested areas. The plays examined represent 134 of the 265 trillion cubic feet of recoverable gas assessed in undeveloped areas of continuous-type gas plays in low permeability sandstone environments reported in the 1995 National Assessment. © 2005 International Association for Mathematical Geology.
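
    A minimal sketch of the simulation idea described above: accumulations are drawn from a skewed (lognormal) endowment and "discovered" by successive sampling without replacement with probability proportional to size, which tends to produce declining size with discovery order. All distribution parameters are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical skewed endowment of accumulation sizes (lognormal)
    remaining = rng.lognormal(mean=2.0, sigma=1.2, size=200)

    discovered = []
    for _ in range(50):  # simulate the first 50 discoveries
        p = remaining / remaining.sum()        # probability proportional to size
        idx = rng.choice(remaining.size, p=p)  # draw without replacement
        discovered.append(remaining[idx])
        remaining = np.delete(remaining, idx)

    # On average, log(size) declines with discovery order
    slope = np.polyfit(np.arange(50), np.log(discovered), deg=1)[0]
    print(f"Trend of log-size with discovery order: {slope:.3f}")
    ```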

  16. Non-terminal blood sampling techniques in guinea pigs.

    PubMed

    Birck, Malene M; Tveden-Nyborg, Pernille; Lindblad, Maiken M; Lykkesfeldt, Jens

    2014-10-11

    Guinea pigs possess several biological similarities to humans and are validated experimental animal models(1-3). However, the use of guinea pigs currently represents a relatively narrow area of research, and descriptive data on specific methodology are correspondingly scarce. The anatomical features of guinea pigs are slightly different from those of other rodent models; hence, modifications of sampling techniques to accommodate species-specific differences, e.g., compared with mice and rats, are necessary to obtain sufficient, high-quality samples. As both long- and short-term in vivo studies often require repeated blood sampling, the choice of technique should be well considered in order to reduce stress and discomfort in the animals, but also to ensure survival as well as compliance with requirements for sample size and accessibility. Venous blood samples can be obtained at a number of sites in guinea pigs, e.g., the saphenous and jugular veins, each technique having both advantages and disadvantages(4,5). Here, we present four different blood sampling techniques for either conscious or anaesthetized guinea pigs. The procedures are all non-terminal procedures provided that sample volumes and number of samples do not exceed guidelines for blood collection in laboratory animals(6). All the described methods have been thoroughly tested and applied for repeated in vivo blood sampling in studies within our research facility.

  17. Pharmaceutical production of tableting granules in an ultra-small-scale high-shear granulator as a pre-formulation study.

    PubMed

    Ogawa, Tatsuya; Uchino, Tomohiro; Takahashi, Daisuke; Izumi, Tsuyoshi; Otsuka, Makoto

    2012-11-01

    In some drug development programs, the amount of bulk drug powder available in the early stages is limited, and it is not easy to supply a sufficient amount of drug for conventional preparation methods. Therefore, an ultra-small-scale high-shear granulator (less than 5 g) (USG) was developed and applied to small-scale granulation as a pre-formulation study. The sample powder consisted of 66.5% lactose, 28.5% microcrystalline cellulose and 5.0% hydroxypropylcellulose. Granules were obtained by agitating 5 g of the sample powder with 1.0 mL of water at 300 rpm for 5 min, after pre-mixing of the powder for 3 min, using the USG and the manual hand (HM) methods. The granules were evaluated by the 10% and 90% cumulative particle sizes and by the recoveries of the granules and the powder solids. Median particle size for the USG and the HM methods was 159.2 ± 2.3 and 270.9 ± 14.9 µm, respectively. The USG method gave a narrower particle size distribution than the HM method. The recovery of the granules by the USG method was significantly larger than that by the HM method. The characteristics of all of the granules indicated that the USG method could produce higher quality granules within a shorter time than the HM method.

  18. Grain-size analysis of volcanic ash for the rapid assessment of respiratory health hazard.

    PubMed

    Horwell, Claire J

    2007-10-01

    Volcanic ash has the potential to cause acute and chronic respiratory diseases if the particles are sufficiently fine to enter the respiratory system. Characterization of the grain-size distribution (GSD) of volcanic ash is, therefore, a critical first step in assessing its health hazard. Quantification of health-relevant size fractions is challenging without state-of-the-art technology, such as the laser diffractometer. Here, several methods for GSD characterization for health assessment are considered, the potential for low-cost measurements is investigated and the first database of health-pertinent GSD data is presented for a suite of ash samples from around the world. Methodologies for accurate measurement of the GSD of volcanic ash by laser diffraction are presented by experimental analysis of optimal refractive indices for different magmatic compositions. Techniques for representative sampling of small quantities of ash are also experimentally investigated. GSD results for health-pertinent fractions for a suite of 63 ash samples show that the fraction of respirable (<4 microm) material ranges from 0-17 vol%, with the variation reflecting factors such as the style of the eruption and the distance from the source. A strong correlation between the amount of <4 and <10 microm material is observed for all ash types. This relationship is stable at all distances from the volcano and with all eruption styles and can be applied to volcanic plume and ash fallout models. A weaker relationship between the <4 and <63 microm fractions provides a novel means of estimating the quantity of respirable material from data obtained by sieving.

  19. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary aim was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates from the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variation of the calculated diesel particulate densities with particle diameter supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.
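
    A minimal sketch of deriving an adjustment factor by linear regression of cassette-based concentrations on concurrent Grimm readings, as described above; the paired values are hypothetical and only illustrate the calibration step.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired shift averages (ug/m^3): Grimm reading vs the mean of
    # the co-located cassette samples collected by the MSHA-approved method
    grimm = np.array([105.0, 160.0, 220.0, 310.0, 420.0, 510.0])
    cassette = np.array([80.0, 120.0, 170.0, 240.0, 330.0, 400.0])

    fit = stats.linregress(grimm, cassette)
    print(f"adjusted = {fit.slope:.3f} * grimm + {fit.intercept:.1f} "
          f"(R^2 = {fit.rvalue**2:.2f})")

    # Applying the adjustment factor to a new real-time reading
    new_reading = 275.0
    print(f"cassette-equivalent estimate: "
          f"{fit.slope * new_reading + fit.intercept:.0f} ug/m^3")
    ```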

  20. Three Dimensional Imaging of Paraffin Embedded Human Lung Tissue Samples by Micro-Computed Tomography

    PubMed Central

    Scott, Anna E.; Vasilescu, Dragos M.; Seal, Katherine A. D.; Keyes, Samuel D.; Mavrogordato, Mark N.; Hogg, James C.; Sinclair, Ian; Warner, Jane A.; Hackett, Tillie-Louise; Lackie, Peter M.

    2015-01-01

    Background Understanding the three-dimensional (3-D) micro-architecture of lung tissue can provide insights into the pathology of lung disease. Micro computed tomography (µCT) has previously been used to elucidate lung 3D histology and morphometry in fixed samples that have been stained with contrast agents or air inflated and dried. However, non-destructive microstructural 3D imaging of formalin-fixed paraffin embedded (FFPE) tissues would facilitate retrospective analysis of extensive archives of FFPE lung samples with linked clinical data. Methods FFPE human lung tissue samples (n = 4) were scanned using a Nikon metrology µCT scanner. Semi-automatic techniques were used to segment the 3D structure of airways and blood vessels. Airspace size (mean linear intercept, Lm) was measured on µCT images and on matched histological sections from the same FFPE samples imaged by light microscopy to validate µCT imaging. Results The µCT imaging protocol provided contrast between tissue and paraffin in FFPE samples (15 mm × 7 mm). Resolution (voxel size 6.7 µm) in the reconstructed images was sufficient for semi-automatic image segmentation of airways and blood vessels as well as quantitative airspace analysis. The scans were also used to scout for regions of interest, enabling time-efficient preparation of conventional histological sections. The Lm measurements from µCT images were not significantly different from those obtained from matched histological sections. Conclusion We demonstrated how non-destructive imaging of routinely prepared FFPE samples by laboratory µCT can be used to visualize and assess the 3D morphology of the lung, including by morphometric analysis. PMID:26030902

  1. Measuring Submicron-Sized Fractionated Particulate Matter on Aluminum Impactor Disks

    PubMed Central

    Buchholz, Bruce A.; Zermeño, Paula; Hwang, Hyun-Min; Young, Thomas M.; Guilderson, Thomas P.

    2011-01-01

    Sub-micron sized airborne particulate matter (PM) is not collected well on regular quartz or glass fiber filter papers. We used a micro-orifice uniform deposit impactor (MOUDI) to fractionate PM into six size fractions and deposit it on specially designed high purity thin aluminum disks. The MOUDI separated PM into fractions 56–100 nm, 100–180 nm, 180–320 nm, 320–560 nm, 560–1000 nm, and 1000–1800 nm. Since the MOUDI has a low flow rate (30 L/min), it takes several days to collect sufficient carbon on 47 mm foil disks. The small carbon mass (20–200 microgram C) and large aluminum substrate (~25 mg Al) present several challenges to production of graphite targets for accelerator mass spectrometry (AMS) analysis. The Al foil consumes large amounts of oxygen as it is heated and tends to melt into quartz combustion tubes, causing gas leaks. We describe sample processing techniques to reliably produce graphitic targets for 14C-AMS analysis of PM deposited on Al impact foils. PMID:22228915

  2. Brain Tumor Epidemiology: Consensus from the Brain Tumor Epidemiology Consortium (BTEC)

    PubMed Central

    Bondy, Melissa L.; Scheurer, Michael E.; Malmer, Beatrice; Barnholtz-Sloan, Jill S.; Davis, Faith G.; Il’yasova, Dora; Kruchko, Carol; McCarthy, Bridget J.; Rajaraman, Preetha; Schwartzbaum, Judith A.; Sadetzki, Siegal; Schlehofer, Brigitte; Tihan, Tarik; Wiemels, Joseph L.; Wrensch, Margaret; Buffler, Patricia A.

    2010-01-01

    Epidemiologists in the Brain Tumor Epidemiology Consortium (BTEC) have prioritized areas for further research. Although many risk factors have been examined over the past several decades, there are few consistent findings, possibly due to small sample sizes in individual studies and differences between studies in subjects, tumor types, and methods of classification. Individual studies have generally lacked sufficient sample size to examine interactions. A major priority based on available evidence and technologies includes expanding research in genetics and molecular epidemiology of brain tumors. BTEC has taken an active role in promoting research on understudied areas such as pediatric brain tumors, the etiology of rare glioma subtypes such as oligodendroglioma, and meningioma, which, although not uncommon, has only recently been systematically registered in the US. There is also a pressing need to bring more researchers, especially junior investigators, into the study of brain tumor epidemiology. However, relatively poor funding for brain tumor research has made it difficult to encourage careers in this area. We review the group's consensus on the current state of scientific findings and present a consensus on research priorities to identify the important areas the science should move to address. PMID:18798534

  3. 29 CFR 1450.12 - Collection in installments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... arrangement and which contains a provision accelerating the debt in the event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...

  4. 22 CFR 512.12 - Collection in installments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... provision accelerating the debt in the event the debtor defaults. The size and frequency of the payments should bear a reasonable relation to the size of the debt and ability to the debtor to pay. If possible the installment payments should be sufficient in size and frequency to liquidate the Government's...

  5. 14 CFR 1261.411 - Collection in installments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and frequency to liquidate the Government's claim in not more than 3 years...

  6. 29 CFR 20.33 - Collection in installments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accelerating the debt in the event the debtor defaults. The size and frequency of installment payments should bear a reasonable relation to the size of the debt and the debtor's ability to pay. If possible, the installment payments should be sufficient in size and frequency to liquidate the Government's claim in not...

  7. 46 CFR 190.10-25 - Stairway size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Stairway size. 190.10-25 Section 190.10-25 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OCEANOGRAPHIC RESEARCH VESSELS CONSTRUCTION AND ARRANGEMENT Means of Escape § 190.10-25 Stairway size. (a) Stairways shall be of sufficient width having in...

  8. Geochemical stratigraphy of two regolith cores from the Central Highlands of the moon

    NASA Technical Reports Server (NTRS)

    Korotev, R. L.

    1991-01-01

    High-resolution concentration profiles are presented for 20-22 chemical elements in the under 1-mm grain-size fractions of 60001-7 and 60009/10. Emphasis is placed on the stratigraphic features of the cores, and the new results are compared with those of previous petrographic and geochemical studies. For elements associated with major mineral phases, the variations in concentration in both cores exceed those observed in some 40 samples of surface and trench soils. Most of the variation in lithophile element concentrations at depths of 18 to 21 cm results from the mixing of two components: soil that is relatively mafic and rich in incompatible trace elements (ITEs), and coarse-grained anorthosite. The linearity of mixing lines on two-element concentration plots argues that the relative abundances of these various subcomponents are sufficiently uniform from sample to sample and from region to region in the core that the mixture behaves effectively as a single component. Soils at depths of 52-55 cm exhibit very low concentrations of ITEs.

  9. Elemental analysis of printed circuit boards considering the ROHS regulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wienold, Julia, E-mail: julia.wienold@bam.de; Recknagel, Sebastian, E-mail: sebastian.recknagel@bam.de; Scharf, Holger, E-mail: holger.scharf@bam.de

    2011-03-15

    The EU RoHS Directive (2002/95/EC of the European Parliament and of the Council) bans the placing of new electrical and electronic equipment containing more than agreed levels of lead, cadmium, mercury, hexavalent chromium, polybrominated biphenyl (PBB) and polybrominated diphenyl ether (PBDE) flame retardants on the EU market. It necessitates methods for the evaluation of RoHS compliance of assembled electronic equipment. In this study mounted printed circuit boards from personal computers were analyzed on their content of the three elements Cd, Pb and Hg which were limited by the EU RoHS directive. Main focus of the investigations was the influence of sample pre-treatment on the precision and reproducibility of the results. The sample preparation steps used were based on the guidelines given in EN 62321. Five different types of dissolution procedures were tested on different subsequent steps of sample treatment like cutting and milling. Elemental analysis was carried out using ICP-OES, XRF and CV-AFS (Hg). The results obtained showed that for decision-making with respect to RoHS compliance a size reduction of the material to be analyzed to particles ≤1.5 mm can already be sufficient. However, to ensure analytical results with relative standard deviations of less than 20%, as recommended by the EN 62321, a much larger effort for sample processing towards smaller particle sizes might be required which strongly depends on the mass fraction of the element under investigation.

  10. Comparing the chlorine disinfection of detached biofilm clusters with those of sessile biofilms and planktonic cells in single- and dual-species cultures.

    PubMed

    Behnke, Sabrina; Parker, Albert E; Woodall, Dawn; Camper, Anne K

    2011-10-01

    Although the detachment of cells from biofilms is of fundamental importance to the dissemination of organisms in both public health and clinical settings, the disinfection efficacies of commonly used biocides on detached biofilm particles have not been investigated. Therefore, the question arises whether cells in detached aggregates can be killed with disinfectant concentrations sufficient to inactivate planktonic cells. Burkholderia cepacia and Pseudomonas aeruginosa were grown in standardized laboratory reactors as single species and in coculture. Cluster size distributions in chemostats and biofilm reactor effluent were measured. Chlorine susceptibility was assessed for planktonic cultures, attached biofilm, and particles and cells detached from the biofilm. Disinfection tolerance generally increased with a higher percentage of larger cell clusters in the chemostat and detached biofilm. Samples with a lower percentage of large clusters were more easily disinfected. Thus, disinfection tolerance depended on the cluster size distribution rather than sample type for chemostat and detached biofilm. Intact biofilms were more tolerant to chlorine independent of species. Homogenization of samples led to significantly increased susceptibility in all biofilm samples as well as detached clusters for single-species B. cepacia, B. cepacia in coculture, and P. aeruginosa in coculture. The disinfection efficacy was also dependent on species composition; coculture was advantageous to the survival of both species when grown as a biofilm or as clusters detached from biofilm but, surprisingly, resulted in a lower disinfection tolerance when they were grown as a mixed planktonic culture.

  11. Development of a model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States

    PubMed Central

    Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.

    2013-01-01

    Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885
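
    A minimal sketch of the distinction between frequency- and density-dependent transmission in one stochastic time step of a within-herd model like the one described above; the transmission parameter and step length are hypothetical, not the study's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def new_infections(S, I, N, beta, mode, dt=1 / 12):
        """Stochastic number of new infections in one time step of length dt (years)."""
        if mode == "frequency":
            force = beta * I / N   # per-susceptible hazard scales with prevalence
        else:                      # "density"
            force = beta * I       # per-susceptible hazard scales with infected count
        p_infect = 1.0 - np.exp(-force * dt)
        return rng.binomial(S, p_infect)

    # Hypothetical medium-sized herd (276 adults) after one latent introduction
    S, I, N = 275, 1, 276
    print(new_infections(S, I, N, beta=2.0, mode="frequency"))
    print(new_infections(S, I, N, beta=0.01, mode="density"))
    ```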

  12. Effectiveness of fishing gears to assess fish assemblage size structure in small lake ecosystems

    Treesearch

    T. A. Clement; K. Pangle; D. G. Uzarski; B. A. Murry

    2014-01-01

    Measurement of fish body-size distributions is increasingly used as a management tool to assess fishery status. However, the effects of gear selection on observed fish size structure have not received sufficient attention. Four different gear types (experimental gill nets, fine mesh bag seine, and two different sized mesh trap nets), which are commonly employed in the...

  13. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    PubMed

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

    The overall project objective at the Iron King Mine Superfund site is to determine the level of, and potential risk associated with, heavy metal exposure of the proximate population to material emanating from the site's tailings pile. To provide sufficient size-fractioned dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractioned dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments, as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing the tailings sample produced by the dust generator with that collected by the MOUDI sampler. As and Pb concentrations in the PM10 fraction of the MOUDI sample were much lower than in tailings samples produced by the dust generator, indicating a dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates our previous Pb and Sr isotope study conclusion. As and Pb showed a similar mass fraction from Iron King at all sites, suggesting that As and Pb have the same major emission source. Further validation of this simple source apportionment method is needed based on other elements and sites.
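
    A minimal sketch of the two-source, single-element apportionment described above, in which an ambient PM10 concentration is treated as a mixture of a background source and the tailings source; all concentrations are hypothetical.

    ```python
    def tailings_fraction(c_sample, c_background, c_tailings):
        """Mass fraction of PM10 attributable to the tailings, assuming a two-component
        mixture: c_sample = f * c_tailings + (1 - f) * c_background."""
        return (c_sample - c_background) / (c_tailings - c_background)

    # Hypothetical arsenic concentrations (ug/g) in the PM10 fraction
    c_ambient = 95.0      # MOUDI sample collected at a receptor site
    c_background = 10.0   # regional background dust
    c_tailings = 480.0    # dust-generator PM10 from the tailings pile

    f = tailings_fraction(c_ambient, c_background, c_tailings)
    print(f"Estimated tailings contribution to PM10 arsenic: {f:.1%}")
    ```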

  14. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data

    PubMed Central

    Chen, Yi-Hau

    2017-01-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data stays difficult because of limited sample size. This limitation also leads to the practice of using a competitive null as common approach; which fundamentally implies genes or proteins as independent units. The independent assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed by the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA. PMID:28622336

  15. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    PubMed

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data stays difficult because of limited sample size. This limitation also leads to the practice of using a competitive null as common approach; which fundamentally implies genes or proteins as independent units. The independent assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed by the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA.
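
    A minimal sketch of a Hotelling-type T2 test under a self-contained null using an externally supplied ("knowledge-based") covariance matrix, as described above. This is not the T2GA package's API; the stand-in covariance and data are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def knowledge_based_t2(log_ratios, sigma):
        """Hotelling-type T2 for a pathway under a self-contained null (mean log-ratio
        vector = 0), using an externally supplied p x p covariance matrix `sigma`,
        e.g. one built from interaction-confidence scores."""
        n, p = log_ratios.shape
        xbar = log_ratios.mean(axis=0)
        t2 = n * xbar @ np.linalg.solve(sigma, xbar)
        # With a known covariance, n * xbar' Sigma^-1 xbar follows chi-square with p df
        return t2, stats.chi2.sf(t2, df=p)

    # Hypothetical data: 4 replicates of log expression ratios for a 6-protein pathway
    rng = np.random.default_rng(6)
    ratios = rng.normal(0.3, 0.5, size=(4, 6))
    sigma = 0.25 * (0.6 * np.eye(6) + 0.4 * np.ones((6, 6)))  # stand-in covariance
    print(knowledge_based_t2(ratios, sigma))
    ```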

  16. Spatial Variation in Soil Properties among North American Ecosystems and Guidelines for Sampling Designs

    PubMed Central

    Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max

    2014-01-01

    Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
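
    A minimal sketch of the sample-size logic behind figures like "20 samples to measure Ts to within 10% of the mean with 90% confidence", using the normal-approximation formula n = (z·CV / relative error)²; the coefficients of variation used here are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def samples_needed(cv, rel_error=0.10, confidence=0.90):
        """Samples needed so the confidence-interval half-width is rel_error * mean,
        using the normal approximation n = (z * CV / rel_error)^2."""
        z = stats.norm.ppf(1.0 - (1.0 - confidence) / 2.0)
        return int(np.ceil((z * cv / rel_error) ** 2))

    # Hypothetical spatial coefficients of variation (sd / mean)
    print(samples_needed(cv=0.25))  # Ts-like variability  -> about 17 samples
    print(samples_needed(cv=0.80))  # SWC-like variability -> about 174 samples
    ```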

  17. A High-Precision Counter Using the DSP Technique

    DTIC Science & Technology

    2004-09-01

    DSP is not good enough to process all the 1-second samples. The cache memory is also not sufficient to store all the sampling data. So we cut the... sampling number in a cycle is not good enough to achieve an accuracy less than 2×10^-11. For this reason, a correlation operation is performed for... We will solve this

  18. The role of haptic versus visual volume cues in the size-weight illusion.

    PubMed

    Ellis, R R; Lederman, S J

    1993-03-01

    Three experiments establish the size-weight illusion as a primarily haptic phenomenon, despite its having been more traditionally considered an example of vision influencing haptic processing. Experiment 1 documents, across a broad range of stimulus weights and volumes, the existence of a purely haptic size-weight illusion, equal in strength to the traditional illusion. Experiment 2 demonstrates that haptic volume cues are both sufficient and necessary for a full-strength illusion. In contrast, visual volume cues are merely sufficient, and produce a relatively weaker effect. Experiment 3 establishes that congenitally blind subjects experience an effect as powerful as that of blindfolded sighted observers, thus demonstrating that visual imagery is also unnecessary for a robust size-weight illusion. The results are discussed in terms of their implications for both sensory and cognitive theories of the size-weight illusion. Applications of this work to a human factors design and to sensor-based systems for robotic manipulation are also briefly considered.

  19. Size effects in non-linear heat conduction with flux-limited behaviors

    NASA Astrophysics Data System (ADS)

    Li, Shu-Nan; Cao, Bing-Yang

    2017-11-01

    Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, a heat flux cannot exist in problems at sufficiently small scales. The existence of a heat flux requires the size of the heat-conduction domain to exceed a corresponding critical size, which is determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For heat conduction at sufficiently small scales, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violations and of multiple solutions. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.

  20. Process for preparing a stabilized coal-water slurry

    DOEpatents

    Givens, E.N.; Kang, D.

    1987-06-23

    A process is described for preparing a stabilized coal particle suspension which includes the steps of providing an aqueous media substantially free of coal oxidizing constituents, reducing, in a nonoxidizing atmosphere, the particle size of the coal to be suspended to a size sufficiently small to permit suspension thereof in the aqueous media and admixing the coal of reduced particle size with the aqueous media to release into the aqueous media coal stabilizing constituents indigenous to and carried by the reduced coal particles in order to form a stabilized coal particle suspension. The coal stabilizing constituents are effective in a nonoxidizing atmosphere to maintain the coal particle suspension at essentially a neutral or alkaline pH. The coal is ground in a nonoxidizing atmosphere such as an inert gaseous atmosphere to reduce the coal to a sufficient particle size and is admixed with an aqueous media that has been purged of oxygen and acid-forming gases. 2 figs.

  1. Process for preparing a stabilized coal-water slurry

    DOEpatents

    Givens, Edwin N.; Kang, Doohee

    1987-01-01

    A process for preparing a stabilized coal particle suspension which includes the steps of providing an aqueous media substantially free of coal oxidizing constituents, reducing, in a nonoxidizing atmosphere, the particle size of the coal to be suspended to a size sufficiently small to permit suspension thereof in the aqueous media and admixing the coal of reduced particle size with the aqueous media to release into the aqueous media coal stabilizing constituents indigenous to and carried by the reduced coal particles in order to form a stabilized coal particle suspension. The coal stabilizing constituents are effective in a nonoxidizing atmosphere to maintain the coal particle suspension at essentially a neutral or alkaline pH. The coal is ground in a nonoxidizing atmosphere such as an inert gaseous atmosphere to reduce the coal to a sufficient particle size and is admixed with an aqueous media that has been purged of oxygen and acid-forming gases.

  2. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    Summary We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
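
    The operating characteristics of the combined rule (reject only when E ≥ m and E − C > r) can be checked by exact binomial enumeration. The sketch below is a single-stage simplification with hypothetical design constants, not one of the two-stage designs tabulated in the paper:

    ```python
    from math import comb

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def reject_prob(p_exp, p_ctl, n_exp, n_ctl, m, r):
        """P(E >= m and E - C > r) with E ~ Bin(n_exp, p_exp), C ~ Bin(n_ctl, p_ctl)."""
        return sum(
            binom_pmf(e, n_exp, p_exp) * binom_pmf(c, n_ctl, p_ctl)
            for e in range(m, n_exp + 1)
            for c in range(0, n_ctl + 1)
            if e - c > r
        )

    # Hypothetical design: 2:1 randomization, 40 experimental vs 20 control patients
    p0, p1, m, r = 0.20, 0.40, 12, 4
    print(f"type I error: {reject_prob(p0, p0, 40, 20, m, r):.3f}")
    print(f"power:        {reject_prob(p1, p0, 40, 20, m, r):.3f}")
    ```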

  3. Barostat testing of rectal sensation and compliance in humans: comparison of results across two centres and overall reproducibility.

    PubMed

    Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J

    2005-12-01

    We assessed reproducibility of measurements of rectal compliance and sensation in health in studies conducted at two centres. We estimated the sample size necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using the ascending method of limits, and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) in random order. Results obtained at the two centres differed minimally. Reproducibility of sensory end points varies with type of sensation, pressure level and method of distension. Pressure threshold for pain and sensory ratings for non-painful sensations at 36 and 48 mmHg distension were most reproducible in the two centres. Sample size calculations suggested that a crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but in a two-centre study, differences in observed results of sensation are minimal and pressure threshold for pain and sensory ratings at 36-48 mmHg of distension are reproducible.
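
    A sample of about 21 per dose is what a standard paired (crossover) sample-size formula yields for plausible inputs; a sketch under assumed values for the detectable change and within-subject standard deviation (not the study's actual estimates):

    ```python
    from math import ceil
    from statistics import NormalDist

    def crossover_n(delta, sd_within, alpha=0.05, power=0.80):
        """Subjects needed in a paired/crossover comparison to detect a mean change
        delta, given the within-subject SD of the difference (normal approximation)."""
        z = NormalDist().inv_cdf
        z_a, z_b = z(1 - alpha / 2), z(power)
        return ceil(((z_a + z_b) * sd_within / delta) ** 2)

    # Assumed values for illustration: detect a 30% change (e.g. 9 mmHg on a
    # 30 mmHg threshold) with a within-subject SD of 14 mmHg
    print(crossover_n(delta=9.0, sd_within=14.0))
    ```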

  4. A simple and compact mechanical velocity selector of use to analyze/select molecular alignment in supersonic seeded beams

    NASA Astrophysics Data System (ADS)

    Pirani, F.; Cappelletti, D.; Vecchiocattivi, F.; Vattuone, L.; Gerbi, A.; Rocca, M.; Valbusa, U.

    2004-02-01

    A light and compact mechanical velocity selector, of novel design, for applications in supersonic molecular-beam studies has been developed. It represents a simplified version of the traditional, 50 year old, slotted disks velocity selector. Taking advantage of new materials and improved machining techniques, the new version has been realized with only two rotating slotted disks, driven by an electrical motor with adjustable frequency of rotation, and thus has a much smaller weight and size with respect to the original design, which may allow easier implementation in most of the available molecular-beam apparatuses. This new type of selector, which maintains a sufficiently high velocity resolution, has been developed for sampling molecules with different degrees of rotational alignment, like those emerging from a seeded supersonic expansion. This sampling is the crucial step to realize new molecular-beam experiments to study the effect of molecular alignment in collisional processes.

  5. Confocal multispot microscope for fast and deep imaging in semicleared tissues

    NASA Astrophysics Data System (ADS)

    Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio

    2018-02-01

    Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, which is more robust to scattering, albeit at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope geared toward the imaging of semicleared tissue: it combines multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.

  6. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which correctly extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Although CNNs are robust to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one pooling-window size. Hence, spatial pyramid pooling (SPP) is introduced into three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
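
    For illustration, a minimal 2D spatial pyramid pooling layer in PyTorch; the paper applies SPP to three-dimensional spectral-spatial features, so the dimensionality, pool sizes, and tensor shapes here are assumptions:

    ```python
    import torch
    import torch.nn as nn

    class SpatialPyramidPooling(nn.Module):
        """Pool a feature map at several grid sizes and concatenate the results,
        giving a fixed-length descriptor regardless of input spatial size."""
        def __init__(self, levels=(1, 2, 4)):
            super().__init__()
            self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(s) for s in levels])

        def forward(self, x):                     # x: (batch, channels, H, W)
            feats = [pool(x).flatten(start_dim=1) for pool in self.pools]
            return torch.cat(feats, dim=1)        # (batch, channels * sum(s * s))

    spp = SpatialPyramidPooling()
    print(spp(torch.randn(8, 64, 11, 11)).shape)  # -> torch.Size([8, 1344])
    ```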

  7. Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models

    PubMed Central

    Maji, Suvrajit; Bruchez, Marcel P.

    2012-01-01

    Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough Transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
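
    A toy version of the underlying idea, assuming sparse localizations scattered around a circular structure (e.g., a vesicle): every localization votes for candidate centers at a trial radius, and the accumulator peak recovers the structure from only a fraction of the points. The grid size, radius, and noise level are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate sparse localizations on a circle of radius 12 centered at (40, 60)
    theta = rng.uniform(0, 2 * np.pi, size=40)          # only a few dozen localizations
    pts = np.c_[40 + 12 * np.cos(theta), 60 + 12 * np.sin(theta)]
    pts += rng.normal(0, 0.5, pts.shape)                # localization noise

    def hough_circle_center(points, radius, grid=100):
        """Accumulate votes for circle centers at a fixed trial radius."""
        acc = np.zeros((grid, grid))
        angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
        for x, y in points:
            cx = np.round(x - radius * np.cos(angles)).astype(int)
            cy = np.round(y - radius * np.sin(angles)).astype(int)
            ok = (cx >= 0) & (cx < grid) & (cy >= 0) & (cy < grid)
            np.add.at(acc, (cx[ok], cy[ok]), 1)
        return np.unravel_index(acc.argmax(), acc.shape)

    print(hough_circle_center(pts, radius=12))           # ~(40, 60) from sparse data
    ```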

  8. Intercomparison of fog water samplers

    NASA Astrophysics Data System (ADS)

    Schell, Dieter; Georgii, Hans-Walter; Maser, Rolf; Jaeschke, Wolfgang; Arends, Beate G.; Kos, Gerard P. A.; Winkler, Peter; Schneider, Thomas; Berner, Axel; Kruisz, Christian

    1992-11-01

    During the Po Valley Fog Experiment 1989, two fogwater collectors were operated simultaneously at the ground and their results were compared. The chemical analyses of the samples as well as the collection efficiencies showed remarkable differences between the two collectors. Some differences in the solute concentrations in the samples of the two collectors could be expected due to small differences in the 50-percent cut-off diameters. The large differences in the collection efficiencies, however, cannot be explained by these small variations of d₅₀, because normally only a small fraction of the water mass is concentrated in the size range of 5-7-micron droplets. It is shown that it is not sufficient to characterize a fogwater collector only by its cut-off diameter. The results of several wind tunnel calibration tests show that the collection efficiencies of the fogwater collectors are a function of windspeed and of the shape of the droplet spectra.

  9. 40 CFR 13.18 - Installment payments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accelerating the debt in the event of default. The size and frequency of installment payments will bear a reasonable relation to the size of the debt and the debtor's ability to pay. The installment payments will be sufficient in size and frequency to liquidate the debt in not more than 3 years, unless the Administrator...

  10. 22 CFR 213.19 - Installment payments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... a provision accelerating the debt in the event of default. The size and frequency of installment payments will bear a reasonable relation to the size of the debt and the debtor's ability to pay. The installment payments will be sufficient in size and frequency to liquidate the debt in not more than 3 years...

  11. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. To this end, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of the input sequences. One important aspect of reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
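
    The practical effect of placing planes so that consecutive hypotheses are equidistant in image space is sampling that is uniform in inverse depth rather than in depth; a minimal sketch with assumed near/far limits:

    ```python
    import numpy as np

    def depth_planes(z_near, z_far, n):
        """Plane depths for a plane-sweep: linear in scene space vs. linear in
        inverse depth (the latter corresponds to a constant pixel displacement
        between consecutive planes under perspective projection)."""
        linear = np.linspace(z_near, z_far, n)
        inverse = 1.0 / np.linspace(1.0 / z_near, 1.0 / z_far, n)
        return linear, inverse

    lin, inv = depth_planes(z_near=2.0, z_far=50.0, n=8)
    print(np.round(lin, 1))   # evenly spaced depths: coarse near the camera
    print(np.round(inv, 1))   # dense near the camera, sparse far away
    ```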

  12. Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.

    PubMed

    Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing

    2017-01-01

    Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at the full scale for further study on its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads into accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
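
    A sketch of the general approach using networkx: generate candidate Watts-Strogatz graphs and keep the rewiring probability whose clustering coefficient and average path length best match values estimated from the sampled network. The target values, network size, and mean degree below are placeholders, not the authors' estimator:

    ```python
    import networkx as nx

    # Placeholder targets, e.g. estimated from a sampled small-world network
    target_clustering, target_path_len = 0.45, 4.0
    n, k = 1000, 10                                   # assumed size and mean degree

    best = None
    for p in (0.01, 0.02, 0.05, 0.1, 0.2, 0.5):       # candidate rewiring probabilities
        g = nx.connected_watts_strogatz_graph(n, k, p, tries=100, seed=1)
        cc = nx.average_clustering(g)
        pl = nx.average_shortest_path_length(g)
        score = abs(cc - target_clustering) + abs(pl - target_path_len)
        if best is None or score < best[0]:
            best = (score, p, cc, pl)

    print("best rewiring probability:", best[1], "clustering:", round(best[2], 3))
    ```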

  13. Vocational students' learning preferences: the interpretability of ipsative data.

    PubMed

    Smith, P J

    2000-02-01

    A number of researchers have argued that ipsative data are not suitable for statistical procedures designed for normative data. Others have argued that the interpretability of such analyses of ipsative data is little affected where the number of variables and the sample size are sufficiently large. The research reported here represents a factor analysis of the scores on the Canfield Learning Styles Inventory for 1,252 students in vocational education. The results of the factor analysis of these ipsative data were examined in the context of existing theory and research on vocational students and lend support to the argument that the factor analysis of ipsative data can provide sensibly interpretable results.

  14. Photoinduced nucleation: a novel tool for detecting molecules in air at ultra-low concentrations

    DOEpatents

    Katz, Joseph L.; Lihavainen, Heikki; Rudek, Markus M.; Salter, Brian C.

    2002-01-01

    A method and apparatus for determining the presence of molecules in a gas at concentrations of less than about 100 ppb. Light having wavelengths in the range from about 200 nm to about 350 nm is used to illuminate a flowing sample of the gas, causing the molecules, if present, to form clusters. A mixture of the illuminated gas and a vapor is cooled until the vapor is supersaturated so that there is a small rate of homogeneous nucleation. The supersaturated vapor condenses on the clusters, causing them to grow to a size sufficient to be counted by light scattering, and the clusters are then counted.

  15. Is There a Maximum Size of Water Drops in Nature?

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2013-01-01

    In nature, water drops can have a large variety of sizes and shapes. Small droplets with diameters of the order of 5 to 10 µm are present in fog and clouds. This is not sufficiently large for gravity to dominate their behavior. In contrast, raindrops typically have sizes of the order of 1 mm, with observed maximum sizes in nature of around 5 mm in…

  16. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.

  17. Chemical quality and regulatory compliance of drinking water in Iceland.

    PubMed

    Gunnarsdottir, Maria J; Gardarsson, Sigurdur M; Jonsson, Gunnar St; Bartram, Jamie

    2016-11-01

    Assuring sufficient quality of drinking water is of great importance for public wellbeing and prosperity. Nations have developed regulatory systems with the aim of providing drinking water of sufficient quality and of minimizing the risk of contamination of the water supply in the first place. In this study the chemical quality of Icelandic drinking water was evaluated by systematically analyzing results from audit monitoring, in which 53 parameters were assessed for 345 samples from 79 aquifers serving 74 water supply systems. Compliance with the Icelandic Drinking Water Regulation (IDWR) was evaluated with regard to parametric values, minimum sampling requirements, and limits of detection. Water quality compliance was divided into health-related chemicals and indicator parameters, and analyzed according to size. Samples from a few individual locations were benchmarked against natural background levels (NBLs) in order to identify potential pollution sources. The results show that compliance was 99.97% for health-related chemicals and 99.44% for indicator parameters, indicating that Icelandic groundwater abstracted for drinking water supply is generally of high quality with no expected health risks. In 10 of the 74 water supply systems tested, there was an indication of anthropogenic chemical pollution, either at the source or in the network, and in another 6 water supplies there was a need to improve the water intake to prevent surface water intrusion. Benchmarking against the NBLs proved to be useful in tracing potential pollution sources, providing a useful tool for identifying pollution at an early stage. Copyright © 2016 Elsevier GmbH. All rights reserved.

  18. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb

    2018-06-01

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
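
    A sketch of the spline-based variant of the 'Durations design': simulate arm-level cure proportions for ~500 patients in five equidistant arms and fit a smooth duration-response curve over the whole range. The simulated truth and smoothing parameter are assumptions, not trial data:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)

    durations = np.array([8.0, 10.0, 12.0, 14.0, 16.0])   # weeks, 5 equidistant arms
    true_cure = np.array([0.70, 0.82, 0.90, 0.93, 0.94])   # assumed duration-response
    n_per_arm = 100                                        # ~500 patients in total

    observed = rng.binomial(n_per_arm, true_cure) / n_per_arm
    curve = UnivariateSpline(durations, observed, k=2, s=0.002)

    grid = np.linspace(8, 16, 9)
    print(np.round(curve(grid), 3))   # estimated cure probability over the whole range
    ```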

  19. Trophic interactions of common elasmobranchs in deep-sea communities of the Gulf of Mexico revealed through stable isotope and stomach content analysis

    NASA Astrophysics Data System (ADS)

    Churchill, Diana A.; Heithaus, Michael R.; Vaudo, Jeremy J.; Grubbs, R. Dean; Gastrich, Kirk; Castro, José I.

    2015-05-01

    Deep-water sharks are abundant and widely distributed in the northern and eastern Gulf of Mexico. As mid- and upper-level consumers that can range widely, sharks likely are important components of deep-sea communities and their trophic interactions may serve as system-wide baselines that could be used to monitor the overall health of these communities. We investigated the trophic interactions of deep-sea sharks using a combination of stable isotope (δ13C and δ15N) and stomach content analyses. Two hundred thirty-two muscle samples were collected from elasmobranchs captured off the bottom at depths between 200 and 1100 m along the northern slope (NGS) and the west Florida slope (WFS) of the Gulf of Mexico during 2011 and 2012. Although we detected some spatial, temporal, and interspecific variation in apparent trophic positions based on stable isotopes, there was considerable isotopic overlap among species, between locations, and through time. Overall δ15N values in the NGS region were higher than in the WFS. The δ15N values also increased between April 2011 and 2012 in the NGS, but not the WFS, within Squalus cf. mitsukurii. We found that stable isotope values of S. cf. mitsukurii, the most commonly captured elasmobranch, varied between sample regions, through time, and also with sex and size. Stomach content analysis (n=105) suggested relatively similar diets at the level of broad taxonomic categories of prey among the taxa with sufficient sample sizes. We did not detect a relationship between body size and relative trophic levels inferred from δ15N, but patterns within several species suggest increasing trophic levels with increasing size. Both δ13C and δ15N values suggest a substantial degree of overlap among most deep-water shark species. This study provides the first characterization of the trophic interactions of deep-sea sharks in the Gulf of Mexico and establishes system baselines for future investigations.

  20. SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Penagaricano, J; Narayanasamy, G

    2016-06-15

    Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distribution to search for new models predicting radiation tissue damage and further leading to new optimization paradigm for radiotherapy planning. Methods: The aggregation of higher dose in the organ at risk (OAR) than a preset threshold was chosen as the cluster whose connectivity dictates the cluster structure. Upon the selection of the dose threshold, the fractional density defined as the fraction of voxels in the organ eligible to be part of the cluster was determined according to the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with the sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster searching algorithm was developed to analyze the cluster with various connectivity choices like 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: Intensive Monte Carlo study demonstrates the inverse relationship between the MSLC and the cluster connectivity as anticipated and the cluster size does not change with fractional density linearly regardless of the connectivity types. An initially-slow-increase to exponential growth transition of the MSLC from low to high density was observed. The cluster sizes were found to vary within a large range and are relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of the tissue damage (percolation cluster) and the clinical outcome of the same DVH might be potentially different.
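
    A compact version of the Monte Carlo, assuming a cubic OAR grid and face connectivity (the abstract's 1-connectivity); scipy.ndimage.label plays the role of the recursive cluster-searching algorithm, and the grid size and densities are illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def mean_largest_cluster(density, shape=(20, 20, 20), n_samples=200, seed=0):
        """Mean size of the largest face-connected cluster of 'hot' voxels when each
        voxel is independently above threshold with probability `density`."""
        rng = np.random.default_rng(seed)
        structure = ndimage.generate_binary_structure(3, 1)   # 6-neighbour connectivity
        sizes = []
        for _ in range(n_samples):
            hot = rng.random(shape) < density
            labels, n = ndimage.label(hot, structure=structure)
            sizes.append(np.bincount(labels.ravel())[1:].max() if n else 0)
        return np.mean(sizes)

    for rho in (0.1, 0.2, 0.3, 0.4):     # fractional densities from a hypothetical DVH
        print(rho, mean_largest_cluster(rho))
    ```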

  1. Influence of size and shape of sub-micrometer light scattering centers in ZnO-assisted TiO2 photoanode for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Pham, Trang T. T.; Mathews, Nripan; Lam, Yeng-Ming; Mhaisalkar, Subodh

    2018-03-01

    Sub-micrometer cavities have been incorporated in the TiO2 photoanode of dye-sensitized solar cells to enhance its optical properties through light scattering. These are large pores of several hundred nanometers in size that scatter incident light due to the difference in refractive index between the scattering center and the surrounding material, in accordance with Mie theory. The pores are created using polystyrene (PS) or zinc oxide (ZnO) templates reported previously, which resulted in ellipsoidal and spherical shapes, respectively. The effect of the size and shape of the scattering center was modeled using a numerical finite-difference time-domain (FDTD) analysis. The scattering cross-section was not affected significantly by different shapes if the total displacement volume of the scattering center was comparable. Experiments were carried out to evaluate the optical properties with varying sizes of ZnO templates. The photovoltaic performance of dye-sensitized solar cells made from these ZnO-assisted films was investigated with incident-photon-to-current efficiency measurements to understand the effect of scattering center size on the enhancement of absorption. With 380 nm macropores incorporated, the power conversion efficiency increased by 11%, mostly thanks to the improved current density, while the 170 nm and 500 nm macropore samples did not show enhancement over a sufficiently wide range of absorbing wavelengths.

  2. Size segregation of component coals during pulverization of high volatile/low volatile blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.; Orban, P.C.

    1995-12-31

    Samples of single high volatile (hvb) and low volatile (lvb) coals and binary blends in proportions ranging from 75%hvb/25%lvb to 25%hvb/75%lvb were pulverized in a Raymond 271 bowl mill and then screened into different size fractions. The ranks of two of the feed coals were sufficiently different that individual particles could be distinguished microscopically. This enabled the proportions of each feed coal in the various blend size fractions to be determined. The difference in rank and therefore grindability of the components (Hardgrove indices of 99 versus 50) was such that significant segregation resulted. For example, the 25%hvb/75%lvb blend, upon grinding, produced a +50 mesh (300 µm) fraction with 30% lvb coal, and a −325 mesh (45 µm) fraction with 84% lvb coal. The effect of this segregation according to size was a notable progressive decrease in volatility towards the finer fractions, consistent with an increase in the proportion of lvb particles; differences in volatile matter (d.b.) between coarsest and finest fractions of up to 6.9% were encountered. Although most of the segregation is attributable to rank difference between the component coals, part appears to be due to the lower grindability of liptinite-rich lithotypes in the hvb coal.

  3. Determination of permeability of ultra-fine cupric oxide aerosol through military filters and protective filters

    NASA Astrophysics Data System (ADS)

    Kellnerová, E.; Večeřa, Z.; Kellner, J.; Zeman, T.; Navrátil, J.

    2018-03-01

    The paper evaluates the filtration and sorption efficiency of selected types of military combined filters and protective filters. The testing was carried out using an ultra-fine aerosol containing cupric oxide nanoparticles ranging in size from 7.6 nm to 299.6 nm. The nanoparticle measurements were carried out using a scanning mobility particle sizer before and after passage through the filter, together with a purpose-developed sampling device, at a particle number concentration of approximately 750,000 particles·cm⁻³. The basic parameters of the permeability of the ultra-fine aerosol passing through the tested material were evaluated, in particular particle size, efficiency of nanoparticle capture by the filter, permeability coefficient and overall filtration efficiency. Results indicate that the military filter and particle filters exhibited the highest aerosol permeability in the nanoparticle size range between 100 and 200 nm, while the MOF filters had the highest permeability in the range of 200 to 300 nm. The Filter Nuclear and the Health and Safety filter had 100% nanoparticle capture efficiency and were therefore the most effective. The obtained measurement results have shown that the filtration efficiency over the entire measured range of nanoparticles was sufficient; however, it differed for particular particle sizes.

  4. A meta-analysis of the published literature on the effectiveness of antimicrobial soaps.

    PubMed

    Montville, Rebecca; Schaffner, Donald W

    2011-11-01

    The goal of this research was to conduct a systematic quantitative analysis of the existing data in the literature in order to determine if there is a difference between antimicrobial and nonantimicrobial soaps and to identify the methodological factors that might affect this difference. Data on hand washing efficacy and experimental conditions (sample size, wash duration, soap quantity, challenge organism, inoculum size, and neutralization method) from published studies were compiled and transferred to a relational database. A total of 25 publications, containing 374 observations, met the study selection criteria. The majority of the studies included fewer than 15 observations with each treatment and included a direct comparison between nonantimicrobial soap and antimicrobial soap. Although differences in efficacy between antimicrobial and nonantimicrobial soap were small (∼0.5-log CFU reduction difference), antimicrobial soap produced consistently statistically significantly greater reductions. This difference was true for any of the antimicrobial compounds investigated where n was >20 (chlorhexidine gluconate, iodophor, triclosan, or povidone). Average log reductions were statistically significantly greater (∼2 log CFU) when either gram-positive or gram-negative transient organisms were deliberately added to hands compared with experiments done with resident hand flora (∼0.5 log CFU). Our findings support the importance of using a high initial inoculum on the hands, well above the detection limit. The inherent variability in hand washing seen in the published literature underscores the importance of using a sufficiently large sample size to detect differences when they occur.
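
    The pooled difference in log reductions behind such a comparison is typically an inverse-variance weighted mean across studies; a fixed-effect sketch with made-up study inputs:

    ```python
    import numpy as np

    # Hypothetical per-study differences in log10 CFU reduction
    # (antimicrobial minus plain soap) and their standard errors
    diff = np.array([0.6, 0.4, 0.5, 0.7, 0.3])
    se = np.array([0.20, 0.15, 0.25, 0.30, 0.10])

    w = 1.0 / se**2                                  # fixed-effect inverse-variance weights
    pooled = np.sum(w * diff) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))

    print(f"pooled difference: {pooled:.2f} ± {1.96 * pooled_se:.2f} log CFU")
    ```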

  5. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery.

    PubMed

    Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M

    2018-01-01

    Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and offer low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards the goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with average accuracies of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of those in similar maps derived from multibeam sonar.
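
    A condensed illustration of the pipeline: compute a second-order texture statistic (here, co-occurrence contrast) for each echogram patch and cluster the patches with a Gaussian Mixture Model. The patch size, grey-level quantization, and three classes are assumptions, not the calibrated values from the paper:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def glcm_contrast(patch, levels=16):
        """Second-order texture statistic: contrast of the horizontal co-occurrence matrix."""
        q = np.floor(patch * levels).clip(0, levels - 1).astype(int)
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
        glcm /= glcm.sum()
        i, j = np.indices(glcm.shape)
        return np.sum(glcm * (i - j) ** 2)

    rng = np.random.default_rng(2)
    echogram = rng.random((128, 512))                  # stand-in for a sonar echogram
    patches = [echogram[r:r + 32, c:c + 32]
               for r in range(0, 128, 32) for c in range(0, 512, 32)]
    features = np.array([[glcm_contrast(p)] for p in patches])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
    print(gmm.predict(features)[:10])                  # texture class per patch
    ```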

  6. Design of a Phase III cluster randomized trial to assess the efficacy and safety of a malaria transmission blocking vaccine.

    PubMed

    Delrieu, Isabelle; Leboulleux, Didier; Ivinson, Karen; Gessner, Bradford D

    2015-03-24

    Vaccines interrupting Plasmodium falciparum malaria transmission targeting sexual, sporogonic, or mosquito-stage antigens (SSM-VIMT) are currently under development to reduce malaria transmission. An international group of malaria experts was established to evaluate the feasibility and optimal design of a Phase III cluster randomized trial (CRT) that could support regulatory review and approval of an SSM-VIMT. The consensus design is a CRT with a sentinel population randomly selected from defined inner and buffer zones in each cluster, a cluster size sufficient to assess true vaccine efficacy in the inner zone, and inclusion of ongoing assessment of vaccine impact stratified by distance of residence from the cluster edge. Trials should be conducted first in areas of moderate transmission, where SSM-VIMT impact should be greatest. Sample size estimates suggest that such a trial is feasible, and within the range of previously supported trials of malaria interventions, although substantial issues to implementation exist. Copyright © 2015 Elsevier Ltd. All rights reserved.
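
    Cluster-randomized sample sizes are commonly obtained by inflating the individually randomized sample size by a design effect of 1 + (m − 1) × ICC; a sketch with assumed incidences, inner-zone size, and intracluster correlation (not the working group's figures):

    ```python
    from math import ceil
    from statistics import NormalDist

    def crt_clusters_per_arm(p0, p1, m, icc, alpha=0.05, power=0.80):
        """Clusters per arm for comparing two proportions, inflating the individually
        randomized sample size by the design effect 1 + (m - 1) * ICC."""
        z = NormalDist().inv_cdf
        z_a, z_b = z(1 - alpha / 2), z(power)
        p_bar = (p0 + p1) / 2
        n_individual = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p0 - p1) ** 2
        design_effect = 1 + (m - 1) * icc
        return ceil(n_individual * design_effect / m)

    # Assumed: 40% vs 28% annual incidence, 300 people per inner zone, ICC = 0.02
    print(crt_clusters_per_arm(p0=0.40, p1=0.28, m=300, icc=0.02))
    ```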

  7. A systematic review and meta-analysis of cognitive bias to food stimuli in people with disordered eating behaviour.

    PubMed

    Brooks, Samantha; Prince, Alexis; Stahl, Daniel; Campbell, Iain C; Treasure, Janet

    2011-02-01

    Maladaptive cognitions about food, weight and shape bias attention, memory and judgment and may be linked to disordered eating behaviour. This paper reviews information processing of food stimuli (words, pictures) in people with eating disorders (ED). PubMed, Ovid, ScienceDirect, PsychInfo, Web of Science, Cochrane Library and Google Scholar were searched to December 2009. 63 studies measured attention, memory and judgment bias towards food stimuli in women with ED. Stroop tasks had sufficient sample sizes for a meta-analysis, and effects ranged from small to medium. Other studies of attention bias had variable effects (e.g. the Dot-Probe task, distracter tasks and Startle Eyeblink Modulation). A meta-analysis of memory bias studies in ED and RE yielded a nonsignificant effect. Effect sizes for judgment bias ranged from negligible to large. People with ED have greater attentional bias to food stimuli than healthy controls (HC). Evidence for a memory and judgment bias in ED is limited. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Preparation of fine single crystals of magnetic superconductor RuSr2GdCu2O8-δ by partial melting

    NASA Astrophysics Data System (ADS)

    Yamaki, Kazuhiro; Bamba, Yoshihiro; Irie, Akinobu

    2018-03-01

    In this study, fine uniform RuSr2GdCu2O8-δ (RuGd-1212) single crystals have been successfully prepared by partial melting. The synthesis temperature could be lowered to a value not exceeding the decomposition temperature of RuGd-1212 by using the Sr-Gd-Cu-O flux. The crystals grown in alumina boats are cubic, which coincides with the result of a previous study of RuGd-1212 single crystals grown in platinum crucibles. The single crystals were up to 15 × 15 × 15 µm³ in size and their lattice constants were consistent with those of polycrystalline samples reported previously. Although the present size of the single crystals is not sufficient for measurements, the partial melting technique will be beneficial for future progress of research on RuGd-1212 single crystals. Appropriate nominal composition, sintering atmosphere, and temperature are essential factors for growing RuGd-1212 single crystals.

  9. Recovery of glass from the inert fraction refused by MBT plants in a pilot plant.

    PubMed

    Dias, Nilmara; Garrinhas, Inés; Maximo, Angela; Belo, Nuno; Roque, Paulo; Carvalho, M Teresa

    2015-12-01

    Selective collection is a common practice in many countries. However, even in some of those countries recyclable materials, such as packaging glass, are erroneously deposited in the Mixed Municipal Solid Waste (MMSW). In the present paper, a solution is proposed to recover glass from the inert reject of Mechanical and Biological Treatment (MBT) plants treating MMSW, with the aim of recycling it. The inert reject of MBT (MBTr) plants is characterized by its small particle size and high heterogeneity. The study was made with three real samples of diverse characteristics, imposed mainly by the different upstream MBT plants. One of the samples (VN) had a high content of organics (approximately 50%) and a particle size smaller than 16 mm. The other two were coarser and exhibited similar particle size distributions, but one (RE) was rich in glass (almost 70%) while the other (SD) contained about 40% glass. A flowsheet was developed integrating drying, to eliminate moisture related to organic matter contamination; magnetic separation, to separate remaining small ferrous particles; vacuum suction, to eliminate light materials; screening, to eliminate the finer fraction, which has an insignificant glass content, and to classify the >6 mm fraction into 6-16 mm and >16 mm fractions to be processed separately; separation by particle shape, in the RecGlass equipment specifically designed to eliminate stones; and optical sorting, to eliminate opaque materials. A pilot plant was built and the tests were conducted with the three samples separately. With all samples, it was possible to attain approximately 99% glass content in the glass products, but the recovery of glass was related to the feed particle size: the finer the feed, the lower the percentage of glass recovered in the glass product. The results show that each one of the separation processes was needed for product enrichment. The organic matter recovered in the glass product was high, ranging from 0.76% to 1.13%, showing that drying was not sufficient in the tests but that it is a key process for the success of the operation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A Mo-anode-based in-house source for small-angle X-ray scattering measurements of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruetzel, Linda K.; Fischer, Stefan; Salditt, Annalena

    2016-02-15

    We demonstrate the use of a molybdenum-anode-based in-house small-angle X-ray scattering (SAXS) setup to study biological macromolecules in solution. Our system consists of a microfocus X-ray tube delivering a highly collimated flux of 2.5 × 10⁶ photons/s at a beam size of 1.2 × 1.2 mm² at the collimation path exit and a maximum beam divergence of 0.16 mrad. The resulting observable scattering vectors q are in the range of 0.38 Å⁻¹ down to 0.009 Å⁻¹ in SAXS configuration and of 0.26 Å⁻¹ up to 5.7 Å⁻¹ in wide-angle X-ray scattering (WAXS) mode. To determine the capabilities of the instrument, we collected SAXS data on weakly scattering biological macromolecules including proteins and a nucleic acid sample with molecular weights varying from ∼12 to 69 kDa and concentrations of 1.5–24 mg/ml. The measured scattering data display a high signal-to-noise ratio up to q-values of ∼0.2 Å⁻¹ allowing for an accurate structural characterization of the samples. Moreover, the in-house source data are of sufficient quality to perform ab initio 3D structure reconstructions that are in excellent agreement with the available crystallographic structures. In addition, measurements for the detergent decyl-maltoside show that the setup can be used to determine the size, shape, and interactions (as characterized by the second virial coefficient) of detergent micelles. This demonstrates that the use of a Mo-anode based in-house source is sufficient to determine basic geometric parameters and 3D shapes of biomolecules and presents a viable alternative to valuable beam time at third generation synchrotron sources.

  11. The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance

    PubMed Central

    Gajewski, Daniel A.; Wallin, Courtney P.; Philbeck, John W.

    2016-01-01

    Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers. PMID:27398065
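
    The geometric relation that makes angular direction informative for floor-level targets: with eye height h and angular declination θ below eye level, the distance along the floor is d = h / tan(θ). A small worked example with an assumed eye height:

    ```python
    from math import tan, radians

    def distance_from_declination(eye_height_m, declination_deg):
        """Distance along the floor to a target seen at a given angular declination
        below eye level (assumes a flat floor and a level gaze reference)."""
        return eye_height_m / tan(radians(declination_deg))

    # Assumed observer eye height of 1.6 m
    for angle in (10, 20, 30):
        print(angle, "deg ->", round(distance_from_declination(1.6, angle), 2), "m")
    ```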

  12. One-chip biosensor for simultaneous disease marker/calibration substance measurement in human urine by electrochemical surface plasmon resonance method.

    PubMed

    Nakamoto, Kohei; Kurita, Ryoji; Niwa, Osamu

    2010-12-15

    We have developed a miniaturized electrochemical surface plasmon resonance biosensor for measuring two biomolecules of very different molecular size: transferrin (MW = 75 kDa), a disease marker protein, and creatinine (MW = 113), a calibration marker for the accurate measurement of human urinary samples. The sensor has a PDMS-based microchannel that is 2 mm wide and 20 μm deep. Two gold films were integrated in the microchannel; one was modified with anti-transferrin antibody for the immuno-reaction, and the other was modified with osmium-poly-vinylpyridine wired horseradish peroxidase (Os-gel-HRP). We further immobilized a tri-enzyme layer of creatininase, creatinase and sarcosine oxidase in order to measure creatinine by converting it to hydrogen peroxide in the upstream channel. We measured the transferrin concentration from the refractive index change involved in immuno-complex formation, and simultaneously measured creatinine via the refractive index change in the Os-gel-HRP caused by oxidation with the hydrogen peroxide produced from creatinine by the tri-enzyme layer. The effects of ascorbic acid and uric acid in urine samples were sufficiently eliminated by adding ascorbate oxidase and uricase to the urine samples during sampling. We were able to measure both analyte concentrations within 15 min by one simple injection of 50 μL of diluted human urine into our sensor. The detectable transferrin and creatinine ranges were 20 ng/mL to 10 μg/mL, and 10 μM to 10 mM, respectively, which are sufficient levels for clinical tests. Finally, we compared the results obtained using our sensor with those obtained with a conventional immunoassay and the Jaffe method. We obtained a similar trend, and calibrating against the creatinine concentration reduced the fluctuation in the urinary transferrin concentration across three different samples. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
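
    For reference, the sample statistic itself, computed directly from an observations × items matrix of simulated data; this gives only the point estimate, not the exact distribution or confidence intervals derived in the paper:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (n_observations, k_items) score matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(3)
    true_score = rng.normal(size=(100, 1))                     # 100 observations
    items = true_score + rng.normal(scale=0.8, size=(100, 6))  # 6 correlated items
    print(round(cronbach_alpha(items), 3))
    ```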

  14. Enabling two-dimensional fourier transform electronic spectroscopy on quantum dots

    NASA Astrophysics Data System (ADS)

    Hill, Robert John, Jr.

    Colloidal semiconductor nanocrystals exhibit unique properties not seen in their bulk counterparts. Quantum confinement of carriers causes a size-tunable bandgap, making them attractive candidates for solar cells. Fundamental understanding of their spectra and carrier dynamics is obscured by inhomogeneous broadening arising from the size distribution. Because quantum dots have long excited state lifetimes and are sensitive to both air and moisture, there are many potential artifacts in femtosecond experiments. Two-dimensional electronic spectroscopy promises insight into the photo-physics, but required key instrumental advances. Optics that can process a broad bandwidth without distortion are required for a two-dimensional optical spectrometer. To control pathlength differences for femtosecond time delays, hollow retro-reflectors are used on actively stabilized delay lines in interferometers. The fabrication of rigid, lightweight, precision hollow rooftop retroreflectors that allow beams to be stacked while preserving polarization is described. The rigidity and low mass enable active stabilization of an interferometer to within 0.6 nm rms displacement, while the return beam deviation is sufficient for Fourier transform spectroscopy with a frequency precision of better than 1 cm⁻¹. Keeping samples oxygen and moisture free while providing fresh sample between laser shots is challenging in an interferometer. A low-vibration spinning sample cell was designed and built to keep samples oxygen free for days while allowing active stabilization of interferometer displacement to ∼1 nm. Combining these technologies has enabled 2D short-wave infrared spectroscopy on colloidal PbSe nanocrystals. 2D spectra demonstrate the advantages of this key instrumentation while providing valuable insight into the low-lying electronic states of colloidal quantum dots.

  15. HPSEC reveals ubiquitous components in fluorescent dissolved organic matter across aquatic ecosystems

    NASA Astrophysics Data System (ADS)

    Wünsch, Urban; Murphy, Kathleen; Stedmon, Colin

    2017-04-01

    Absorbance and fluorescence spectroscopy are efficient tools for tracing the supply, turnover and fate of dissolved organic matter (DOM). The fluorescent fraction of DOM (FDOM) can be characterized by measuring excitation-emission matrices and decomposing the combined fluorescence signal into independent underlying fractions using Parallel Factor Analysis (PARAFAC). Comparisons between studies, facilitated by the OpenFluor database, reveal highly similar components across different aquatic systems and between studies. To obtain PARAFAC models of sufficient quality, scientists traditionally rely on analyzing dozens to hundreds of samples spanning environmental gradients. A cross-validation of this approach using different analytical tools has not yet been accomplished. In this study, we applied high-performance size-exclusion chromatography (HPSEC) to characterize the size-dependent optical properties of dissolved organic matter in samples from contrasting aquatic environments, with online absorbance and fluorescence detectors. Each sample produced hundreds of absorbance spectra of colored DOM (CDOM) and hundreds of matrices of FDOM intensities. This approach facilitated the detailed study of CDOM spectral slopes and further allowed the reliable implementation of PARAFAC on individual samples. This revealed a high degree of overlap in the spectral properties of components identified from different sites. Moreover, many of the model components showed significant spectral congruence with spectra in the OpenFluor database. Our results provide evidence of the presence of ubiquitous FDOM components and additionally provide further evidence for the supramolecular assembly hypothesis. They demonstrate the potential for HPSEC to provide a wealth of new insights into the relationship between the optical and chemical properties of DOM.

  16. Comparing the Chlorine Disinfection of Detached Biofilm Clusters with Those of Sessile Biofilms and Planktonic Cells in Single- and Dual-Species Cultures ▿ †

    PubMed Central

    Behnke, Sabrina; Parker, Albert E.; Woodall, Dawn; Camper, Anne K.

    2011-01-01

    Although the detachment of cells from biofilms is of fundamental importance to the dissemination of organisms in both public health and clinical settings, the disinfection efficacies of commonly used biocides on detached biofilm particles have not been investigated. Therefore, the question arises whether cells in detached aggregates can be killed with disinfectant concentrations sufficient to inactivate planktonic cells. Burkholderia cepacia and Pseudomonas aeruginosa were grown in standardized laboratory reactors as single species and in coculture. Cluster size distributions in chemostats and biofilm reactor effluent were measured. Chlorine susceptibility was assessed for planktonic cultures, attached biofilm, and particles and cells detached from the biofilm. Disinfection tolerance generally increased with a higher percentage of larger cell clusters in the chemostat and detached biofilm. Samples with a lower percentage of large clusters were more easily disinfected. Thus, disinfection tolerance depended on the cluster size distribution rather than sample type for chemostat and detached biofilm. Intact biofilms were more tolerant to chlorine independent of species. Homogenization of samples led to significantly increased susceptibility in all biofilm samples as well as detached clusters for single-species B. cepacia, B. cepacia in coculture, and P. aeruginosa in coculture. The disinfection efficacy was also dependent on species composition; coculture was advantageous to the survival of both species when grown as a biofilm or as clusters detached from biofilm but, surprisingly, resulted in a lower disinfection tolerance when they were grown as a mixed planktonic culture. PMID:21856824

  17. Deactivation of Escherichia coli by the plasma needle

    NASA Astrophysics Data System (ADS)

    Sladek, R. E. J.; Stoffels, E.

    2005-06-01

    In this paper we present a parameter study on deactivation of Escherichia coli (E. coli) by means of a non-thermal plasma (plasma needle). The plasma needle is a small-sized (1 mm) atmospheric glow sustained by radio-frequency excitation. This plasma will be used to disinfect heat-sensitive objects; one of the intended applications is in vivo deactivation of dental bacteria: destruction of plaque and treatment of caries. We use E. coli films plated on agar dishes as a model system to optimize the conditions for bacterial destruction. Plasma power, treatment time and needle-to-sample distance are varied. Plasma treatment of E. coli films results in formation of a bacteria-free void with a size up to 12 mm. 10⁴-10⁵ colony forming units are already destroyed after 10 s of treatment. Prolongation of treatment time and usage of high powers do not significantly improve the destruction efficiency: short exposure at low plasma power is sufficient. Furthermore, we study the effects of temperature increase on the survival of E. coli and compare it with thermal effects of the plasma. The population of E. coli heated in a warm water bath starts to decrease at temperatures above 40°C. Sample temperature during plasma treatment has been monitored. The temperature can reach up to 60°C at high plasma powers and short needle-to-sample distances. However, thermal effects cannot account for bacterial destruction at low power conditions. For safe and efficient in vivo disinfection, the sample temperature should be kept low. Thus, plasma power and treatment time should not exceed 150 mW and 60 s, respectively.

  18. Evaluation of IOM personal sampler at different flow rates.

    PubMed

    Zhou, Yue; Cheng, Yung-Sung

    2010-02-01

    The Institute of Occupational Medicine (IOM) personal sampler is usually operated at a flow rate of 2.0 L/min, the rate at which it was designed and calibrated, for sampling the inhalable mass fraction of airborne particles in occupational environments. In an environment of low aerosol concentrations, only small amounts of material are collected, and that may not be sufficient for analysis. Recently, a new sampling pump with a flow rate up to 15 L/min became available for personal samplers, with the potential of operating at higher flow rates. The flow rate of a Leland Legacy sampling pump, which operates at high flow rates, was evaluated and calibrated, and its maximum flow was found to be 10.6 L/min. IOM samplers were placed on a mannequin, and sampling was conducted in a large aerosol wind tunnel at wind speeds of 0.56 and 2.22 m/s. Monodisperse aerosols of oleic acid tagged with sodium fluorescein in the size range of 2 to 100 µm were used in the test. The IOM samplers were operated at flow rates of 2.0 and 10.6 L/min. Results showed that the IOM samplers mounted in the front of the mannequin had a higher sampling efficiency than those mounted at the side and back, regardless of the wind speed and flow rate. For the wind speed of 0.56 m/s, the direction-averaged (the average value of all orientations facing the wind direction) sampling efficiency of the samplers operated at 2.0 L/min was slightly higher than that of 10.6 L/min. For the wind speed of 2.22 m/s, the sampling efficiencies at both flow rates were similar for particles < 60 µm. The results also show that the IOM's sampling efficiency at these two different flow rates follows the inhalable mass curve for particles in the size range of 2 to 20 µm. The test results indicate that the IOM sampler can be used at higher flow rates.
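    The inhalable mass curve referred to above is conventionally the ACGIH/ISO/CEN inhalable convention, IPM(d) = 0.5(1 + e^(-0.06 d)) for aerodynamic diameters d up to 100 µm. The sketch below evaluates that convention and compares it against placeholder sampler efficiencies; the numbers are illustrative, not the study's data.

```python
# Inhalable particulate mass (IPM) convention, IPM(d) = 0.5 * (1 + exp(-0.06 d)),
# valid for aerodynamic diameters d up to 100 micrometres (ACGIH/ISO/CEN convention).
# The example efficiencies below are made-up placeholders, not the study's data.
import numpy as np

def inhalable_fraction(d_um):
    """Fraction of ambient particles of aerodynamic diameter d_um (µm) that is inhalable."""
    d = np.asarray(d_um, dtype=float)
    return 0.5 * (1.0 + np.exp(-0.06 * d))

diameters = np.array([2, 5, 10, 20, 40, 60, 100])                        # µm
measured_efficiency = np.array([0.99, 0.95, 0.90, 0.75, 0.60, 0.50, 0.40])  # placeholders

for d, eff in zip(diameters, measured_efficiency):
    print(f"d = {d:5.1f} µm  convention = {inhalable_fraction(d):.2f}  sampler = {eff:.2f}")
```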

  19. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    PubMed

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  20. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    PubMed Central

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  1. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    PubMed

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
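    A minimal sketch of how ODBA is commonly computed from tri-axial accelerometer data: a running mean per axis approximates the static (gravitational) component, and the absolute dynamic residuals are summed. The sampling rate, smoothing window, and synthetic signal below are assumptions for illustration, not the authors' settings.

```python
# ODBA sketch: separate static (gravitational) and dynamic acceleration with a
# running mean per axis, then sum absolute dynamic components. Sampling rate and
# smoothing window are illustrative; the paper's settings may differ.
import numpy as np

def odba(acc_xyz, fs_hz, window_s=2.0):
    """acc_xyz: (n_samples, 3) array in g; returns ODBA per sample (g)."""
    win = max(1, int(round(window_s * fs_hz)))
    kernel = np.ones(win) / win
    static = np.column_stack(
        [np.convolve(acc_xyz[:, i], kernel, mode="same") for i in range(3)]
    )
    dynamic = acc_xyz - static
    return np.abs(dynamic).sum(axis=1)

fs = 10.0  # Hz; should exceed twice the maximum tail-beat frequency (Nyquist)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic signal: gravity on z, a 2 Hz tail-beat signature on x, plus noise.
acc = np.column_stack([
    0.3 * np.sin(2 * np.pi * 2.0 * t) + 0.02 * rng.standard_normal(t.size),
    0.02 * rng.standard_normal(t.size),
    1.0 + 0.02 * rng.standard_normal(t.size),
])
print("mean ODBA:", odba(acc, fs).mean())
```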

  2. Forensic implications of the variation in morphology of marginal serrations on the teeth of the great white shark.

    PubMed

    Nambiar, P; Brown, K A; Bridges, T E

    1996-06-01

    The teeth of the Great White Shark have been examined to ascertain whether there is any commonality in the arrangement or number of the marginal serrations (peaks) or, indeed, whether individual sharks have a unique pattern of shapes or size of the peaks. The teeth of the White Shark are characteristic in size and shape with serrations along almost the entire mesial and distal margins. This study has revealed no consistent pattern of size or arrangement of the marginal serrations that was sufficiently characteristic within an individual shark to serve as a reliable index of identification of a tooth as originating from that particular shark. Nonetheless, the serrations are sufficiently distinctive to enable the potential identification of an individual tooth as having been the cause of a particular bitemark.

  3. Estimating relative decline in populations of subterranean termites (Isoptera: Rhinotermitidae) due to baiting.

    PubMed

    Evans, T A

    2001-12-01

    Although mark-recapture protocols produce inaccurate population estimates of termite colonies, they might be employed to estimate a relative change in colony size. This possibility was tested using two Australian, mound-building, wood-eating, subterranean Coptotermes species. Three different toxicants delivered in baits were used to decrease (but not eliminate) colony size, and a single mark-recapture protocol was used to estimate pre- and postbaiting population sizes. For both species, the numbers of termites retrieved from bait stations varied widely, resulting in no significant differences in the numbers of termites sampled between treatments in either the pre- or postbaiting protocols. There were significantly fewer termites sampled in all treatments, controls included, in the postbaiting protocol compared with the prebaiting protocol, suggesting a seasonal change in forager numbers. The comparison of population estimates shows a large decrease in toxicant treated colonies compared with little change in control colonies, which suggests that estimating the relative decline in population size using mark-recapture protocols might be possible. However, the change in population estimate was due entirely to the significantly lower recapture rate in the control colonies relative to the toxicant treated colonies, as numbers of unmarked termites did not change between treatments. The population estimates should be treated with caution because low recapture rates produce dubious population estimates and, in some cases, postbaiting mark-recapture population estimates could be much greater than those at prebaiting, despite consumption of bait in sufficient quantities to cause population decline. A possible interaction between fat-stain markers and toxicants should be investigated if mark-recapture population estimates are used. Alternative methods of estimating population change are advised, along with other indirect measures.
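    For context, the single mark-recapture estimate referred to above is typically of the Lincoln-Petersen type; the sketch below uses Chapman's bias-corrected form with placeholder counts to show how pre- and post-baiting estimates would be compared. It is an illustration of the estimator, not the study's data.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator, the standard single
# mark-recapture population estimate. Counts below are placeholders, not the
# study's data; the paper cautions that low recapture rates make such
# estimates unreliable.
def chapman_estimate(marked_released, captured, recaptured):
    """Return estimated population size N_hat."""
    M, C, R = marked_released, captured, recaptured
    return (M + 1) * (C + 1) / (R + 1) - 1

pre  = chapman_estimate(marked_released=2000, captured=1500, recaptured=60)
post = chapman_estimate(marked_released=2000, captured=1200, recaptured=90)
print(f"pre-baiting  N = {pre:.0f}")
print(f"post-baiting N = {post:.0f}")
print(f"relative change = {(post - pre) / pre:+.1%}")
```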

  4. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.

    PubMed

    Harrar, Solomon W; Kong, Xiaoli

    2015-03-01

    In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have comparable power with a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results.

  5. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    PubMed Central

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have comparable power with a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  6. Exposure to particulate matter in a mosque

    NASA Astrophysics Data System (ADS)

    Ocak, Yılmaz; Kılıçvuran, Akın; Eren, Aykut Balkan; Sofuoglu, Aysun; Sofuoglu, Sait C.

    2012-09-01

    Indoor air quality in mosques during prayers may be of concern for sensitive/susceptible sub-groups of the population. However, no indoor air pollutant levels of potentially toxic agents in mosques have been reported in the literature. This study measured PM concentrations in a mosque on Friday when the mid-day prayer always receives high attendance. Particle number and CO2 concentrations were measured on nine sampling days in three different campaigns before, during, and after prayer under three different cleaning schedules: vacuuming a week before, a day before, and on the morning of the prayer. In addition, daily PM2.5 concentrations were measured. Number concentrations in 0.5-1.0, 1.0-5.0, and > 5.0 μm diameter size ranges were monitored. In all campaigns the maximum number concentrations were observed on the most crowded days. The lowest number concentrations occurred when vacuuming was performed a day before the prayer day in two of the three size ranges considered. PM2.5 concentrations (four-hour samples that integrated before, during, and after the prayer) were comparable to the other indoor environments reported in the literature. CO2 concentrations suggested that ventilation was not sufficient in the mosque during the prayers. The results showed that better ventilation, a preventive cleaning strategy, and a more detailed study are needed.

  7. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). As is well known, bioequivalence studies are usually conducted with a crossover design. However, when the half-life of a drug is long, a parallel design for the bioequivalence study may be preferred. In this study, a two-sided interval estimation approach - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
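    A rough normal-approximation sketch of the sample-size idea: choose the smallest n per arm such that the probability that the 100(1 - 2α)% confidence interval for the log-mean difference falls inside (-0.223, 0.223) reaches the target power. This uses the usual TOST-style asymptotic power with a known σ, not the authors' Satterthwaite, Cochran-Cox, or Howe constructions; all inputs are illustrative.

```python
# Sample size for parallel-group average bioequivalence via a normal
# approximation: find the smallest n per arm such that the probability that a
# 100(1 - 2*alpha)% CI for the log-mean difference lies inside (-0.223, 0.223)
# reaches the target power. True difference, sigma, alpha, and power are illustrative.
from scipy.stats import norm

def abe_power(n_per_arm, sigma, true_diff=0.0, limit=0.223, alpha=0.05):
    se = sigma * (2.0 / n_per_arm) ** 0.5
    z = norm.ppf(1 - alpha)
    # P(both one-sided tests reject), normal approximation
    power = (norm.cdf((limit - true_diff) / se - z)
             + norm.cdf((limit + true_diff) / se - z) - 1.0)
    return max(power, 0.0)

def abe_sample_size(sigma, true_diff=0.0, target=0.80, alpha=0.05):
    n = 2
    while abe_power(n, sigma, true_diff, alpha=alpha) < target:
        n += 1
    return n

print(abe_sample_size(sigma=0.3, true_diff=0.05))  # n per arm (illustrative inputs)
```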

  8. Update on the effects of graded motor imagery and mirror therapy on complex regional pain syndrome type 1: A systematic review.

    PubMed

    Méndez-Rebolledo, Guillermo; Gatica-Rojas, Valeska; Torres-Cueco, Rafael; Albornoz-Verdugo, María; Guzmán-Muñoz, Eduardo

    2017-01-01

    Graded motor imagery (GMI) and mirror therapy (MT) are thought to improve pain in patients with complex regional pain syndrome (CRPS) types 1 and 2. However, the evidence is limited and analyses are not independent between types of CRPS. The purpose of this review was to analyze the effects of GMI and MT on pain in independent groups of patients with CRPS types 1 and 2. Searches for literature published between 1990 and 2016 were conducted in databases. Randomized controlled trials that compared GMI or MT with other treatments for CRPS types 1 and 2 were included. Six articles met the inclusion criteria and were classified from moderate to high quality. The total sample was composed of 171 participants with CRPS type 1. Three studies presented GMI with 3 components and three studies used MT alone. The studies were heterogeneous in terms of sample size and the disorders that triggered CRPS type 1. There were no trials that included participants with CRPS type 2. GMI and MT can improve pain in patients with CRPS type 1; however, there is not sufficient evidence to recommend these therapies over other treatments given the small size and heterogeneity of the studied population.

  9. Selection criteria for forested natural areas in New England, USA

    Treesearch

    William B. Leak; Mariko Yamasaki; Marie-Louise Smith; David T. Funk

    1994-01-01

    The selection of forested natural areas for research and educational purposes is discussed. Five factors are important: sufficient size; representation of typical communities and sites; documented disturbance histories; acceptable current condition in terms of age, tree size, and successional stage; and administrative feasibility.

  10. Colonization of a territory by a stochastic population under a strong Allee effect and a low immigration pressure

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Assaf, Michael; Meerson, Baruch

    2015-06-01

    We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average precolonization population size is small, thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate nonzero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasistationary probability distribution of population sizes in the precolonization state and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.

  11. Colonization of a territory by a stochastic population under a strong Allee effect and a low immigration pressure.

    PubMed

    Be'er, Shay; Assaf, Michael; Meerson, Baruch

    2015-06-01

    We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average precolonization population size is small, thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate nonzero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasistationary probability distribution of population sizes in the precolonization state and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.

  12. Deforestation and stream warming affect body size of Amazonian fishes.

    PubMed

    Ilha, Paulo; Schiesari, Luis; Yanagawa, Fernando I; Jankowski, KathiJo; Navas, Carlos A

    2018-01-01

    Declining body size has been suggested to be a universal response of organisms to rising temperatures, manifesting at all levels of organization and in a broad range of taxa. However, no study to date evaluated whether deforestation-driven warming could trigger a similar response. We studied changes in fish body size, from individuals to assemblages, in streams in Southeastern Amazonia. We first conducted sampling surveys to validate the assumption that deforestation promoted stream warming, and to test the hypothesis that warmer deforested streams had reduced fish body sizes relative to cooler forest streams. As predicted, deforested streams were up to 6 °C warmer and had fish 36% smaller than forest streams on average. This body size reduction could be largely explained by the responses of the four most common species, which were 43-55% smaller in deforested streams. We then conducted a laboratory experiment to test the hypothesis that stream warming as measured in the field was sufficient to cause a growth reduction in the dominant fish species in the region. Fish reared at forest stream temperatures gained mass, whereas those reared at deforested stream temperatures lost mass. Our results suggest that deforestation-driven stream warming is likely to be a relevant factor promoting observed body size reductions, although other changes in stream conditions, like reductions in organic matter inputs, can also be important. A broad scale reduction in fish body size due to warming may be occurring in streams throughout the Amazonian Arc of Deforestation, with potential implications for the conservation of Amazonian fish biodiversity and food supply for people around the Basin.

  13. Deforestation and stream warming affect body size of Amazonian fishes

    PubMed Central

    Yanagawa, Fernando I.; Jankowski, KathiJo; Navas, Carlos A.

    2018-01-01

    Declining body size has been suggested to be a universal response of organisms to rising temperatures, manifesting at all levels of organization and in a broad range of taxa. However, no study to date evaluated whether deforestation-driven warming could trigger a similar response. We studied changes in fish body size, from individuals to assemblages, in streams in Southeastern Amazonia. We first conducted sampling surveys to validate the assumption that deforestation promoted stream warming, and to test the hypothesis that warmer deforested streams had reduced fish body sizes relative to cooler forest streams. As predicted, deforested streams were up to 6 °C warmer and had fish 36% smaller than forest streams on average. This body size reduction could be largely explained by the responses of the four most common species, which were 43–55% smaller in deforested streams. We then conducted a laboratory experiment to test the hypothesis that stream warming as measured in the field was sufficient to cause a growth reduction in the dominant fish species in the region. Fish reared at forest stream temperatures gained mass, whereas those reared at deforested stream temperatures lost mass. Our results suggest that deforestation-driven stream warming is likely to be a relevant factor promoting observed body size reductions, although other changes in stream conditions, like reductions in organic matter inputs, can also be important. A broad scale reduction in fish body size due to warming may be occurring in streams throughout the Amazonian Arc of Deforestation, with potential implications for the conservation of Amazonian fish biodiversity and food supply for people around the Basin. PMID:29718960

  14. Two-sample binary phase 2 trials with low type I error and low sample size.

    PubMed

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A

    2017-04-30

    We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
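    The local type I error of a combined rule of this kind can be checked by direct binomial enumeration under the null. The sketch below does this for a single look with 2:1 randomization; the arm sizes and thresholds are placeholders, not the designs reported in the paper.

```python
# Exact probability that a combined rule (E >= m and E - C > r) rejects when
# both arms share success probability p0, for a single look with 2:1
# randomization. Arm sizes and thresholds are placeholders, not the paper's designs.
from scipy.stats import binom

def reject_probability(p_exp, p_ctrl, n_exp, n_ctrl, m, r):
    prob = 0.0
    for e in range(n_exp + 1):
        for c in range(n_ctrl + 1):
            if e >= m and e - c > r:
                prob += binom.pmf(e, n_exp, p_exp) * binom.pmf(c, n_ctrl, p_ctrl)
    return prob

n_ctrl, n_exp = 20, 40          # 2:1 randomization (placeholder sizes)
m, r = 14, 6                    # placeholder thresholds
p0, p1 = 0.20, 0.40
print("type I error (local, at p0):", reject_probability(p0, p0, n_exp, n_ctrl, m, r))
print("power at p1:               ", reject_probability(p1, p0, n_exp, n_ctrl, m, r))
```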

  15. Development of a Standardized Approach for Environmental Microbiota Investigations related to Asthma Development in Children

    PubMed Central

    Fujimura, Kei E.; Rauch, Marcus; Matsui, Elizabeth; Iwai, Shoko; Calatroni, Agustin; Lynn, Henry; Mitchell, Herman; Johnson, Christine C.; Gern, James E.; Togias, Alkis; Boushey, Homer A.; Kennedy, Suzanne; Lynch, Susan V.

    2013-01-01

    Standardized studies examining environmental microbial exposure in populations at risk for asthma are necessary to improve our understanding of the role this factor plays in disease development. Here we describe studies aimed at developing guidelines for high-resolution culture-independent microbiome profiling, using a phylogenetic microarray (PhyloChip), of house dust samples in a cohort collected as part of the NIH-funded Inner City Asthma Consortium (ICAC). We demonstrate that though extracted DNA concentrations varied across dust samples, the majority produced sufficient 16S rRNA to be profiled by the array. Comparison of array and 454-pyrosequencing performed in parallel on a subset of samples illustrated that increasingly deeper sequencing efforts validated greater numbers of array-detected taxa. Community composition agreement across samples exhibited a hierarchy in concordance, with the highest level of agreement in replicate array profiles, followed by samples collected from adjacent 1 × 1 m² sites in the same room, adjacent sites with different sized sampling quadrants (1 × 1 and 2 × 2 m²), and different sites within homes (living and bedroom), to lowest in living room samples collected from different homes. The guidelines for sample collection and processing in this pilot study extend beyond PhyloChip-based studies of house-associated microbiota, and bear relevance for other microbiome profiling approaches such as next-generation sequencing. PMID:22975469

  16. Electrochemical sensing of total antioxidant capacity and polyphenol content in wine samples using amperometry online-coupled with microdialysis.

    PubMed

    Jakubec, Petr; Bancirova, Martina; Halouzka, Vladimir; Lojek, Antonin; Ciz, Milan; Denev, Petko; Cibicek, Norbert; Vacek, Jan; Vostalova, Jitka; Ulrichova, Jitka; Hrbac, Jan

    2012-08-15

    This work describes the method for total antioxidant capacity (TAC) and/or total content of phenolics (TCP) analysis in wines using microdialysis online-coupled with amperometric detection using a carbon microfiber working electrode. The system was tested on 10 selected wine samples, and the results were compared with total reactive antioxidant potential (TRAP), oxygen radical absorbance capacity (ORAC), and chemiluminescent determination of total antioxidant capacity (CL-TAC) methods using Trolox and catechin as standards. Microdialysis online-coupled with amperometric detection gives similar results to the widely used cyclic voltammetry methodology and closely correlates with ORAC and TRAP. The problem of electrode fouling is overcome by the introduction of an electrochemical cleaning step (1-2 min at the potential of 0 V vs Ag/AgCl). Such a procedure is sufficient to fully regenerate the electrode response for both red and white wine samples as well as catechin/Trolox standards. The appropriate size of microdialysis probes enables easy automation of the electrochemical TAC/TCP measurement using 96-well microtitration plates.

  17. Headspace concentrations of explosive vapors in containers designed for canine testing and training: theory, experiment, and canine trials.

    PubMed

    Lotspeich, Erica; Kitts, Kelley; Goodpaster, John

    2012-07-10

    It is a common misconception that the amount of explosive is the chief contributor to the quantity of vapor that is available to trained canines. In fact, this quantity (known as odor availability) depends not only on the amount of explosive material, but also the container volume, explosive vapor pressure and temperature. In order to better understand odor availability, headspace experiments were conducted and the results were compared to theory. The vapor-phase concentrations of three liquid explosives (nitromethane, nitroethane and nitropropane) were predicted using the Ideal Gas Law for containers of various volumes that are in use for canine testing. These predictions were verified through experiments that varied the amount of sample, the container size, and the temperature. These results demonstrated that the amount of sample that is needed to saturate different sized containers is small, predictable and agrees well with theory. In general, and as expected, once the headspace of a container is saturated, any subsequent increase in sample volume will not result in the release of more vapors. The ability of canines to recognize and alert to differing amounts of nitromethane has also been studied. In particular, it was found that the response of trained canines is independent of the amount of nitromethane present, provided it is a sufficient quantity to saturate the container in which it is held. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
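    The headspace saturation argument follows directly from the ideal gas law, n = P_vap·V/(RT). The sketch below estimates the mass of nitromethane needed to saturate containers of different volumes; the vapor pressure used is an approximate literature value and the container sizes are illustrative.

```python
# Ideal-gas estimate of the nitromethane mass needed to saturate a container's
# headspace: n = P_vap * V / (R * T), m = n * M. The vapor pressure (~3.7 kPa at
# 20 C) is an approximate literature value; container volumes are illustrative.
R = 8.314                    # J mol^-1 K^-1
T = 293.15                   # K (20 C)
M_NITROMETHANE = 61.04e-3    # kg/mol
P_VAP = 3.7e3                # Pa, approximate saturation vapor pressure at 20 C

for volume_mL in (10, 100, 1000):
    V = volume_mL * 1e-6                      # m^3
    n_mol = P_VAP * V / (R * T)
    mass_mg = n_mol * M_NITROMETHANE * 1e6
    print(f"{volume_mL:5d} mL container: ~{mass_mg:.2f} mg saturates the headspace")
```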

  18. Morphometric variation in the papionin muzzle and the biochronology of the South African Plio-Pleistocene karst cave deposits.

    PubMed

    Gilbert, Christopher C; Grine, Frederick E

    2010-03-01

    Papionin monkeys are widespread, relatively common members of Plio-Pleistocene faunal assemblages across Africa. For these reasons, papionin taxa have been used as biochronological indicators by which to infer the ages of the South African karst cave deposits. A recent morphometric study of South African fossil papionin muzzle shape concluded that its variation attests to a substantial and greater time depth for these sites than is generally estimated. This inference is significant, because accurate dating of the South African cave sites is critical to our knowledge of hominin evolution and mammalian biogeographic history. We here report the results of a comparative analysis of extant papionin monkeys by which variability of the South African fossil papionins may be assessed. The muzzles of 106 specimens representing six extant papionin genera were digitized and interlandmark distances were calculated. Results demonstrate that the overall amount of morphological variation present within the fossil assemblage fits comfortably within the range exhibited by the extant sample. We also performed a statistical experiment to assess the limitations imposed by small sample sizes, such as typically encountered in the fossil record. Results suggest that 15 specimens are sufficient to accurately represent the population mean for a given phenotype, but small sample sizes are insufficient to permit the accurate estimation of the population standard deviation, variance, and range. The suggestion that the muzzle morphology of fossil papionins attests to a considerable and previously unrecognized temporal depth of the South African karst cave sites is unwarranted.
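    A sketch of the kind of resampling experiment described: repeatedly draw samples of size n and compare how precisely the mean versus the standard deviation of a measurement is recovered. The underlying distribution is synthetic, so the output illustrates the qualitative point rather than reproducing the fossil analysis.

```python
# Resampling sketch: how precisely do samples of size n recover the mean and
# the standard deviation of a measurement? The "population" here is a synthetic
# normal distribution; it only illustrates the kind of experiment described.
import numpy as np

rng = np.random.default_rng(42)
true_mean, true_sd = 50.0, 5.0          # e.g., a muzzle measurement in mm (illustrative)

for n in (5, 10, 15, 30, 60):
    means, sds = [], []
    for _ in range(2000):
        sample = rng.normal(true_mean, true_sd, size=n)
        means.append(sample.mean())
        sds.append(sample.std(ddof=1))
    rel_err_mean = np.mean(np.abs(np.array(means) - true_mean)) / true_mean
    rel_err_sd = np.mean(np.abs(np.array(sds) - true_sd)) / true_sd
    print(f"n={n:3d}  mean rel. error: {rel_err_mean:.3f}   SD rel. error: {rel_err_sd:.3f}")
```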

  19. Focus of a multilayer Laue lens with an aperture of 102 microns determined by ptychography at beamline 1-BM at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Macrander, Albert; Wojcik, Michael; Maser, Jörg; Bouet, Nathalie; Conley, Raymond

    2017-09-01

    Ptychography was used to determine the focus of a Multilayer-Laue-Lens (MLL) at beamline 1-BM at the Advanced Photon Source (APS). The MLL had a record aperture of 102 microns with 15170 layers. The measurements were made at 12 keV. The focal length was 9.6 mm, and the outer-most zone was 4 nm thick. MLLs with ever larger apertures are under continuous development since ever longer focal lengths, ever larger working distances, and ever increased flux in the focus are desired. A focus size of 25 nm was determined by ptychographic phase retrieval from a gold grating sample with 1 micron lines and spaces over 3.0 microns horizontal distance. The MLL was set to focus in the horizontal plane of the bending magnet beamline. A CCD with 13.0 micron pixel size positioned 1.13 m downstream of the sample was used to collect the transmitted intensity distribution. The beam incident on the MLL covered the whole 102 micron aperture in the horizontal focusing direction and 20 microns in the vertical direction. 160 iterations of the difference map algorithm were sufficient to obtain a reconstructed image of the sample. The present work highlights the utility of a bending magnet source at the APS for performing coherence-based experiments. Use of ptychography at 1-BM on MLL optics opens the way to study diffraction-limited imaging of other hard x-ray optics.

  20. Ancient DNA from marine mammals: studying long-lived species over ecological and evolutionary timescales.

    PubMed

    Foote, Andrew D; Hofreiter, Michael; Morin, Phillip A

    2012-01-20

    Marine mammals have long generation times and broad, difficult to sample distributions, which makes inferring evolutionary and demographic changes using field studies of extant populations challenging. However, molecular analyses from sub-fossil or historical materials of marine mammals such as bone, tooth, baleen, skin, fur, whiskers and scrimshaw using ancient DNA (aDNA) approaches provide an opportunity for investigating such changes over evolutionary and ecological timescales. Here, we review the application of aDNA techniques to the study of marine mammals. Most of the studies have focused on detecting changes in genetic diversity following periods of exploitation and environmental change. To date, these studies have shown that even small sample sizes can provide useful information on historical genetic diversity. Ancient DNA has also been used in investigations of changes in distribution and range of marine mammal species; we review these studies and discuss the limitations of such 'presence only' studies. Combining aDNA data with stable isotopes can provide further insights into changes in ecology and we review past studies and suggest future potential applications. We also discuss studies reconstructing inter- and intra-specific phylogenies from aDNA sequences and discuss how aDNA sequences could be used to estimate mutation rates. Finally, we highlight some of the problems of aDNA studies on marine mammals, such as obtaining sufficient sample sizes and calibrating for the marine reservoir effect when radiocarbon-dating such wide-ranging species. Copyright © 2011 Elsevier GmbH. All rights reserved.

  1. Learning and Job Satisfaction. Symposium.

    ERIC Educational Resources Information Center

    2002

    This symposium is comprised of three papers on learning and job satisfaction. "The Relationship Between Workplace Learning and Job Satisfaction in United States Small to Mid-Sized Businesses" (Robert W. Rowden) reports findings that revealed sufficient evidence to conclude that learning is pervasive in the small to mid-sized businesses…

  2. 24 CFR 984.105 - Minimum program size.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 24, Housing and Urban Development; Regulations Relating to Housing and Urban Development; Section 8 and Public Housing Family Self-Sufficiency Program, General; § 984.105 Minimum program size (revised as of 2010-04-01).

  3. Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.

    PubMed

    Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne

    2018-06-11

    Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media and urine and blood plasma from healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, an equivalent to 30 min of enrichment time, depending on the sensitivity of downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from large number of samples in a clinical setting with minimum sample preparation.

  4. Human metabolic profiles are stably controlled by genetic and environmental variation

    PubMed Central

    Nicholson, George; Rantalainen, Mattias; Maher, Anthony D; Li, Jia V; Malmodin, Daniel; Ahmadi, Kourosh R; Faber, Johan H; Hallgrímsdóttir, Ingileif B; Barrett, Amy; Toft, Henrik; Krestyaninova, Maria; Viksna, Juris; Neogi, Sudeshna Guha; Dumas, Marc-Emmanuel; Sarkans, Ugis; The MolPAGE Consortium; Silverman, Bernard W; Donnelly, Peter; Nicholson, Jeremy K; Allen, Maxine; Zondervan, Krina T; Lindon, John C; Spector, Tim D; McCarthy, Mark I; Holmes, Elaine; Baunsgaard, Dorrit; Holmes, Chris C

    2011-01-01

    1H Nuclear Magnetic Resonance spectroscopy (1H NMR) is increasingly used to measure metabolite concentrations in sets of biological samples for top-down systems biology and molecular epidemiology. For such purposes, knowledge of the sources of human variation in metabolite concentrations is valuable, but currently sparse. We conducted and analysed a study to create such a resource. In our unique design, identical and non-identical twin pairs donated plasma and urine samples longitudinally. We acquired 1H NMR spectra on the samples, and statistically decomposed variation in metabolite concentration into familial (genetic and common-environmental), individual-environmental, and longitudinally unstable components. We estimate that stable variation, comprising familial and individual-environmental factors, accounts on average for 60% (plasma) and 47% (urine) of biological variation in 1H NMR-detectable metabolite concentrations. Clinically predictive metabolic variation is likely nested within this stable component, so our results have implications for the effective design of biomarker-discovery studies. We provide a power-calculation method which reveals that sample sizes of a few thousand should offer sufficient statistical precision to detect 1H NMR-based biomarkers quantifying predisposition to disease. PMID:21878913

  5. Versatile, ultra-low sample volume gas analyzer using a rapid, broad-tuning ECQCL and a hollow fiber gas cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriesel, Jason M.; Makarem, Camille N.; Phillips, Mark C.

    We describe a versatile mid-infrared (Mid-IR) spectroscopy system developed to measure the concentration of a wide range of gases with an ultra-low sample size. The system combines a rapidly-swept external cavity quantum cascade laser (ECQCL) with a hollow fiber gas cell. The ECQCL has sufficient spectral resolution and reproducibility to measure gases with narrow features (e.g., water, methane, ammonia, etc.), and also the spectral tuning range needed to measure volatile organic compounds (VOCs), (e.g., aldehydes, ketones, hydrocarbons), sulfur compounds, chlorine compounds, etc. The hollow fiber is a capillary tube having an internal reflective coating optimized for transmitting the Mid-IR laser beam to a detector. Sample gas introduced into the fiber (e.g., internal volume = 0.6 ml) interacts strongly with the laser beam, and despite relatively modest path lengths (e.g., L ~ 3 m), the requisite quantity of sample needed for sensitive measurements can be significantly less than what is required using conventional IR laser spectroscopy systems. Example measurements are presented including quantification of VOCs relevant for human breath analysis with a sensitivity of ~2 picomoles at a 1 Hz data rate.

  6. Versatile, ultra-low sample volume gas analyzer using a rapid, broad-tuning ECQCL and a hollow fiber gas cell

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason M.; Makarem, Camille N.; Phillips, Mark C.; Moran, James J.; Coleman, Max L.; Christensen, Lance E.; Kelly, James F.

    2017-05-01

    We describe a versatile mid-infrared (Mid-IR) spectroscopy system developed to measure the concentration of a wide range of gases with an ultra-low sample size. The system combines a rapidly-swept external cavity quantum cascade laser (ECQCL) with a hollow fiber gas cell. The ECQCL has sufficient spectral resolution and reproducibility to measure gases with narrow features (e.g., water, methane, ammonia, etc.), and also the spectral tuning range needed to measure volatile organic compounds (VOCs), (e.g., aldehydes, ketones, hydrocarbons), sulfur compounds, chlorine compounds, etc. The hollow fiber is a capillary tube having an internal reflective coating optimized for transmitting the Mid-IR laser beam to a detector. Sample gas introduced into the fiber (e.g., internal volume = 0.6 ml) interacts strongly with the laser beam, and despite relatively modest path lengths (e.g., L ~ 3 m), the requisite quantity of sample needed for sensitive measurements can be significantly less than what is required using conventional IR laser spectroscopy systems. Example measurements are presented including quantification of VOCs relevant for human breath analysis with a sensitivity of ~2 picomoles at a 1 Hz data rate.
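    A Beer-Lambert sketch of why a narrow hollow-fiber cell needs so little sample: for a fixed minimum detectable absorbance and path length, the required number of analyte molecules scales with the cell cross-section. The bore sizes, absorbance threshold, and absorption cross-section below are illustrative assumptions, not specifications of the instrument.

```python
# Beer-Lambert sketch of the sample-volume advantage of a hollow-fiber gas cell:
# the number of absorber molecules needed to reach a given peak absorbance over a
# fixed path length scales with the cell cross-section. All numbers (cross
# sections, absorbance threshold, bore sizes) are illustrative assumptions.
import numpy as np

N_A = 6.022e23                  # molecules per mole

def picomoles_for_absorbance(a_min, sigma_cm2, length_cm, bore_diam_cm):
    """Picomoles of absorber needed to reach peak absorbance a_min (base e)."""
    number_density = a_min / (sigma_cm2 * length_cm)            # molecules per cm^3
    volume_cm3 = np.pi * (bore_diam_cm / 2.0) ** 2 * length_cm
    return number_density * volume_cm3 / N_A * 1e12

SIGMA = 1e-19    # cm^2, illustrative mid-IR absorption cross-section
A_MIN = 1e-4     # illustrative minimum detectable absorbance

print("hollow fiber, L = 300 cm, 0.5 mm bore :",
      f"{picomoles_for_absorbance(A_MIN, SIGMA, 300, 0.05):.1f} pmol")
print("multipass cell, L = 300 cm, 5 cm bore :",
      f"{picomoles_for_absorbance(A_MIN, SIGMA, 300, 5.0):.1f} pmol")
```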

  7. Development of an enumeration method for arsenic methylating bacteria from mixed culture samples.

    PubMed

    Islam, S M Atiqul; Fukushi, Kensuke; Yamamoto, Kazuo

    2005-12-01

    Bacterial methylation of arsenic converts inorganic arsenic into volatile and non-volatile methylated species. It plays an important role in the arsenic cycle in the environment. Despite the potential environmental significance of arsenic methylating bacteria (AsMB), their population size and activity have not yet been assessed. This study has now established a protocol for enumeration of AsMB by means of the anaerobic-culture-tube, most probable number (MPN) method. Direct detection of volatile arsenic species is then performed by GC-MS. This method is advantageous as it can simultaneously enumerate AsMB and acetate- and formate-utilizing methanogens. The incubation time for this method was determined to be 6 weeks, sufficient time for AsMB growth.
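    A minimal sketch of the most-probable-number calculation underlying such an enumeration: solve the MPN likelihood equation for the cell density given the number of positive tubes at each dilution. The dilution scheme and tube counts below are illustrative, not the study's design.

```python
# Most-probable-number (MPN) estimate by maximum likelihood: find the density
# lambda (cells per mL of undiluted sample) that satisfies
#   sum_i g_i * v_i / (exp(lambda * v_i) - 1) = sum_i (t_i - g_i) * v_i,
# where, at dilution i, t_i tubes received inoculum volume v_i (of original
# sample) and g_i were positive. The dilution series below is illustrative.
import numpy as np
from scipy.optimize import brentq

def mpn(volumes_ml, n_tubes, n_positive):
    v = np.asarray(volumes_ml, float)
    t = np.asarray(n_tubes, float)
    g = np.asarray(n_positive, float)
    if g.sum() == 0:
        return 0.0
    if np.all(g == t):
        raise ValueError("All tubes positive: MPN unbounded; extend the dilution series.")
    def score(lam):
        with np.errstate(over="ignore"):
            return np.sum(g * v / np.expm1(lam * v)) - np.sum((t - g) * v)
    return brentq(score, 1e-12, 1e9)

# Three ten-fold dilutions, 5 tubes each, inoculum volumes in mL of original sample.
print(f"MPN ~ {mpn([0.1, 0.01, 0.001], [5, 5, 5], [5, 3, 1]):.0f} cells/mL")
```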

  8. Toward detecting California shrubland canopy chemistry with AIS data

    NASA Technical Reports Server (NTRS)

    Price, Curtis V.; Westman, Walter E.

    1987-01-01

    Airborne Imaging Spectrometer (AIS)-2 data of coastal sage scrub vegetation were examined for fine spectral features that might be used to predict concentrations of certain canopy chemical constituents. A Fourier notch filter was applied to the AIS data and the TREE and ROCK mode spectra were ratioed to a flat field. Portions of the resulting spectra resemble spectra for plant cellulose and starch in that both show reduced reflectance at 2100 and 2270 nm. The latter are regions of absorption of energy by organic bonds found in starch and cellulose. Whether the relationship is sufficient to predict the concentration of these chemicals from AIS spectra will require testing of the predictive ability of these wavebands with large field sample sizes.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Protat, A; Young, S

    The objective of this field campaign was to evaluate the performance of the new Leosphere R-MAN 510 lidar, procured by the Australian Bureau of Meteorology, by testing it against the MicroPulse Lidar (MPL) and Raman lidars, at the Darwin Atmospheric Radiation Measurement (ARM) site. This lidar is an eye-safe (355 nm), turn-key mini Raman lidar, which allows for the detection of aerosols and cloud properties, and the retrieval of particulate extinction profiles. To accomplish this evaluation, the R-MAN 510 lidar has been operated at the Darwin ARM site, next to the MPL, Raman lidar, and Vaisala ceilometer (VCEIL) for three months (from 20 January 2013 to 20 April 2013) in order to collect a sufficient sample size for statistical comparisons.

  10. [Inaccurate information about the size of the penis in the Democratic Republic of the Congo: about 21 information sources].

    PubMed

    Mulenga, Philippe Cilundika; Kazadi, Alex Bukasa

    2016-01-01

    Penis size is a major source of anxiety for many men. Some men are unhappy with their penis size, as shown in the study conducted by Tiggemann in 2008. There are relatively few studies on erect penis size. This may reflect cultural taboos of researchers or doctors interacting with men who are in a state of sexual arousal. On the other hand, it is important for people who announce details on penis size to give the average penis size first and then the sizes suggested by researchers. We performed a cross-sectional survey in the two major urban centres of the Democratic Republic of Congo, namely Kinshasa and Lubumbashi, over a period of two years from May 2014 to May 2016. A total of 21 information sources constituted our sample, 8 in Kinshasa and 13 in Lubumbashi. We considered this sample sufficient because, in our culture, discussing sexual matters is rare. The parameters studied were: the nature of the source, the accuracy of the measurement method, the presence of bibliographical references, and the announced penis size. The majority of information sources were radio or television broadcasts (23.8%); this can be explained by the fact that there are an increasing number of radio and television stations in our country, especially in large cities. With regard to the accuracy of information about the penis measurement method when sharing messages about penis size, our study showed that the majority of information sources did not indicate it when they announced penis size to the public (85.7%). Several sources did not report bibliographical references (57.1%). Analysis of the announced data showed that the penis sizes presented as average were 14 cm (28.6%), 15 cm (23.8%) and 15-20 cm (19%). All these results are intended to offer a warning to all players responsible for disseminating information on sexual health (penis size): scientific rigor consists in seeking information from reliable sources.

  11. Size Class Distribution of Quercus engelmannii (Engelmann Oak) on the Santa Rosa Plateau, Riverside County, California

    Treesearch

    Earl W. Lathrop; Chris Osborne; Anna Rochester; Kevin Yeung; Samuel Soret; Rochelle Hopper

    1991-01-01

    Size class distribution of Quercus engelmannii (Engelmann oak) on the Santa Rosa Plateau was studied to understand whether current recruitment of young oaks is sufficient to maintain the population in spite of high natural mortality and impacts of development in some portions of the plateau woodland. Sapling-size oaks (1-10 cm dbh) made up 5.56 pct...

  12. Seabed mapping and characterization of sediment variability using the usSEABED data base

    USGS Publications Warehouse

    Goff, J.A.; Jenkins, C.J.; Jeffress, Williams S.

    2008-01-01

    We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character. 
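    A minimal sketch of the binned-semivariogram workflow described above: compute an empirical semivariogram from irregularly located points and fit an exponential model whose nugget, sill, and range correspond to noise variance, field variance, and decorrelation distance. Coordinates and grain-size values are synthetic placeholders, and the exponential model is one common choice rather than necessarily the authors'.

```python
# Binned empirical semivariogram of irregularly located point data, fit with an
# exponential model gamma(h) = nugget + partial_sill * (1 - exp(-h / a)).
# Nugget ~ noise variance, nugget + partial_sill ~ field variance, a ~ decorrelation
# distance. Coordinates and values are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(400, 2))                       # sample locations (km, say)
values = np.sin(xy[:, 0] / 15.0) + 0.5 * rng.standard_normal(400)  # mean grain size stand-in

lags = pdist(xy)                                              # pairwise distances
semivar = 0.5 * pdist(values[:, None], metric="sqeuclidean")  # 0.5 * (z_i - z_j)^2

bins = np.linspace(0, 60, 25)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(lags, bins) - 1
gamma_emp = np.array([semivar[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(len(centers))])

def exp_model(h, nugget, partial_sill, a):
    return nugget + partial_sill * (1.0 - np.exp(-h / a))

ok = ~np.isnan(gamma_emp)
(nugget, psill, a), _ = curve_fit(exp_model, centers[ok], gamma_emp[ok],
                                  p0=[0.1, 0.5, 10.0], maxfev=10_000)
print(f"noise variance ~ {nugget:.3f}, field variance ~ {nugget + psill:.3f}, "
      f"decorrelation distance ~ {a:.1f}")
```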

  13. Non-blackbody Disks Can Help Explain Inferred AGN Accretion Disk Sizes

    NASA Astrophysics Data System (ADS)

    Hall, Patrick B.; Sarrouh, Ghassan T.; Horne, Keith

    2018-02-01

    If the atmospheric density ρ_atm in the accretion disk of an active galactic nucleus (AGN) is sufficiently low, scattering in the atmosphere can produce a non-blackbody emergent spectrum. For a given bolometric luminosity, at ultraviolet and optical wavelengths such disks have lower fluxes and apparently larger sizes as compared to disks that emit as blackbodies. We show that models in which ρ_atm is a sufficiently low fixed fraction of the interior density ρ can match the AGN STORM observations of NGC 5548 but produce disk spectral energy distributions that peak at shorter wavelengths than observed in luminous AGN in general. Thus, scattering atmospheres can contribute to the explanation for large inferred AGN accretion disk sizes but are unlikely to be the only contributor. In the appendix section, we present unified equations for the interior ρ and T in gas pressure-dominated regions of a thin accretion disk.

  14. Impact of geometrical properties on permeability and fluid phase distribution in porous media

    NASA Astrophysics Data System (ADS)

    Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.

    2008-09-01

    To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800³ voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of sand material was mapped with X-rays from synchrotrons. To analyze the effect of geometry on water flow and fluid distribution we carried out three types of analysis: Firstly, we computed geometrical properties like chord length, distance from the solids, pore size distribution and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases. Depending on the procedure to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.
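    Three of the four Minkowski functionals of a binary voxel image can be estimated with standard tools, as sketched below using scikit-image (volume fraction, surface area from a marching-cubes mesh, and Euler characteristic); the integral mean curvature needs a dedicated estimator and is omitted. The random sphere packing is a synthetic stand-in, not the scanned sand sample.

```python
# Estimate three of the four 3D Minkowski functionals of a binary solid-phase
# image with scikit-image: volume fraction, specific surface area (via a
# marching-cubes mesh), and Euler characteristic (connectivity). The integral
# mean curvature needs a dedicated estimator and is omitted here. The test
# volume (random overlapping spheres) is synthetic, not the scanned sand sample.
import numpy as np
from skimage import measure

rng = np.random.default_rng(7)
n = 64
zz, yy, xx = np.mgrid[0:n, 0:n, 0:n]
solid = np.zeros((n, n, n), dtype=bool)
for cx, cy, cz in rng.uniform(0, n, size=(60, 3)):        # random sphere centers
    solid |= (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= 8 ** 2

volume_fraction = solid.mean()
verts, faces, _, _ = measure.marching_cubes(solid.astype(float), level=0.5)
surface_area = measure.mesh_surface_area(verts, faces)     # in voxel-face units
euler = measure.euler_number(solid, connectivity=3)

print(f"solid volume fraction : {volume_fraction:.3f}")
print(f"specific surface area : {surface_area / solid.size:.4f} per voxel")
print(f"Euler characteristic  : {euler}")
```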

  15. Sample-to-answer palm-sized nucleic acid testing device towards low-cost malaria mass screening.

    PubMed

    Choi, Gihoon; Prince, Theodore; Miao, Jun; Cui, Liwang; Guan, Weihua

    2018-05-19

    The effectiveness of malaria screening and treatment depends heavily on low-cost access to highly sensitive and specific malaria tests. We report a real-time fluorescence nucleic acid testing (NAT) device for malaria field detection with automated and scalable sample preparation capability. The device consists of a compact analyzer and a disposable microfluidic reagent compact disc. The parasite DNA sample preparation and subsequent real-time LAMP detection were seamlessly integrated on a single microfluidic compact disc, driven by energy-efficient, non-centrifuge-based magnetic field interactions. Each disc contains four parallel testing units which could be configured either as four identical tests or as four species-specific tests. When configured as species-specific tests, it could identify two of the most life-threatening malaria species (P. falciparum and P. vivax). The NAT device is capable of processing four samples simultaneously within a 50 min turnaround time. It achieves a detection limit of ~0.5 parasites/µl in whole blood, sufficient for detecting asymptomatic parasite carriers. The combination of sensitivity, specificity, cost, and scalable sample preparation suggests that the real-time fluorescence LAMP device could be particularly useful for malaria screening in field settings. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Ghost Particle Velocimetry: Accurate 3D Flow Visualization Using Standard Lab Equipment

    NASA Astrophysics Data System (ADS)

    Buzzaccaro, Stefano; Secchi, Eleonora; Piazza, Roberto

    2013-07-01

    We describe and test a new approach to particle velocimetry, based on imaging and cross-correlating the scattering speckle pattern generated on a near-field plane by flowing tracers with a size far below the diffraction limit, which allows reconstruction of the velocity pattern in microfluidic channels without perturbing the flow. In fact, adding tracers is not even strictly required, provided that the sample displays sufficiently strong refractive-index fluctuations. For instance, phase separation in liquid mixtures in the presence of shear can be directly investigated by this “ghost particle velocimetry” technique, which requires only a microscope with standard lamp illumination equipped with a low-cost digital camera. As a further bonus, the peculiar spatial coherence properties of the illuminating source, which displays a finite longitudinal coherence length, allow for a 3D reconstruction of the velocity profile with a resolution of a few tenths of microns and make the technique suitable for investigating turbid samples with negligible multiple scattering effects.
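
    The core operation behind this kind of velocimetry is cross-correlating successive speckle (or tracer) images, window by window, to find the local displacement. The sketch below shows a minimal FFT-based version of that step for a single window; the window size and test shift are illustrative, and the real technique adds interrogation-window tiling, sub-pixel peak fitting, and depth sectioning.

```python
import numpy as np

def window_displacement(img_a, img_b):
    """Integer-pixel displacement (dy, dx) of img_a relative to img_b,
    estimated from the peak of their FFT-based circular cross-correlation."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre

# Toy check: shift a random speckle pattern by (3, -2) pixels and recover it
rng = np.random.default_rng(2)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame2, frame1))   # expected: [ 3 -2]
```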

  17. The sensitivity of catchment hypsometry and hypsometric properties to DEM resolution and polynomial order

    NASA Astrophysics Data System (ADS)

    Liffner, Joel W.; Hewa, Guna A.; Peel, Murray C.

    2018-05-01

    Derivation of the hypsometric curve of a catchment, and properties relating to that curve, requires both use of topographical data (commonly in the form of a Digital Elevation Model - DEM), and the estimation of a functional representation of that curve. An early investigation into catchment hypsometry concluded 3rd order polynomials sufficiently describe the hypsometric curve, without the consideration of higher order polynomials, or the sensitivity of hypsometric properties relating to the curve. Another study concluded the hypsometric integral (HI) is robust against changes in DEM resolution, a conclusion drawn from a very limited sample size. Conclusions from these earlier studies have resulted in the adoption of methods deemed to be "sufficient" in subsequent studies, in addition to assumptions that the robustness of the HI extends to other hypsometric properties. This study investigates and demonstrates the sensitivity of hypsometric properties to DEM resolution, DEM type and polynomial order through assessing differences in hypsometric properties derived from 417 catchments and sub-catchments within South Australia. The sensitivity of hypsometric properties across DEM types and polynomial orders is found to be significant, which suggests careful consideration of the methods chosen to derive catchment hypsometric information is required.
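
    As a concrete illustration of the quantities being compared, the sketch below derives a normalised hypsometric curve from a DEM array, integrates it to obtain the hypsometric integral (HI), and fits polynomials of different orders to the curve; the random DEM stand-in and the specific polynomial orders are assumptions for illustration only.

```python
import numpy as np

def hypsometric_curve(dem, n_points=101):
    """Normalised hypsometric curve: relative area (x) vs relative elevation (y)."""
    z = dem[np.isfinite(dem)].ravel()
    zmin, zmax = z.min(), z.max()
    rel_h = np.linspace(0.0, 1.0, n_points)                  # relative elevation h/H
    rel_a = np.array([(z >= zmin + r * (zmax - zmin)).mean() for r in rel_h])
    return rel_a, rel_h                                      # rel_a falls from 1 to 0

dem = 500.0 * np.random.rand(200, 200) ** 2                  # stand-in for a real DEM
rel_a, rel_h = hypsometric_curve(dem)

# Hypsometric integral = area under the curve (integrate y over increasing x)
hi = np.trapz(rel_h[::-1], rel_a[::-1])

# Functional representation of the curve: compare polynomial orders
fit3 = np.polyfit(rel_a, rel_h, deg=3)
fit5 = np.polyfit(rel_a, rel_h, deg=5)
print(hi, fit3, fit5)
```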

  18. Protein structure determination by electron diffraction using a single three-dimensional nanocrystal.

    PubMed

    Clabbers, M T B; van Genderen, E; Wan, W; Wiegers, E L; Gruene, T; Abrahams, J P

    2017-09-01

    Three-dimensional nanometre-sized crystals of macromolecules currently resist structure elucidation by single-crystal X-ray crystallography. Here, a single nanocrystal with a diffracting volume of only 0.14 µm³, i.e. no more than 6 × 10⁵ unit cells, provided sufficient information to determine the structure of a rare dimeric polymorph of hen egg-white lysozyme by electron crystallography. This is at least an order of magnitude smaller than was previously possible. The molecular-replacement solution, based on a monomeric polyalanine model, provided sufficient phasing power to show side-chain density, and automated model building was used to reconstruct the side chains. Diffraction data were acquired using the rotation method with parallel beam diffraction on a Titan Krios transmission electron microscope equipped with a novel in-house-designed 1024 × 1024 pixel Timepix hybrid pixel detector for low-dose diffraction data collection. Favourable detector characteristics include the ability to accurately discriminate single high-energy electrons from X-rays and count them, fast readout to finely sample reciprocal space and a high dynamic range. This work, together with other recent milestones, suggests that electron crystallography can provide an attractive alternative in determining biological structures.

  19. Protein structure determination by electron diffraction using a single three-dimensional nanocrystal

    PubMed Central

    Clabbers, M. T. B.; van Genderen, E.; Wiegers, E. L.; Gruene, T.; Abrahams, J. P.

    2017-01-01

    Three-dimensional nanometre-sized crystals of macromolecules currently resist structure elucidation by single-crystal X-ray crystallography. Here, a single nanocrystal with a diffracting volume of only 0.14 µm³, i.e. no more than 6 × 10⁵ unit cells, provided sufficient information to determine the structure of a rare dimeric polymorph of hen egg-white lysozyme by electron crystallography. This is at least an order of magnitude smaller than was previously possible. The molecular-replacement solution, based on a monomeric polyalanine model, provided sufficient phasing power to show side-chain density, and automated model building was used to reconstruct the side chains. Diffraction data were acquired using the rotation method with parallel beam diffraction on a Titan Krios transmission electron microscope equipped with a novel in-house-designed 1024 × 1024 pixel Timepix hybrid pixel detector for low-dose diffraction data collection. Favourable detector characteristics include the ability to accurately discriminate single high-energy electrons from X-rays and count them, fast readout to finely sample reciprocal space and a high dynamic range. This work, together with other recent milestones, suggests that electron crystallography can provide an attractive alternative in determining biological structures. PMID:28876237

  20. The assessment of data sources for influenza virologic surveillance in New York State.

    PubMed

    Escuyer, Kay L; Waters, Christine L; Gowie, Donna L; Maxted, Angie M; Farrell, Gregory M; Fuschino, Meghan E; St George, Kirsten

    2017-03-01

    Following the 2013 USA release of the Influenza Virologic Surveillance Right Size Roadmap, the New York State Department of Health (NYSDOH) embarked on an evaluation of data sources for influenza virologic surveillance. The aim was to assess NYS data sources, in addition to data generated by the state public health laboratory (PHL), that could enhance influenza surveillance at the state and national levels. Potential sources of laboratory test data for influenza were analyzed for quantity and quality. Computer models, designed to assess sample sizes and the confidence of data for statistical representation of influenza activity, were used to compare PHL test data to results from clinical and commercial laboratories, reported between June 8, 2013 and May 31, 2014. Sample sizes tested for influenza at the state PHL were sufficient for situational awareness surveillance with optimal confidence levels only during peak weeks of the influenza season. Influenza data pooled from NYS PHLs and clinical laboratories generated optimal confidence levels for situational awareness throughout the influenza season. For novel influenza virus detection in NYS, combined real-time (rt) RT-PCR data from state and regional PHLs achieved ≥85% confidence during peak influenza activity, and ≥95% confidence for most of low season and all of off-season. In NYS, combined data from clinical, commercial, and public health laboratories generated optimal influenza surveillance for situational awareness throughout the season. Statistical confidence for novel virus detection, which is reliant on only PHL data, was achieved for most of the year. © 2016 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  1. Does the 'old bag' make a good 'wind bag'?: Comparison of four fabrics commonly used as exclusion bags in studies of pollination and reproductive biology.

    PubMed

    Neal, Paul R; Anderson, Gregory J

    2004-05-01

    Fabrics used in pollination bags may exclude pollen carried by biotic vectors, but have varying degrees of permeability to wind-borne pollen. The permeability of bags to wind-borne pollen may have important consequences in studies of pollination and reproductive biology. The permeability of four fabrics commonly used in the construction of pollination bags was examined. Deposition of wind-borne pollen on horizontally and vertically oriented microscope slides was assessed on slides enclosed in pollination bags, as well as on control slides. It was found that the permeability of fabrics to wind-borne pollen, as measured by deposition on both horizontally and vertically oriented slides, decreased with pore size. However, deposition on horizontal slides was always greater than on vertical slides for a given fabric; this could manifest itself as differential success of pollination of flowers in bags, depending on flower orientation. Obviously, bags with mesh size smaller than most pollen grains are impermeable to pollen. However, material for such bags is very expensive. In addition, it was also observed that bags with even moderately small pore size, such as pores (approx. 200 µm) in twisted fibre cotton muslin, offered highly significant barriers to passage of wind-borne pollen. Such bags are sufficiently effective in most large-sample-size reproductive biology studies.

  2. Calibration of the clumped isotope thermometer for planktic foraminifers

    NASA Astrophysics Data System (ADS)

    Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.

    2017-12-01

    Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has been hindered so far by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati, et al., GCA, 2010; Grauel, et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10°C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for analysis of smaller sample sizes (3-5 mg), hence also the measurement of multiple species from the same samples. We analyzed surface-dwelling ( 0-50 m) species and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effects of different techniques in estimating foraminifer calcification temperature on the calibration. Finally, we compare our calibration to existing clumped isotope calibrations. Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship measured in planktic foraminifers.

  3. Predicting nitrate discharge dynamics in mesoscale catchments using the lumped StreamGEM model and Bayesian parameter inference

    NASA Astrophysics Data System (ADS)

    Woodward, Simon James Roy; Wöhling, Thomas; Rode, Michael; Stenger, Roland

    2017-09-01

    The common practice of infrequent (e.g., monthly) stream water quality sampling for state of the environment monitoring may, when combined with high resolution stream flow data, provide sufficient information to accurately characterise the dominant nutrient transfer pathways and predict annual catchment yields. In the proposed approach, we use the spatially lumped catchment model StreamGEM to predict daily stream flow and nitrate concentration (mg L⁻¹ NO₃-N) in four contrasting mesoscale headwater catchments based on four years of daily rainfall, potential evapotranspiration, and stream flow measurements, and monthly or daily nitrate concentrations. Posterior model parameter distributions were estimated using the Markov Chain Monte Carlo sampling code DREAMZS and a log-likelihood function assuming heteroscedastic, t-distributed residuals. Despite high uncertainty in some model parameters, the flow and nitrate calibration data were well reproduced across all catchments (Nash-Sutcliffe efficiency of log-transformed data, NSL, in the range 0.62-0.83 for daily flow and 0.17-0.88 for nitrate concentration). The slight increase in the size of the residuals for a separate validation period was considered acceptable (NSL in the range 0.60-0.89 for daily flow and 0.10-0.74 for nitrate concentration, excluding one data set with limited validation data). Proportions of flow and nitrate discharge attributed to near-surface, fast seasonal groundwater and slow deeper groundwater were consistent with expectations based on catchment geology. The results for the Weida Stream in Thuringia, Germany, using monthly as opposed to daily nitrate data were, for all intents and purposes, identical, suggesting that four years of monthly nitrate sampling provides sufficient information for calibration of the StreamGEM model and prediction of catchment dynamics. This study highlights the remarkable effectiveness of process-based, spatially lumped modelling with commonly available monthly stream sample data to elucidate high-resolution catchment function, when appropriate calibration methods are used that correctly handle the inherent uncertainties.
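
    The fit statistic quoted here, the Nash-Sutcliffe efficiency of log-transformed data (NSL), is straightforward to compute; a minimal sketch follows, in which the small offset guarding against zero flows is an assumption of the sketch rather than a detail taken from the study.

```python
import numpy as np

def nse_log(obs, sim, eps=1e-6):
    """Nash-Sutcliffe efficiency computed on log-transformed values (NSL).

    obs, sim : arrays of observed and simulated flow or concentration.
    eps guards against log(0); its value is an assumption of this sketch.
    """
    lo = np.log(np.asarray(obs, dtype=float) + eps)
    ls = np.log(np.asarray(sim, dtype=float) + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

# NSL = 1 is a perfect fit; values such as the reported 0.62-0.83 for daily
# flow indicate that both low and high flows are reproduced reasonably well.
obs = np.array([0.8, 1.2, 3.5, 10.0, 2.2])
sim = np.array([0.9, 1.1, 3.0, 9.0, 2.5])
print(nse_log(obs, sim))
```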

  4. 27 CFR 24.111 - Description of premises.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... description will be by directions and distances, in feet and inches (or hundredths of feet), with sufficient... described. Each building on wine premises will be described as to size, construction, and use. Buildings on wine premises which will not be used for wine operations will be described only as to size and use. If...

  5. 27 CFR 24.111 - Description of premises.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... description will be by directions and distances, in feet and inches (or hundredths of feet), with sufficient... described. Each building on wine premises will be described as to size, construction, and use. Buildings on wine premises which will not be used for wine operations will be described only as to size and use. If...

  6. Assessment of Different Biofilter Media Particle Sizes for Ammonia Removal Optimization

    USDA-ARS?s Scientific Manuscript database

    The main objective of this study is to determine a range of particle sizes that provides low resistance to the air flow but also sufficient surface area for microbial attachment, which is needed for higher biofiltration efficiency. This will be done by assessing ammonia removal and pressure drop in ...

  7. Tropical forest carbon balance: effects of field- and satellite-based mortality regimes on the dynamics and the spatial structure of Central Amazon forest biomass

    NASA Astrophysics Data System (ADS)

    Di Vittorio, Alan V.; Negrón-Juárez, Robinson I.; Higuchi, Niro; Chambers, Jeffrey Q.

    2014-03-01

    Debate continues over the adequacy of existing field plots to sufficiently capture Amazon forest dynamics to estimate regional forest carbon balance. Tree mortality dynamics are particularly uncertain due to the difficulty of observing large, infrequent disturbances. A recent paper (Chambers et al 2013 Proc. Natl Acad. Sci. 110 3949-54) reported that Central Amazon plots missed 9-17% of tree mortality, and here we address ‘why’ by elucidating two distinct mortality components: (1) variation in annual landscape-scale average mortality and (2) the frequency distribution of the size of clustered mortality events. Using a stochastic-empirical tree growth model we show that a power law distribution of event size (based on merged plot and satellite data) is required to generate spatial clustering of mortality that is consistent with forest gap observations. We conclude that existing plots do not sufficiently capture losses because their placement, size, and longevity assume spatially random mortality, while mortality is actually distributed among differently sized events (clusters of dead trees) that determine the spatial structure of forest canopies.
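
    To illustrate the kind of event-size distribution invoked here, the sketch below draws clustered-mortality event sizes from a truncated power law by inverse-transform sampling; the exponent and truncation limits are illustrative assumptions, not the values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_event_sizes(n_events, alpha=2.2, s_min=1.0, s_max=5000.0):
    """Event sizes (trees killed per disturbance event) drawn from a truncated
    power law P(s) ~ s**(-alpha) via inverse-transform sampling."""
    u = rng.random(n_events)
    a = 1.0 - alpha
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

sizes = sample_event_sizes(10_000)
# Most events kill one or a few trees, but rare large blowdowns dominate the
# total, which is why small, randomly placed plots under-sample mortality.
print(sizes.mean(), sizes.max())
```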

  8. Patch size and edge proximity are useful predictors of brood parasitism but not nest survival of grassland birds.

    PubMed

    Benson, Thomas J; Chiavacci, Scott J; Ward, Michael P

    2013-06-01

    Declines of migratory birds have led to increased focus on causative factors for these declines, including the potential adverse effects of habitat fragmentation on reproductive success. Although numerous studies have addressed how proximity to a habitat edge, patch size, or landscape context influence nest survival or brood parasitism, many have failed to find the purported effects. Furthermore, many have sought to generalize patterns across large geographic areas and habitats. Here, we examined evidence for effects of edge proximity, patch size, and landscape context on nest survival and brood parasitism of grassland birds, a group of conservation concern. The only consistent effect was a positive association between edge proximity and brood parasitism. We examined effects of patch size on nest survival (37 studies) and brood parasitism (30 studies) representing 170 and 97 different estimates, respectively, with a total sample size of > 14000 nests spanning eastern North America. Nest survival weakly increased with patch size in the Great Plains, but not in the Midwestern or Eastern United States, and brood parasitism was inversely related to patch size and consistently greater in the Great Plains. The consistency in brood parasitism relative to nest survival patterns is likely due to parasitism being caused by one species, while nest survival is driven by a diverse and variable suite of nest predators. Often, studies assume that predators responsible for nest predation, the main driver of nest success, either are the same or exhibit the same behaviors across large geographic areas. These results suggest that a better mechanistic understanding of nest predation is needed to provide meaningful conservation recommendations for improving grassland bird productivity, and that the use of general recommendations across large geographic areas should only be undertaken when sufficient data are available from all regions.

  9. Extent of genome-wide linkage disequilibrium in Australian Holstein-Friesian cattle based on a high-density SNP panel.

    PubMed

    Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W

    2008-04-24

    The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study of the extent of LD in cattle, analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structures including other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values for LD at large physical distances are close to those for non-syntenic SNPs. Minor allelic frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with inter-marker interval. LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
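
    Two of the LD statistics used throughout this record, D' and r2, follow directly from haplotype and allele frequencies; a minimal sketch of the standard definitions is given below (Dvol is omitted), with the example frequencies chosen purely for illustration.

```python
def pairwise_ld(p_ab, p_a, p_b):
    """Pairwise linkage disequilibrium between two biallelic SNPs.

    p_ab : frequency of the haplotype carrying allele A at locus 1 and B at locus 2
    p_a, p_b : marginal allele frequencies of A and B
    Returns (D', r^2).
    """
    d = p_ab - p_a * p_b                                  # raw disequilibrium D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max                              # normalised |D|
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))     # squared correlation
    return d_prime, r2

# Example: two loci in moderate LD
print(pairwise_ld(p_ab=0.45, p_a=0.6, p_b=0.55))
```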

  10. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To study the application of statistics in chronic-disease-related observational research papers recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; disagreements were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clearly stating the research objectives, target audience, sample issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correctly describing quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference methods related to quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively. 89.49% of the conclusions directly responded to the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, 11.14% lacked a statement of the exclusion criteria, only 5.16% clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. The rate of describing the statistical analysis and database methods used was only 24.15%. 18.75% of the papers did not report the statistical inference methods sufficiently, and a quarter did not use 'standardization' appropriately. Regarding statistical inference, only 24.12% of the papers described the prerequisites of the statistical tests, while 9.94% did not employ the statistical inference method that should have been used. The main deficiencies in the application of statistics in chronic-disease-related observational research papers were: lack of sample size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.

  11. Effect of H-wave polarization on laser radar detection of partially convex targets in random media.

    PubMed

    El-Ocla, Hosam

    2010-07-01

    The performance of the laser radar cross section (LRCS) of large conducting targets is investigated numerically in free space and in random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. The elements that contribute to the LRCS problem, including random medium strength, target configuration, and beam width, are considered. The effect of the creeping waves, stimulated by H-polarization, on the LRCS behavior is demonstrated. Target sizes of up to five wavelengths are sufficiently larger than the beam width and large enough to represent fairly complex targets. Scatterers are assumed to have analytical partially convex contours with inflection points.

  12. Excess success for psychology articles in the journal science.

    PubMed

    Francis, Gregory; Tanzman, Jay; Matthews, William J

    2014-01-01

    This article describes a systematic analysis of the relationship between empirical data and theoretical conclusions for a set of experimental psychology articles published in the journal Science between 2005-2012. When the success rate of a set of empirical studies is much higher than would be expected relative to the experiments' reported effects and sample sizes, it suggests that null findings have been suppressed, that the experiments or analyses were inappropriate, or that the theory does not properly follow from the data. The analyses herein indicate such excess success for 83% (15 out of 18) of the articles in Science that report four or more studies and contain sufficient information for the analysis. This result suggests a systematic pattern of excess success among psychology articles in the journal Science.
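
    The logic of the excess-success analysis can be sketched as follows: estimate the post hoc power of each reported experiment from its effect size and sample size, and compare the product of those powers with the observed rate of uniformly successful outcomes. The snippet below illustrates this for a set of hypothetical two-sample t-tests using statsmodels; the effect sizes and sample sizes are made up, and the published method involves additional details not reproduced here.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical reported effect sizes (Cohen's d) and per-group sample sizes
# for the experiments of one article, each analysed as a two-sample t-test.
experiments = [(0.45, 20), (0.50, 25), (0.40, 30), (0.55, 18)]

power_calc = TTestIndPower()
powers = [power_calc.power(effect_size=d, nobs1=n, ratio=1.0, alpha=0.05)
          for d, n in experiments]

# If every experiment is reported as significant, the probability of that
# joint outcome (assuming independence) is the product of the powers.
p_all_successful = 1.0
for p in powers:
    p_all_successful *= p

print(powers, p_all_successful)  # a small product flags "excess success"
```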

  13. Excess Success for Psychology Articles in the Journal Science

    PubMed Central

    Francis, Gregory; Tanzman, Jay; Matthews, William J.

    2014-01-01

    This article describes a systematic analysis of the relationship between empirical data and theoretical conclusions for a set of experimental psychology articles published in the journal Science between 2005–2012. When the success rate of a set of empirical studies is much higher than would be expected relative to the experiments' reported effects and sample sizes, it suggests that null findings have been suppressed, that the experiments or analyses were inappropriate, or that the theory does not properly follow from the data. The analyses herein indicate such excess success for 83% (15 out of 18) of the articles in Science that report four or more studies and contain sufficient information for the analysis. This result suggests a systematic pattern of excess success among psychology articles in the journal Science. PMID:25474317

  14. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery

    PubMed Central

    Buscombe, Daniel; Wheaton, Joseph M.

    2018-01-01

    Side scan sonar in low-cost ‘fishfinder’ systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and provide low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of similar maps derived from multibeam sonar. PMID:29538449
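
    In the spirit of the pipeline described (second-order texture statistics feeding a Gaussian Mixture Model), the sketch below computes grey-level co-occurrence features over sliding windows of an echogram and clusters them into a few texture classes; the window size, feature set, and class count are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.mixture import GaussianMixture

def texture_features(echogram, win=32, levels=32):
    """Slide a window over an echogram and compute second-order (GLCM) statistics."""
    img = (echogram / echogram.max() * (levels - 1)).astype(np.uint8)
    feats, positions = [], []
    for r in range(0, img.shape[0] - win, win):
        for c in range(0, img.shape[1] - win, win):
            patch = img[r:r + win, c:c + win]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            feats.append([graycoprops(glcm, p).mean()
                          for p in ("contrast", "homogeneity", "energy", "correlation")])
            positions.append((r, c))
    return np.array(feats), positions

echogram = np.random.rand(512, 512)          # stand-in for a side scan echogram
X, pos = texture_features(echogram)

# Unsupervised segmentation into, e.g., three texture (grain-size) classes
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
print(labels[:10])
```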

  15. Mermin-Wagner theorem, flexural modes, and degraded carrier mobility in two-dimensional crystals with broken horizontal mirror symmetry

    NASA Astrophysics Data System (ADS)

    Fischetti, Massimo V.; Vandenberghe, William G.

    2016-04-01

    We show that the electron mobility in ideal, free-standing two-dimensional "buckled" crystals with broken horizontal mirror (σ_h) symmetry and Dirac-like dispersion (such as silicene and germanene) is dramatically affected by scattering with the acoustic flexural modes (ZA phonons). This is caused both by the broken σ_h symmetry and by the diverging number of long-wavelength ZA phonons, consistent with the Mermin-Wagner theorem. Non-σ_h-symmetric, "gapped" 2D crystals (such as semiconducting transition-metal dichalcogenides with a tetragonal crystal structure) are affected less severely by the broken σ_h symmetry, but equally seriously by the large population of the acoustic flexural modes. We speculate that reasonable long-wavelength cutoffs needed to stabilize the structure (finite sample size, grain size, wrinkles, defects) or the anharmonic coupling between flexural and in-plane acoustic modes (shown to be effective in mirror-symmetric crystals, like free-standing graphene) may not be sufficient to raise the electron mobility to satisfactory values. Additional effects (such as clamping and phonon stiffening by the substrate and/or gate insulator) may be required.

  16. Effects of withdrawal rate and starter block size on crystal orientation of a single crystal Ni-based superalloy

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Kermanpur, A.; Sadeghi, F.

    2018-03-01

    Fabrication of single crystal (SC) Ni-based gas turbine blades with a minimum crystal misorientation has always been a challenge in gas turbine industry, due to its significant influence on high temperature mechanical properties. This paper reports an experimental investigation and numerical simulation of the SC solidification process of a Ni-based superalloy to study effects of withdrawal rate and starter block size on crystal orientation. The results show that the crystal misorientation of the sample with 40 mm starter block height is decreased with increasing withdrawal rate up to about 9 mm/min, beyond which the amount of misorientation is increased. It was found that the withdrawal rate, height of the starter block and temperature gradient are completely inter-dependent and indeed achieving a SC specimen with a minimum misorientation needs careful optimization of these process parameters. The height of starter block was found to have higher impact on crystal orientation compared to the withdrawal rate. A suitable withdrawal rate regime along with a sufficient starter block height was proposed to produce SC parts with the lowest misorientation.

  17. Headwater Influences on Downstream Water Quality

    PubMed Central

    Oakes, Robert M.

    2007-01-01

    We investigated the influence of riparian and whole watershed land use as a function of stream size on surface water chemistry and assessed regional variation in these relationships. Sixty-eight watersheds in four level III U.S. EPA ecoregions in eastern Kansas were selected as study sites. Riparian land cover and watershed land use were quantified for the entire watershed, and by Strahler order. Multiple regression analyses using riparian land cover classifications as independent variables explained among-site variation in water chemistry parameters, particularly total nitrogen (41%), nitrate (61%), and total phosphorus (63%) concentrations. Whole watershed land use explained slightly less variance, but riparian and whole watershed land use were so tightly correlated that it was difficult to separate their effects. Water chemistry parameters sampled in downstream reaches were most closely correlated with riparian land cover adjacent to the smallest (first-order) streams of watersheds or land use in the entire watershed, with riparian zones immediately upstream of sampling sites offering less explanatory power as stream size increased. Interestingly, headwater effects were evident even at times when these small streams were unlikely to be flowing. Relationships were similar among ecoregions, indicating that land use characteristics were most responsible for water quality variation among watersheds. These findings suggest that nonpoint pollution control strategies should consider the influence of small upland streams and protection of downstream riparian zones alone is not sufficient to protect water quality. PMID:17999108

  18. Choice-impulsivity in children and adolescents with attention-deficit/hyperactivity disorder (ADHD): A meta-analytic review.

    PubMed

    Patros, Connor H G; Alderson, R Matt; Kasper, Lisa J; Tarle, Stephanie J; Lea, Sarah E; Hudec, Kristen L

    2016-02-01

    Impulsive behavior is a core DSM-5 diagnostic feature of attention-deficit/hyperactivity disorder (ADHD) that is associated with several pejorative outcomes. Impulsivity is multidimensional, consisting of two sub-constructs: rapid-response impulsivity and reward-delay impulsivity (i.e., choice-impulsivity). While previous research has extensively examined the presence and implications of rapid-response impulsivity in children with ADHD, reviews of choice-impulsive behavior have been both sparse and relatively circumscribed. This review used meta-analytic methods to comprehensively examine between-group differences in choice-impulsivity among children and adolescents with and without ADHD. Twenty-eight tasks (from 26 studies), consisting of 4320 total children (ADHD=2360, TD=1,960), provided sufficient information to compute an overall between-group effect size for choice-impulsivity performance. Results revealed a medium-magnitude between-group effect size (g=.47), suggesting that children and adolescents with ADHD exhibited moderately increased impulsive decision-making compared to TD children and adolescents. Further, relative to the TD group, children and adolescents with ADHD exhibited similar patterns of impulsive decision-making across delay discounting and delay of gratification tasks. However, the use of single-informant diagnostic procedures relative to multiple informants yielded larger between-group effects, and a similar pattern was observed across samples that excluded females relative to samples that included females. Copyright © 2015 Elsevier Ltd. All rights reserved.
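
    The pooled effect size reported here (g = .47) is a standardised mean difference with a small-sample correction; a minimal sketch of that computation for a single study is shown below, with made-up group means, standard deviations, and sample sizes.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                                # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)       # correction factor J
    return j * d

# Illustrative task scores for an ADHD group and a typically developing (TD) group
print(hedges_g(m1=12.0, sd1=4.0, n1=80, m2=10.2, sd2=3.8, n2=75))
```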

  19. Assessment of Instructions on Protection Against Food Contaminated with Radiocesium in Japan in 2011.

    PubMed

    Seto, Mayumi; Uriu, Koichiro; Kawaguchi, Isao; Yokomizo, Hiroyuki

    2018-06-01

    The Japan Ministry of Health, Labour and Welfare (MHLW) has published instructions for radiological protection against food after the Fukushima Daiichi nuclear power plant accident in 2011. Following the instructions, the export and consumption of food items identified as being contaminated were restricted for a certain period. We assessed the validity of the imposed restriction periods for two representative vegetables (spinach and cabbage) grown in Fukushima Prefecture from two perspectives: effectiveness for reducing dietary dose and economic efficiency. To assess effectiveness, we estimated the restriction period required to maintain consumers' dose below the guidance dose levels. To assess economic efficiency, we estimated the restriction period that maximizes the net benefit to taxpayers. All estimated restriction periods were shorter than the actual restriction periods imposed on spinach and cabbage from Fukushima in 2011, which indicates that the food restriction effectively maintained consumers' dietary dose below the guidance dose level, but in an economically inefficient manner. We also evaluated the response of the restriction period to the sample size for each weekly food safety test and the instructions for when to remove the restriction. Stringent MHLW instructions seemed to sufficiently reduce consumers' health risk even when the sample size for the weekly food safety test was small, but tended to increase the economic cost to taxpayers. © 2017 Society for Risk Analysis.

  20. Evaluation of response variables in computer-simulated virtual cataract surgery

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif

    2006-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision of the estimation of response variables identified for measuring the performance of VR phaco surgery. We identified 31 response variables measuring: the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, 8 medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals and iterations of sessions within individuals were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variabilities for sample size requirements were determined. It was found that generally there was more variability for iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting differences between independent groups, or change within a group, for the sculpting phase as compared with the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for evaluation of VR phaco surgery.

  1. Supercritical Fluid Extraction and Analysis of Tropospheric Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Hansen, Kristen J.

    An integrated sampling and supercritical fluid extraction (SFE) cell has been designed for whole-sample analysis of organic compounds on tropospheric aerosol particles. The low-volume extraction cell has been interfaced with a sampling manifold for aerosol particle collection in the field. After sample collection, the entire SFE cell was coupled to a gas chromatograph; after on-line extraction, the cryogenically-focused sample was separated and the volatile compounds detected with either a mass spectrometer or a flame ionization detector. A 20-minute extraction at 450 atm and 90 °C with pure supercritical CO₂ is sufficient for quantitative extraction of most volatile compounds in aerosol particle samples. A comparison between SFE and thermal desorption, the traditional whole-sample technique for analyses of this type, was performed using ambient aerosol particle samples, as well as samples containing known amounts of standard analytes. The results of these studies indicate that SFE of atmospheric aerosol particles provides quantitative measurement of several classes of organic compounds. SFE provides information that is complementary to that gained by the thermal desorption analysis. The results also indicate that SFE with CO₂ can be validated as an alternative to thermal desorption for quantitative recovery of several organic compounds. In 1989, the organic constituents of atmospheric aerosol particles collected at Niwot Ridge, Colorado, along with various physical and meteorological data, were measured during a collaborative field study. Temporal changes in the composition of samples collected during summertime at the rural site were studied. Thermal desorption-GC/FID was used to quantify selected compounds in samples collected during the field study. The statistical analysis of the 1989 Niwot Ridge data set is presented in this work. Principal component analysis was performed on thirty-one variables selected from the data set in order to ascertain different source and process components, and to examine concentration changes in groups of variables with respect to time of day and meteorological conditions. Seven orthogonal groups of variables resulted from the statistical analysis; the groups serve as molecular markers for different biologic and anthropogenic emission sources. In addition, the results of the statistical analysis were used to investigate how several emission source contributions vary with respect to local atmospheric dynamics. Field studies were conducted in the urban environment in and around Boulder, CO, to characterize the dynamics, chemistry, and emission sources which affect the composition and concentration of different size-fractions of aerosol particles in the Boulder air mass. Relationships between different size fractions of particles and some gas-phase pollutants were elucidated. These field studies included an investigation of seasonal variations in the organic content and concentration of aerosol particles, and how these characteristics are related to local meteorology and to the concentration of some gas-phase pollutants. The elemental and organic composition of aerosol particles was investigated according to particle size in preliminary studies of size-differentiated samples of aerosol particles. In order to aid in future studies of urban aerosol particles, samples were collected at a forest fire near Boulder. Molecular markers specific to wood burning processes will be useful indicators of residential wood burning activities in future field studies.

  2. Physically based method for measuring suspended-sediment concentration and grain size using multi-frequency arrays of acoustic-doppler profilers

    USGS Publications Warehouse

    Topping, David J.; Wright, Scott A.; Griffiths, Ronald; Dean, David

    2014-01-01

    As the result of a 12-year program of sediment-transport research and field testing on the Colorado River (6 stations in UT and AZ), Yampa River (2 stations in CO), Little Snake River (1 station in CO), Green River (1 station in CO and 2 stations in UT), and Rio Grande (2 stations in TX), we have developed a physically based method for measuring suspended-sediment concentration and grain size at 15-minute intervals using multifrequency arrays of acoustic-Doppler profilers. This multi-frequency method is able to achieve much higher accuracies than single-frequency acoustic methods because it allows removal of the influence of changes in grain size on acoustic backscatter. The method proceeds as follows. (1) Acoustic attenuation at each frequency is related to the concentration of silt and clay with a known grain-size distribution in a river cross section using physical samples and theory. (2) The combination of acoustic backscatter and attenuation at each frequency is uniquely related to the concentration of sand (with a known reference grain-size distribution) and the concentration of silt and clay (with a known reference grain-size distribution) in a river cross section using physical samples and theory. (3) Comparison of the suspended-sand concentrations measured at each frequency using this approach then allows theory-based calculation of the median grain size of the suspended sand and final correction of the suspended-sand concentration to compensate for the influence of changing grain size on backscatter. Although this method of measuring suspended-sediment concentration is somewhat less accurate than using conventional samplers in either the EDI or EWI methods, it is much more accurate than estimating suspended-sediment concentrations using calibrated pump measurements or single-frequency acoustics. Though the EDI and EWI methods provide the most accurate measurements of suspended-sediment concentration, these measurements are labor-intensive, expensive, and may be impossible to collect at time intervals shorter than those over which discharge-independent changes in suspended-sediment concentration can occur (< hours). Therefore, our physically based multi-frequency acoustic method shows promise as a cost-effective, valid approach for calculating suspended-sediment loads in rivers at a level of accuracy sufficient for many scientific and management purposes.

  3. High-Grading Lunar Samples

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Sellar, Glenn; Nunez, Jorge; Mosie, Andrea; Schwarz, Carol; Parker, Terry; Winterhalter, Daniel; Farmer, Jack

    2009-01-01

    Astronauts on long-duration lunar missions will need the capability to high-grade their samples to select the highest-value samples for transport to Earth and to leave others on the Moon. We are supporting studies to define the necessary and sufficient measurements and techniques for high-grading samples at a lunar outpost. A glovebox, dedicated to testing instruments and techniques for high-grading samples, is in operation at the JSC Lunar Experiment Laboratory. A reference suite of lunar rocks and soils, spanning the full compositional range found in the Apollo collection, is available for testing in this laboratory. Thin sections of these samples are available for direct comparison. The Lunar Sample Compendium, on-line at http://www-curator.jsc.nasa.gov/lunar/compendium.cfm, summarizes previous analyses of these samples. The laboratory, sample suite, and Compendium are available to the lunar research and exploration community. In the first test of possible instruments for lunar sample high-grading, we imaged 18 lunar rocks and four soils from the reference suite using the Multispectral Microscopic Imager (MMI) developed by Arizona State University and JPL (see Farmer et al. abstract). The MMI is a fixed-focus digital imaging system with a resolution of 62.5 microns/pixel, a field size of 40 × 32 mm, and a depth-of-field of approximately 5 mm. Samples are illuminated sequentially by 21 light emitting diodes in discrete wavelengths spanning the visible to shortwave infrared. Measurements of reflectance standards and background allow calibration to absolute reflectance. ENVI-based software is used to produce spectra for specific minerals as well as multi-spectral images of rock textures.

  4. Impact of dissolution on the sedimentary record of the Paleocene-Eocene thermal maximum

    NASA Astrophysics Data System (ADS)

    Bralower, Timothy J.; Kelly, D. Clay; Gibbs, Samantha; Farley, Kenneth; Eccles, Laurie; Lindemann, T. Logan; Smith, Gregory J.

    2014-09-01

    The input of massive amounts of carbon to the atmosphere and ocean at the Paleocene-Eocene Thermal Maximum (PETM; ˜55.53 Ma) resulted in pervasive carbonate dissolution at the seafloor. At many sites this dissolution also penetrated into the underlying sediment column. The magnitude of dissolution at and below the seafloor, a process known as chemical erosion, and its effect on the stratigraphy of the PETM, are notoriously difficult to constrain. Here, we illuminate the impact of dissolution by analyzing the complete spectrum of sedimentological grain sizes across the PETM at three deep-sea sites characterized by a range of bottom water dissolution intensity. We show that the grain size spectrum provides a measure of the sediment fraction lost during dissolution. We compare these data with dissolution and other proxy records, electron micrograph observations of samples and lithology. The complete data set indicates that the two sites with slower carbonate accumulation, and less active bioturbation, are characterized by significant chemical erosion. At the third site, higher carbonate accumulation rates, more active bioturbation, and possibly winnowing have limited the impacts of dissolution. However, grain size data suggest that bioturbation and winnowing were not sufficiently intense to diminish the fidelity of isotopic and microfossil assemblage records.

  5. Method development and validation for measuring the particle size distribution of pentaerythritol tetranitrate (PETN) powders.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Sharissa Gay

    2005-09-01

    Currently, the critical particle properties of pentaerythritol tetranitrate (PETN) that influence deflagration-to-detonation time in exploding bridge wire detonators (EBW) are not known in sufficient detail to allow development of a predictive failure model. The specific surface area (SSA) of many PETN powders has been measured using both permeametry and gas absorption methods and has been found to have a critical effect on EBW detonator performance. The permeametry measure of SSA is a function of particle shape, packed bed pore geometry, and particle size distribution (PSD). Yet there is a general lack of agreement in PSD measurements between laboratories, raising concerns regarding collaboration and complicating efforts to understand changes in EBW performance related to powder properties. Benchmarking of data between laboratories that routinely perform detailed PSD characterization of powder samples and the determination of the most appropriate method to measure each PETN powder are necessary to discern correlations between performance and powder properties and to collaborate with partnering laboratories. To this end, a comparison was made of the PSD measured by three laboratories using their own standard procedures for light scattering instruments. Three PETN powder samples with different surface areas and particle morphologies were characterized. Differences in bulk PSD data generated by each laboratory were found to result from variations in sonication of the samples during preparation. The effect of this sonication was found to depend on particle morphology of the PETN samples, being deleterious to some PETN samples and advantageous for others in moderation. Discrepancies in the submicron-sized particle characterization data were related to an instrument-specific artifact particular to one laboratory. The type of carrier fluid used by each laboratory to suspend the PETN particles for the light scattering measurement had no consistent effect on the resulting PSD data. Finally, the SSA of the three powders was measured using both permeametry and gas absorption methods, enabling the PSD to be linked to the SSA for these PETN powders. Consistent characterization of other PETN powders can be performed using the appropriate sample-specific preparation method, so that future studies can accurately identify the effect of changes in the PSD on the SSA and ultimately model EBW performance.

  6. The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS

    NASA Astrophysics Data System (ADS)

    Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin

    A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US and about 1% of the people genotyped (approximately 2 million), tracts of about 200 SNPs long are shared between pairs of individuals IBD with high probability which assures the Clark method phasing success. We show on simulated data that the algorithm will get an almost perfect solution if the number of individuals being SNP arrayed is large enough and the correctness of the algorithm grows with the number of individuals being genotyped.

  7. Genome-wide association analysis accounting for environmental factors through propensity-score matching: application to stressful live events in major depressive disorder.

    PubMed

    Power, Robert A; Cohen-Woods, Sarah; Ng, Mandy Y; Butler, Amy W; Craddock, Nick; Korszun, Ania; Jones, Lisa; Jones, Ian; Gill, Michael; Rice, John P; Maier, Wolfgang; Zobel, Astrid; Mors, Ole; Placentino, Anna; Rietschel, Marcella; Aitchison, Katherine J; Tozzi, Federica; Muglia, Pierandrea; Breen, Gerome; Farmer, Anne E; McGuffin, Peter; Lewis, Cathryn M; Uher, Rudolf

    2013-09-01

    Stressful life events are an established trigger for depression and may contribute to the heterogeneity within genome-wide association analyses. With depression cases showing an excess of exposure to stressful events compared to controls, there is difficulty in distinguishing between "true" cases and a "normal" response to a stressful environment. This potential contamination of cases, and that from genetically at risk controls that have not yet experienced environmental triggers for onset, may reduce the power of studies to detect causal variants. In the RADIANT sample of 3,690 European individuals, we used propensity score matching to pair cases and controls on exposure to stressful life events. In 805 case-control pairs matched on stressful life event, we tested the influence of 457,670 common genetic variants on the propensity to depression under comparable level of adversity with a sign test. While this analysis produced no significant findings after genome-wide correction for multiple testing, we outline a novel methodology and perspective for providing environmental context in genetic studies. We recommend contextualizing depression by incorporating environmental exposure into genome-wide analyses as a complementary approach to testing gene-environment interactions. Possible explanations for negative findings include a lack of statistical power due to small sample size and conditional effects, resulting from the low rate of adequate matching. Our findings underscore the importance of collecting information on environmental risk factors in studies of depression and other complex phenotypes, so that sufficient sample sizes are available to investigate their effect in genome-wide association analysis. Copyright © 2013 Wiley Periodicals, Inc.
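
    A minimal sketch of the matching idea, assuming simulated covariates: a logistic model estimates each subject's propensity of exposure to a stressful life event, and cases are then greedily paired with the nearest-propensity unmatched control within a caliper. The caliper width, covariates, and greedy 1:1 scheme are assumptions of the sketch, not the RADIANT analysis itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                    # covariates (age, sex, SES, ...)
exposed = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 2]))))
case = rng.binomial(1, 0.4, size=n)            # depression case/control status

# Propensity of exposure to the stressful life event, given covariates
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

cases, controls = np.where(case == 1)[0], np.where(case == 0)[0]
available = set(controls)
pairs, caliper = [], 0.05                      # caliper width is an assumption
for i in cases:
    free = np.array(sorted(available))
    if free.size == 0:
        break
    j = free[np.argmin(np.abs(ps[free] - ps[i]))]   # nearest available control
    if abs(ps[j] - ps[i]) <= caliper:
        pairs.append((i, j))
        available.remove(j)

print(f"{len(pairs)} matched case-control pairs")
```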

  8. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
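
    The contrast between the two strategies can be reproduced in miniature. In the sketch below, a simple multinomial goodness-of-fit statistic stands in for the measurement-model fit statistic, and scaling the chi-square by the ratio of sample sizes stands in for the adjustment function; both substitutions are simplifications made for illustration, not the paper's procedure.

      # Adjusted (rescaled) chi-square versus chi-square on an actual random subsample.
      import numpy as np
      from scipy.stats import chisquare

      rng = np.random.default_rng(1)
      p_true = np.array([0.24, 0.26, 0.25, 0.25])       # slight misfit to a 4-category uniform model
      data = rng.choice(4, size=21000, p=p_true)

      def gof_chi2(x):
          """Goodness-of-fit chi-square against equal expected frequencies."""
          return chisquare(np.bincount(x, minlength=4)).statistic

      chi2_full = gof_chi2(data)
      for n_target in (5000, 1000, 200):
          adjusted = chi2_full * n_target / data.size                            # scale by N
          resampled = gof_chi2(rng.choice(data, size=n_target, replace=False))   # actual subsample
          print(n_target, round(adjusted, 1), round(resampled, 1))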

  9. Dynamics of a stochastic tuberculosis model with constant recruitment and varying total population size

    NASA Astrophysics Data System (ADS)

    Liu, Qun; Jiang, Daqing; Shi, Ningzhong; Hayat, Tasawar; Alsaedi, Ahmed

    2017-03-01

    In this paper, we develop a stochastic mathematical model of tuberculosis with constant recruitment and varying total population size by incorporating stochastic perturbations. By constructing suitable stochastic Lyapunov functions, we establish sufficient conditions for the existence of an ergodic stationary distribution as well as for extinction of the disease in the stochastic system.

  10. Method and apparatus for nitrogen oxide determination

    DOEpatents

    Hohorst, Frederick A.

    1990-01-01

    Method and apparatus for determining nitrogen oxide content in a high temperature process gas, which involves withdrawing a sample portion of a high temperature gas containing nitrogen oxide from a source to be analyzed. The sample portion is passed through a restrictive flow conduit, which may be a capillary or a restriction orifice. The restrictive flow conduit is heated to a temperature sufficient to maintain the flowing sample portion at an elevated temperature at least as great as the temperature of the high temperature gas source, to thereby provide that deposition of ammonium nitrate within the restrictive flow conduit cannot occur. The sample portion is then drawn into an aspirator device. A heated motive gas is passed to the aspirator device at a temperature at least as great as the temperature of the high temperature gas source. The motive gas is passed through the nozzle of the aspirator device under conditions sufficient to aspirate the heated sample portion through the restrictive flow conduit and produce a mixture of the sample portion in the motive gas at a dilution of the sample portion sufficient to provide that deposition of ammonium nitrate from the mixture cannot occur at reduced temperature. A portion of the cooled dilute mixture is then passed to analytical means capable of detecting nitric oxide.

  11. Analysis of antibody aggregate content at extremely high concentrations using sedimentation velocity with a novel interference optics.

    PubMed

    Schilling, Kristian; Krause, Frank

    2015-01-01

    Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications altering their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficiency but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations > 50 g/L, mostly along with high amounts of sugar-based excipients. As a consequence, routine aggregation analysis methods, such as size-exclusion chromatography, cannot monitor the size distribution under those original conditions, but only after dilution and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) makes it possible to analyze samples directly in the product formulation, with both limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies up to 147 g/L employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data-fitting approach, we were able to provide a reliable estimate of the minimum content of soluble aggregates in the original formulations of two antibodies. Limitations of the procedure are discussed.

  12. Overcoming the matched-sample bottleneck: an orthogonal approach to integrate omic data.

    PubMed

    Nguyen, Tin; Diaz, Diana; Tagett, Rebecca; Draghici, Sorin

    2016-07-12

    MicroRNAs (miRNAs) are small non-coding RNA molecules whose primary function is to regulate the expression of gene products via hybridization to mRNA transcripts, resulting in suppression of translation or mRNA degradation. Although miRNAs have been implicated in complex diseases, including cancer, their impact on distinct biological pathways and phenotypes is largely unknown. Current integration approaches require sample-matched miRNA/mRNA datasets, resulting in limited applicability in practice. Since these approaches cannot integrate heterogeneous information available across independent experiments, they neither account for bias inherent in individual studies, nor do they benefit from increased sample size. Here we present a novel framework able to integrate miRNA and mRNA data (vertical data integration) available in independent studies (horizontal meta-analysis) allowing for a comprehensive analysis of the given phenotypes. To demonstrate the utility of our method, we conducted a meta-analysis of pancreatic and colorectal cancer, using 1,471 samples from 15 mRNA and 14 miRNA expression datasets. Our two-dimensional data integration approach greatly increases the power of statistical analysis and correctly identifies pathways known to be implicated in the phenotypes. The proposed framework is sufficiently general to integrate other types of data obtained from high-throughput assays.

  13. Olive Oil Tracer Particle Size Analysis for Optical Flow Investigations in a Gas Medium

    NASA Astrophysics Data System (ADS)

    Harris, Shaun; Smith, Barton

    2014-11-01

    Seed tracer particles must be large enough to scatter sufficient light while being sufficiently small to follow the flow. These requirements motivate a desire for control over the particle size. For gas measurements, it is common to use atomized oil droplets as tracer particles. A Laskin nozzle is a device for generating oil droplets in air by directing high-pressure air through small holes under an oil surface. The droplet diameter frequency distribution can be varied by altering the hole diameter, the number of holes, or the inlet pressure. We will present a systematic study of the effect of these three parameters on the resultant particle distribution as it leaves the Laskin nozzle. The study was repeated for cases where the particles moved through a typical jet facility before their size was measured. While the jet facility resulted in an elimination of larger particles, the average particle diameter could be varied by a factor of two at both the seeder exit and downstream of the jet facility.

  14. Variable temperature semiconductor film deposition

    DOEpatents

    Li, X.; Sheldon, P.

    1998-01-27

    A method of depositing a semiconductor material on a substrate is disclosed. The method sequentially comprises (a) providing the semiconductor material in a depositable state such as a vapor for deposition on the substrate; (b) depositing the semiconductor material on the substrate while heating the substrate to a first temperature sufficient to cause the semiconductor material to form a first film layer having a first grain size; (c) continually depositing the semiconductor material on the substrate while cooling the substrate to a second temperature sufficient to cause the semiconductor material to form a second film layer deposited on the first film layer and having a second grain size smaller than the first grain size; and (d) raising the substrate temperature, while either continuing or not continuing to deposit semiconductor material to form a third film layer, to thereby anneal the film layers into a single layer having favorable efficiency characteristics in photovoltaic applications. A preferred semiconductor material is cadmium telluride deposited on a glass/tin oxide substrate already having thereon a film layer of cadmium sulfide.

  15. Variable temperature semiconductor film deposition

    DOEpatents

    Li, Xiaonan; Sheldon, Peter

    1998-01-01

    A method of depositing a semiconductor material on a substrate. The method sequentially comprises (a) providing the semiconductor material in a depositable state such as a vapor for deposition on the substrate; (b) depositing the semiconductor material on the substrate while heating the substrate to a first temperature sufficient to cause the semiconductor material to form a first film layer having a first grain size; (c) continually depositing the semiconductor material on the substrate while cooling the substrate to a second temperature sufficient to cause the semiconductor material to form a second film layer deposited on the first film layer and having a second grain size smaller than the first grain size; and (d) raising the substrate temperature, while either continuing or not continuing to deposit semiconductor material to form a third film layer, to thereby anneal the film layers into a single layer having favorable efficiency characteristics in photovoltaic applications. A preferred semiconductor material is cadmium telluride deposited on a glass/tin oxide substrate already having thereon a film layer of cadmium sulfide.

  16. Using a quasi-experimental research design to assess knowledge in continuing medical education programs.

    PubMed

    Markert, Ronald J; O'Neill, Sally C; Bhatia, Subhash C

    2003-01-01

    The objectives of continuing medical education (CME) programs include knowledge acquisition, skill development, clinical reasoning and decision making, and health care outcomes. We conducted a year-long medical education research study in which knowledge acquisition in our CME programs was assessed. A randomized separate-sample pretest/post-test design, a quasi-experimental technique, was used. Nine CME programs with a sufficient number of participants were identified a priori. Knowledge acquisition was compared between the control group and the intervention group for the nine individual programs and for the combined programs. A total of 667 physicians, nurses, and other health professionals participated. Significant gain in knowledge was found for six programs: Perinatology, Pain Management, Fertility Care 2, Pediatrics, Colorectal Diseases, and Alzheimer's Disease (each p < .001). Also, the intervention group differed from the control group when the nine programs were combined (p < .001), with an effect size of .84. The use of sound quasi-experimental research methodology (separate-sample pretest/post-test design), the inclusion of a representative sample of CME programs, and the analysis of nearly 700 subjects led us to have confidence in concluding that our CME participants acquired a meaningful amount of new knowledge.

  17. A method for evaluating the fatigue crack growth in spiral notch torsion fracture toughness test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy -An John; Tan, Ting

    The spiral notch torsion test (SNTT) has been a recent breakthrough in measuring fracture toughness for different materials, including metals, ceramics, concrete, and polymer composites. Due to its high geometry constraint and unique loading condition, SNTT can be used to measure fracture toughness with smaller specimens without concern about size effects. The application of SNTT to brittle materials has proved successful. The micro-cracks induced by the original notches in brittle materials ensure crack growth in SNTT samples, so no fatigue pre-cracks are needed. The application of SNTT to ductile materials to generate valid toughness data, however, requires a test sample with a sufficient crack length. Fatigue pre-crack growth techniques are employed to introduce a sharp crack front into the sample. Previously, only rough calculations were applied to estimate the compliance evolution in the SNTT crack growth process, and accurate quantitative descriptions have never been attempted. This generates an urgent need to understand the crack evolution during SNTT fracture testing of ductile materials. Here, the newly developed governing equations for estimating SNTT crack growth are discussed.

  18. OBT analysis method using polyethylene beads for limited quantities of animal tissue.

    PubMed

    Kim, S B; Stuart, M

    2015-08-01

    This study presents a polyethylene beads method for OBT determination in animal tissues and animal products for cases where the amount of water recovered by combustion is limited by sample size or quantity. In the method, the amount of water recovered after combustion is enhanced by adding tritium-free polyethylene beads to the sample prior to combustion in an oxygen bomb. The method reduces process time by allowing the combustion water to be easily collected with a pipette. Sufficient water recovery was achieved using the polyethylene beads method when 2 g of dry animal tissue or animal product were combusted with 2 g of polyethylene beads. Correction factors, which account for the dilution due to the combustion water of the beads, are provided for beef, chicken, pork, fish and clams, as well as egg, milk and cheese. The method was tested by comparing its OBT results with those of the conventional method using animal samples collected on the Chalk River Laboratories (CRL) site. The results determined that the polyethylene beads method added no more than 25% uncertainty when appropriate correction factors are used. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
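
    At its core, the bead correction is a mass-balance dilution of the sample's combustion water by the tritium-free water contributed by the beads. The sketch below shows that arithmetic with hypothetical water yields; the matrix-specific correction factors reported in the paper were determined empirically and are not reproduced here.

      # Hypothetical mass-balance form of the bead dilution correction (illustrative;
      # the paper's matrix-specific correction factors are empirical, not this formula).
      def corrected_obt_activity(measured_bq_per_l, water_from_sample_g, water_from_beads_g):
          """Scale the tritium activity measured in the combined combustion water
          back to the activity attributable to the sample-derived water alone."""
          dilution = (water_from_sample_g + water_from_beads_g) / water_from_sample_g
          return measured_bq_per_l * dilution

      # e.g. 2 g of dry tissue yielding ~1.1 g of water, beads contributing ~1.3 g (assumed values)
      print(round(corrected_obt_activity(5.0, 1.1, 1.3), 1))   # ~10.9 Bq/L for the sample water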

  19. An Ensemble Recentering Kalman Filter with an Application to Argo Temperature Data Assimilation into the NASA GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.

    2013-01-01

    A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step is formulated so as to adjust the prior distribution of an ensemble of model states such that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of an EnKF of the same size. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salt data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF, although its performance is degraded in poorly observed regions.

  20. A method for evaluating the fatigue crack growth in spiral notch torsion fracture toughness test

    DOE PAGES

    Wang, Jy -An John; Tan, Ting

    2018-05-21

    The spiral notch torsion test (SNTT) has been a recent breakthrough in measuring fracture toughness for different materials, including metals, ceramics, concrete, and polymer composites. Due to its high geometry constraint and unique loading condition, SNTT can be used to measure fracture toughness with smaller specimens without concern about size effects. The application of SNTT to brittle materials has proved successful. The micro-cracks induced by the original notches in brittle materials ensure crack growth in SNTT samples, so no fatigue pre-cracks are needed. The application of SNTT to ductile materials to generate valid toughness data, however, requires a test sample with a sufficient crack length. Fatigue pre-crack growth techniques are employed to introduce a sharp crack front into the sample. Previously, only rough calculations were applied to estimate the compliance evolution in the SNTT crack growth process, and accurate quantitative descriptions have never been attempted. This generates an urgent need to understand the crack evolution during SNTT fracture testing of ductile materials. Here, the newly developed governing equations for estimating SNTT crack growth are discussed.

  1. Martian Chemical and Isotopic Reference Standards in Earth-based Laboratories — An Invitation for Geochemical, Astrobiological, and Engineering Dialog on Considering a Weathered Chondrite for Mars Sample Return.

    NASA Astrophysics Data System (ADS)

    Ashley, J. W.; Tait, A. W.; Velbel, M. A.; Boston, P. J.; Carrier, B. L.; Cohen, B. A.; Schröder, C.; Bland, P.

    2017-12-01

    Exogenic rocks (meteorites) found on Mars 1) have unweathered counterparts on Earth; 2) weather differently than indigenous rocks; and 3) may be ideal habitats for putative microorganisms and subsequent biosignature preservation. These attributes show the potential of meteorites for addressing hypothesis-driven science. They raise the question of whether chondritic meteorites, of sufficient weathering intensity, might be considered as candidates for sample return in a potential future mission. Pursuant to this discussion are the following questions. A) Is there anything to be learned from the laboratory study of a martian chondrite that cannot be learned from indigenous materials; and if so, B) is the science value high enough to justify recovery? If both A and B answer affirmatively, then C) what are the engineering constraints for sample collection for Mars 2020 and potential follow-on missions; and finally D) what is the likelihood of finding a favorable sample? Observations relevant to these questions include: i) Since 2005, 24 candidate and confirmed meteorites have been identified on Mars at three rover landing sites, demonstrating their ubiquity and setting expectations for future finds. All have been heavily altered by a variety of physical and chemical processes. While the majority of these are irons (not suitable for recovery), several are weathered stony meteorites. ii) Exogenic reference materials provide the only chemical/isotope standards on Mars, permitting quantification of alteration rates if residence ages can be attained; and possibly enabling the removal of Late Amazonian weathering overprints from other returned samples. iii) Recent studies have established the habitability of chondritic meteorites with terrestrial microorganisms, recommending their consideration when exploring astrobiological questions. High reactivity, organic content, and permeability show stony meteorites to be more attractive for colonization and subsequent biosignature preservation than Earth rocks. iv) Compressive strengths of most ordinary chondrites are within the range of rocks being tested for the Mars 2020 drill bits, provided that sufficient size, stability, and flatness of a target can be achieved. Alternatively, the regolith collection bit could be employed for unconsolidated material.

  2. 3D-Printing for Analytical Ultracentrifugation

    PubMed Central

    Desai, Abhiksha; Krynitsky, Jonathan; Pohida, Thomas J.; Zhao, Huaying

    2016-01-01

    Analytical ultracentrifugation (AUC) is a classical technique of physical biochemistry providing information on size, shape, and interactions of macromolecules from the analysis of their migration in centrifugal fields while free in solution. A key mechanical element in AUC is the centerpiece, a component of the sample cell assembly that is mounted between the optical windows to allow imaging and to seal the sample solution column against high vacuum while exposed to gravitational forces in excess of 300,000 g. For sedimentation velocity it needs to be precisely sector-shaped to allow unimpeded radial macromolecular migration. During the history of AUC a great variety of centerpiece designs have been developed for different types of experiments. Here, we report that centerpieces can now be readily fabricated by 3D printing at low cost, from a variety of materials, and with customized designs. The new centerpieces can exhibit sufficient mechanical stability to withstand the gravitational forces at the highest rotor speeds and be sufficiently precise for sedimentation equilibrium and sedimentation velocity experiments. Sedimentation velocity experiments with bovine serum albumin as a reference molecule in 3D printed centerpieces with standard double-sector design result in sedimentation boundaries virtually indistinguishable from those in commercial double-sector epoxy centerpieces, with sedimentation coefficients well within the range of published values. The statistical error of the measurement is slightly above that obtained with commercial epoxy, but still below 1%. Facilitated by modern open-source design and fabrication paradigms, we believe 3D printed centerpieces and AUC accessories can spawn a variety of improvements in AUC experimental design, efficiency and resource allocation. PMID:27525659

  3. Evaluating common de-identification heuristics for personal health information.

    PubMed

    El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-11-21

    With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy.
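
    A generic way to express the risk examined here is the size of the equivalence classes that a combination of quasi-identifiers induces in the identification dataset; the pandas sketch below illustrates that calculation on a toy table (the column names and the risk threshold are illustrative, not the study's).

      # Generic sketch of re-identification risk for one combination of quasi-identifiers.
      import pandas as pd

      def reidentification_risk(identification_df, quasi_identifiers):
          """Maximum risk = 1 / (size of the smallest equivalence class) in the
          identification dataset; average risk = mean of 1 / class size."""
          sizes = identification_df.groupby(quasi_identifiers).size()
          return 1.0 / sizes.min(), (1.0 / sizes).mean()

      # Toy registry-like table (column names and threshold are illustrative).
      registry = pd.DataFrame({
          "region": ["East", "East", "West", "West", "West", "North"],
          "gender": ["F", "M", "F", "F", "M", "M"],
          "birth_year": [1970, 1970, 1981, 1981, 1975, 1962],
      })
      max_risk, avg_risk = reidentification_risk(registry, ["region", "gender"])
      print(max_risk <= 0.05)   # this tiny example would be flagged as high risk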

  4. Evaluating Common De-Identification Heuristics for Personal Health Information

    PubMed Central

    Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-01-01

    Background With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. Objective The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Methods Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. Results The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Conclusions Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published membership lists, many variables often needed by researchers would have to be excluded or generalized to ensure consistently low re-identification risk. Data custodians and researchers need to consider other statistical disclosure techniques for protecting privacy. PMID:17213047

  5. GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.

    PubMed

    Whitehead, John; Horby, Peter

    2017-03-01

    Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable, so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use, robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that a sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate-sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely, an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
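
    For orientation, the total sample sizes quoted can be compared with the standard fixed-sample approximation for a two-arm proportional-odds comparison (Whitehead, 1993); the sequential GOST design will generally stop earlier or later than this fixed number, and the outcome-category probabilities below are assumptions chosen only for illustration.

      # Fixed-sample approximation to the total sample size for detecting an odds
      # ratio under proportional odds (Whitehead, 1993); not the sequential design.
      from math import log
      from scipy.stats import norm

      def ordinal_total_n(odds_ratio, mean_category_probs, alpha=0.05, power=0.9):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          denom = (log(odds_ratio) ** 2) * (1 - sum(p ** 3 for p in mean_category_probs))
          return 6 * z ** 2 / denom

      # Five roughly equally likely outcome categories, targeting OR = 2 at 90% power.
      print(round(ordinal_total_n(2.0, [0.2] * 5)))   # ~137 patients in total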

  6. Does a global DNA barcoding gap exist in Annelida?

    PubMed

    Kvist, Sebastian

    2016-05-01

    Accurate identification of unknown specimens by means of DNA barcoding is contingent on the presence of a DNA barcoding gap, among other factors, as its absence may result in dubious specimen identifications - false negatives or positives. Whereas the utility of DNA barcoding would be greatly reduced in the absence of a distinct and sufficiently sized barcoding gap, the limits of intraspecific and interspecific distances are seldom thoroughly inspected across comprehensive sampling. The present study aims to illuminate this aspect of barcoding in a comprehensive manner for the animal phylum Annelida. All cytochrome c oxidase subunit I sequences (cox1 gene; the chosen region for zoological DNA barcoding) present in GenBank for Annelida, as well as for "Polychaeta", "Oligochaeta", and Hirudinea separately, were downloaded and curated for length, coverage and potential contaminations. The final datasets consisted of 9782 (Annelida), 5545 ("Polychaeta"), 3639 ("Oligochaeta"), and 598 (Hirudinea) cox1 sequences and these were either (i) used as is in an automated global barcoding gap detection analysis or (ii) further analyzed for genetic distances, separated into bins containing intraspecific and interspecific comparisons and plotted in a graph to visualize any potential global barcoding gap. Over 70 million pairwise genetic comparisons were made and results suggest that although there is a tendency towards separation, no distinct or sufficiently sized global barcoding gap exists in either of the datasets rendering future barcoding efforts at risk of erroneous specimen identifications (but local barcoding gaps may still exist allowing for the identification of specimens at lower taxonomic ranks). This seems to be especially true for earthworm taxa, which account for fully 35% of the total number of interspecific comparisons that show 0% divergence.

  7. Quantifying invertebrate resistance to floods: a global-scale meta-analysis.

    PubMed

    McMullen, Laura E; Lytle, David A

    2012-12-01

    Floods are a key component of the ecology and management of riverine ecosystems around the globe, but it is not clear whether floods have predictable effects on organisms that can allow us to generalize across regions and continents. To address this, we conducted a global-scale meta-analysis to investigate effects of natural and managed floods on invertebrate resistance, the ability of invertebrates to survive flood events. We considered 994 studies for inclusion in the analysis, and after evaluation based on a priori criteria, narrowed our analysis to 41 studies spanning six of the seven continents. We used the natural-log-ratio of invertebrate abundance before and within 10 days after flood events because this measure of effect size can be directly converted to estimates of percent survival. We conducted categorical and continuous analyses that examined the contribution of environmental and study design variables to effect size heterogeneity, and examined differences in effect size among taxonomic groups. We found that invertebrate abundance was lowered by at least one-half after flood events. While natural vs. managed floods were similar in their effect, effect size differed among habitat and substrate types, with pools, sand, and boulders experiencing the strongest effect. Although sample sizes were not sufficient to examine all taxonomic groups, floods had a significant, negative effect on densities of Coleoptera, Eumalacostraca, Annelida, Ephemeroptera, Diptera, Plecoptera, and Trichoptera. Results from this study provide guidance for river flow regime prescriptions that will be applicable across continents and climate types, as well as baseline expectations for future empirical studies of freshwater disturbance.
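
    The back-transformation the authors rely on is direct: a log response ratio computed from abundances before and after a flood converts to percent survival by exponentiation, as in this small check (the abundance values are made up).

      # Back-transforming a log response ratio to percent survival (illustrative values).
      import math
      ln_ratio = math.log(45 / 100)              # abundance 100 before, 45 within 10 days after
      print(round(100 * math.exp(ln_ratio)))     # 45% survival, i.e. abundance lowered by ~half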

  8. Stability of topological defects in chiral superconductors: London theory.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vakaryuk, V.

    2011-12-22

    This paper examines the thermodynamic stability of chiral domain walls and vortices-topological defects which can exist in chiral superconductors. Using London theory it is demonstrated that at sufficiently small applied and chiral fields the existence of domain walls and vortices in the sample is not favored and the sample's configuration is a single domain. The particular chirality of the single-domain configuration is neither favored nor disfavored by the applied field. Increasing the field leads to an entry of a domain-wall loop or a vortex into the sample. The formation of a straight domain wall is never preferred in equilibrium. Values of the entry (critical) fields for both types of defects, as well as the equilibrium size of the domain-wall loop, are calculated. We also consider a mesoscopic chiral sample and calculate its zero-field magnetization, susceptibility, and a change in the magnetic moment due to a vortex or a domain-wall entry. We show that in the case of a soft domain wall whose energetics is dominated by the chiral current (and not by the surface tension) its behavior in mesoscopic samples is substantially different from that in the bulk case and can be used for a controllable transfer of edge excitations. The applicability of these results to Sr2RuO4 - a tentative chiral superconductor - is discussed.

  9. Stability of topological defects in chiral superconductors: London theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vakaryuk, Victor

    2011-12-01

    This paper examines the thermodynamic stability of chiral domain walls and vortices—topological defects which can exist in chiral superconductors. Using London theory it is demonstrated that at sufficiently small applied and chiral fields the existence of domain walls and vortices in the sample is not favored and the sample's configuration is a single domain. The particular chirality of the single-domain configuration is neither favored nor disfavored by the applied field. Increasing the field leads to an entry of a domain-wall loop or a vortex into the sample. The formation of a straight domain wall is never preferred in equilibrium. Values of the entry (critical) fields for both types of defects, as well as the equilibrium size of the domain-wall loop, are calculated. We also consider a mesoscopic chiral sample and calculate its zero-field magnetization, susceptibility, and a change in the magnetic moment due to a vortex or a domain-wall entry. We show that in the case of a soft domain wall whose energetics is dominated by the chiral current (and not by the surface tension) its behavior in mesoscopic samples is substantially different from that in the bulk case and can be used for a controllable transfer of edge excitations. The applicability of these results to Sr2RuO4—a tentative chiral superconductor—is discussed.

  10. A systematic review of four injection therapies for lateral epicondylosis: prolotherapy, polidocanol, whole blood and platelet rich plasma

    PubMed Central

    Best, Thomas M.; Zgierska, Aleksandra E.; Zeisig, Eva; Ryan, Michael; Crane, David

    2009-01-01

    Objective: To appraise existing evidence for prolotherapy, polidocanol, autologous whole blood and platelet-rich plasma injection therapies for lateral epicondylosis (LE). Design: Systematic review. Data sources: Medline, Embase, CINAHL, Cochrane Central Register of Controlled Trials, Allied and Complementary Medicine. Search strategy: names and descriptors of the therapies and LE. Study selection: All human studies assessing the four therapies for LE. Main results: Results of five prospective case series and four controlled trials (3 prolotherapy, 2 polidocanol, 3 autologous whole blood and 1 platelet-rich plasma) suggest each of the four therapies is effective for LE. In follow-up periods ranging from 9 to 108 weeks, studies reported sustained, statistically significant (p < 0.05) improvement on visual analog scale primary outcome pain score measures and disease-specific questionnaires; relative effect sizes ranged from 51% to 94%; Cohen's d ranged from 0.68 to 6.68. Secondary outcomes also improved, including biomechanical elbow function assessment (polidocanol and prolotherapy), presence of abnormalities and increased vascularity on ultrasound (autologous whole blood and polidocanol). Subjects reported satisfaction with therapies on single-item assessments. All studies were limited by small sample size. Conclusions: There is strong pilot-level evidence supporting the use of prolotherapy, polidocanol, autologous whole blood and platelet-rich plasma injections in the treatment of LE. Rigorous studies of sufficient sample size, assessing these injection therapies using validated clinical, radiological and biomechanical measures, and tissue injury/healing-responsive biomarkers, are needed to determine long-term effectiveness and safety, and whether these techniques can play a definitive role in the management of LE and other tendinopathies. PMID:19028733

  11. Influence of coronary artery diameter on eNOS protein content

    NASA Technical Reports Server (NTRS)

    Laughlin, M. H.; Turk, J. R.; Schrage, W. G.; Woodman, C. R.; Price, E. M.

    2003-01-01

    The purpose of this study was to test the hypothesis that the content of endothelial nitric oxide synthase (eNOS) protein (eNOS protein/g total artery protein) increases with decreasing artery diameter in the coronary arterial tree. Content of eNOS protein was determined in porcine coronary arteries with immunoblot analysis. Arteries were isolated in six size categories from each heart: large arteries [301- to 2,500-microm internal diameter (ID)], small arteries (201- to 300-microm ID), resistance arteries (151- to 200-microm ID), large arterioles (101- to 150-microm ID), intermediate arterioles (51- to 100-microm ID), and small arterioles (<50-microm ID). To obtain sufficient protein for analysis from small- and intermediate-sized arterioles, five to seven arterioles 1-2 mm in length were pooled into one sample for each animal. Results establish that the number of smooth muscle cells per endothelial cell decreases from 10-15 in large coronary arteries to 1 in the smallest arterioles. Immunohistochemistry revealed that eNOS is located only in endothelial cells in all sizes of coronary artery and in coronary capillaries. Contrary to our hypothesis, eNOS protein content did not increase with decreasing size of coronary artery. Indeed, the smallest coronary arterioles had less eNOS protein per gram of total protein than the large coronary arteries. These results indicate that eNOS protein content is greater in the endothelial cells of conduit arteries, resistance arteries, and large arterioles than in small coronary arterioles.

  12. Hot compression process for making edge seals for fuel cells

    DOEpatents

    Dunyak, Thomas J.; Granata, Jr., Samuel J.

    1994-01-01

    A hot compression process for forming integral edge seals in anode and cathode assemblies wherein the assemblies are made to a nominal size larger than a finished size, beads of AFLAS are applied to a band adjacent the peripheral margins on both sides of the assemblies, the assemblies are placed in a hot press and compressed for about five minutes with a force sufficient to permeate the peripheral margins with the AFLAS, cooled and cut to finished size.

  13. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  14. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
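
    The mechanism usually invoked for such a correlation can be reproduced in a few lines: if only significant results reach print, small studies appear in the literature only when their observed effects are large. The simulation below is an illustration of that selection effect, not the authors' model of the psychological literature.

      # Selective publication of significant results induces a negative correlation
      # between observed effect size and sample size (illustrative simulation).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      published_d, published_n = [], []
      for _ in range(5000):
          n = rng.integers(10, 200)                     # per-group sample size
          a = rng.normal(0.2, 1, n)                     # small true effect, d = 0.2
          b = rng.normal(0.0, 1, n)
          t, p = stats.ttest_ind(a, b)
          if p < 0.05:                                  # only "significant" studies get published
              d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
              published_d.append(d)
              published_n.append(n)

      r, _ = stats.pearsonr(published_d, published_n)
      print(round(r, 2))                                # clearly negative, echoing the reported r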

  15. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
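
    For readers unfamiliar with the test being planned for, a direct NumPy implementation of Yuen's two-sample trimmed-mean test is sketched below (20% trimming by default); the cost- and power-optimal allocation formulas derived in the paper are not reproduced, and the example data are simulated.

      # Yuen's two-sample trimmed-mean test (reference implementation, not the
      # paper's allocation procedure).
      import numpy as np
      from scipy import stats

      def yuen_test(x, y, trim=0.2):
          def parts(a):
              a = np.sort(np.asarray(a, dtype=float))
              n = a.size
              g = int(np.floor(trim * n))
              h = n - 2 * g                                   # effective sample size
              trimmed_mean = a[g:n - g].mean()
              winsorized = np.concatenate(([a[g]] * g, a[g:n - g], [a[n - g - 1]] * g))
              d = (n - 1) * winsorized.var(ddof=1) / (h * (h - 1))
              return trimmed_mean, d, h
          m1, d1, h1 = parts(x)
          m2, d2, h2 = parts(y)
          t = (m1 - m2) / np.sqrt(d1 + d2)
          df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
          return t, 2 * stats.t.sf(abs(t), df)

      rng = np.random.default_rng(3)
      x = rng.standard_t(3, 60) + 0.5      # heavy-tailed, shifted group
      y = rng.standard_t(3, 90)            # unequal group sizes, as in the allocation problem
      print(yuen_test(x, y))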

  16. 40 CFR 211.106 - Graphical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... contrast sufficiently with each other and with any information or material surrounding the label so that... label must be Helvetica Medium. (d) Character Size. All letters and numerals that appear on the...

  17. 40 CFR 211.106 - Graphical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... contrast sufficiently with each other and with any information or material surrounding the label so that... label must be Helvetica Medium. (d) Character Size. All letters and numerals that appear on the...

  18. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
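
    The images-versus-points trade-off can be mimicked with a toy two-stage simulation in which percent cover varies between images and points are scored within images; the variance parameters and true cover below are assumptions for illustration, not values estimated from the AUV survey.

      # Toy two-stage simulation of the images-versus-points trade-off for percent cover.
      import numpy as np

      rng = np.random.default_rng(4)

      def cover_estimate_sd(n_images, n_points, true_cover=0.10, image_sd=0.05, reps=2000):
          estimates = []
          for _ in range(reps):
              image_cover = np.clip(rng.normal(true_cover, image_sd, n_images), 0, 1)
              hits = rng.binomial(n_points, image_cover)        # points scored per image
              estimates.append(hits.sum() / (n_images * n_points))
          return np.std(estimates)

      # At a fixed total of 1,000 points, spreading effort over more images is more precise.
      print(cover_estimate_sd(n_images=50, n_points=20))   # smaller spread
      print(cover_estimate_sd(n_images=10, n_points=100))  # larger spread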

  19. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  20. Towards defect free EUVL reticles: carbon and particle removal by single dry cleaning process and pattern repair by HIM

    NASA Astrophysics Data System (ADS)

    Koster, N. B.; Molkenboer, F. T.; van Veldhoven, E.; Oostrom, S.

    2011-04-01

    We report on our findings on EUVL reticle contamination removal, inspection and repair. We show that carbon contamination can be removed without damage to the reticle by our plasma process. Organic particles, simulated by PSL spheres, can also be removed both from the surface of the absorber and from the bottom of the trenches. The particles shrink in size during the plasma treatment until they have vanished. The necessary cleaning time for PSL spheres was determined on Ru-coated samples, and the final experiment was performed on our dummy reticle. Finally, we show that the Helium Ion Microscope in combination with a Gas Injection System is capable of depositing additional lines and squares on the reticle with sufficient resolution for pattern repair.

  1. Who Is Doing the Housework in Multicultural Britain?

    PubMed Central

    Kan, Man-Yee; Laurie, Heather

    2016-01-01

    There is an extensive literature on the domestic division of labour within married and cohabiting couples and its relationship to gender equality within the household and the labour market. Most UK research focuses on the white majority population or is ethnicity ‘blind’, effectively ignoring potentially significant intersections between gender, ethnicity, socio-economic position and domestic labour. Quantitative empirical research on the domestic division of labour across ethnic groups has not been possible due to a lack of data that enables disaggregation by ethnic group. We address this gap using data from a nationally representative panel survey, Understanding Society, the UK Household Longitudinal Study containing sufficient sample sizes of ethnic minority groups for meaningful comparisons. We find significant variations in patterns of domestic labour by ethnic group, gender, education and employment status after controlling for individual and household characteristics. PMID:29416186

  2. Nongenetic risk factors for holoprosencephaly: An updated review of the epidemiologic literature.

    PubMed

    Summers, April D; Reefhuis, Jennita; Taliano, Joanna; Rasmussen, Sonja A

    2018-05-15

    Holoprosencephaly (HPE) is a major structural birth defect of the brain that occurs in approximately 1 in 10,000 live births. Although some genetic causes of HPE are known, a substantial proportion of cases have an unknown etiology. Due to the low birth prevalence and rarity of exposure to many potential risk factors for HPE, few epidemiologic studies have had sufficient sample size to examine risk factors. A 2010 review of the literature identified several risk factors that had been consistently identified as occurring more frequently among cases of HPE, including maternal diabetes, twinning, and a predominance of females, while also identifying a number of potential risk factors that had been less widely studied. In this article, we summarize a systematic literature review conducted to update the evidence for nongenetic risk factors for HPE. © 2018 Wiley Periodicals, Inc.

  3. Assessment of DNA extracted from FTA® cards for use on the Illumina iSelect BeadChip

    PubMed Central

    McClure, Matthew C; McKay, Stephanie D; Schnabel, Robert D; Taylor, Jeremy F

    2009-01-01

    Background As FTA® cards provide an ideal medium for the field collection of DNA we sought to assess the quality of genomic DNA extracted from this source for use on the Illumina BovineSNP50 iSelect BeadChip which requires unbound, relatively intact (fragment sizes ≥ 2 kb), and high-quality DNA. Bovine blood and nasal swab samples collected on FTA cards were extracted using the commercially available GenSolve kit with a minor modification. The call rate and concordance of genotypes from each sample were compared to those obtained from whole blood samples extracted by standard PCI extraction. Findings An ANOVA analysis indicated no significant difference (P > 0.72) in BovineSNP50 genotype call rate between DNA extracted from FTA cards by the GenSolve kit or extracted from whole blood by PCI. Two sample t-tests demonstrated that the DNA extracted from the FTA cards produced genotype call and concordance rates that were not different to those produced by assaying DNA samples extracted by PCI from whole blood. Conclusion We conclude that DNA extracted from FTA cards by the GenSolve kit is of sufficiently high quality to produce results comparable to those obtained from DNA extracted from whole blood when assayed by the Illumina iSelect technology. Additionally, we validate the use of nasal swabs as an alternative to venous blood or buccal samples from animal subjects for reliably producing high quality genotypes on this platform. PMID:19531223

  4. Assessment of DNA extracted from FTA cards for use on the Illumina iSelect BeadChip.

    PubMed

    McClure, Matthew C; McKay, Stephanie D; Schnabel, Robert D; Taylor, Jeremy F

    2009-06-16

    As FTA cards provide an ideal medium for the field collection of DNA we sought to assess the quality of genomic DNA extracted from this source for use on the Illumina BovineSNP50 iSelect BeadChip which requires unbound, relatively intact (fragment sizes ≥ 2 kb), and high-quality DNA. Bovine blood and nasal swab samples collected on FTA cards were extracted using the commercially available GenSolve kit with a minor modification. The call rate and concordance of genotypes from each sample were compared to those obtained from whole blood samples extracted by standard PCI extraction. An ANOVA analysis indicated no significant difference (P > 0.72) in BovineSNP50 genotype call rate between DNA extracted from FTA cards by the GenSolve kit or extracted from whole blood by PCI. Two sample t-tests demonstrated that the DNA extracted from the FTA cards produced genotype call and concordance rates that were not different to those produced by assaying DNA samples extracted by PCI from whole blood. We conclude that DNA extracted from FTA cards by the GenSolve kit is of sufficiently high quality to produce results comparable to those obtained from DNA extracted from whole blood when assayed by the Illumina iSelect technology. Additionally, we validate the use of nasal swabs as an alternative to venous blood or buccal samples from animal subjects for reliably producing high quality genotypes on this platform.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, B.B.; Ripp, J.; Sims, R.C.

    The Electric Power Research Institute (EPRI) is studying the environmental impact of preservatives associated with in-service utility poles. As part of this endeavor, two EPRI contractors, META Environmental, Inc. (META) and Atlantic Environmental Services, Inc. (Atlantic), have collected soil samples from around wood utility poles nationwide, for various chemical and physical analyses. This report covers the results for 107 pole sites in the US. These pole sites included a range of preservative types, soil types, wood types, pole sizes, and in-service ages. The poles in this study were preserved with one of two types of preservative: pentachlorophenol (PCP) or creosote. Approximately 40 to 50 soil samples were collected from each wood pole site in this study. The soil samples collected from the pole sites were analyzed for chlorinated phenols and total petroleum hydrocarbons (TPH) if the pole was preserved with PCP, or for polycyclic aromatic hydrocarbons (PAHs) if the pole was preserved with creosote. The soil samples were also analyzed for physical/chemical parameters, such as pH, total organic carbon (TOC), and cationic exchange capacity (CEC). Additional samples were used in studies to determine biological degradation rates, and soil-water distribution and retardation coefficients of PCP in site soils. Methods of analysis followed standard EPA and ASTM methods, with some modifications in the chemical analyses to enable the efficient processing of many samples with sufficiently low detection limits for this study. All chemical, physical, and site-specific data were stored in a relational computer database.

  6. 29 CFR 1910.25 - Portable wood ladders.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... library ladders are not specifically covered by this section. (b) Materials—(1) Requirements applicable to... device of sufficient size and strength to securely hold the front and back sections in open positions...

  7. 29 CFR 1910.25 - Portable wood ladders.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... library ladders are not specifically covered by this section. (b) Materials—(1) Requirements applicable to... device of sufficient size and strength to securely hold the front and back sections in open positions...

  8. Dark-field transmission electron microscopy and the Debye-Waller factor of graphene

    PubMed Central

    Hubbard, William A.; White, E. R.; Dawson, Ben; Lodge, M. S.; Ishigami, Masa; Regan, B. C.

    2014-01-01

    Graphene's structure bears on both the material's electronic properties and fundamental questions about long range order in two-dimensional crystals. We present an analytic calculation of selected area electron diffraction from multi-layer graphene and compare it with data from samples prepared by chemical vapor deposition and mechanical exfoliation. A single layer scatters only 0.5% of the incident electrons, so this kinematical calculation can be considered reliable for five or fewer layers. Dark-field transmission electron micrographs of multi-layer graphene illustrate how knowledge of the diffraction peak intensities can be applied for rapid mapping of thickness, stacking, and grain boundaries. The diffraction peak intensities also depend on the mean-square displacement of atoms from their ideal lattice locations, which is parameterized by a Debye-Waller factor. We measure the Debye-Waller factor of a suspended monolayer of exfoliated graphene and find a result consistent with an estimate based on the Debye model. For laboratory-scale graphene samples, finite size effects are sufficient to stabilize the graphene lattice against melting, indicating that ripples in the third dimension are not necessary. PMID:25242882

  9. Steroid receptors analysis in human mammary tumors by isoelectric focusing in agarose.

    PubMed

    Bailleul, S; Gauduchon, P; Malas, J P; Lechevrel, C; Roussel, G; Goussard, J

    1988-08-01

    A high resolution and quantitative method for isoelectric focusing has been developed to separate the isoforms of estrogen and progesterone receptors in human mammary tumor cytosols stabilized by sodium molybdate. Agarose gels (0.5%) were used. Six samples can be analyzed on one gel in about 2 h, and 35-microliter samples are sufficient to determine the estrogen receptor isoform pattern. The constant yields and the reproducibility of data allow a quantitative analysis of these receptors. Four estrogen receptor isoforms have been observed (pI 4.7, 5.5, 6, and 6.5), isoforms with pI 4.7 and 6.5 being present in all tumors. After incubation at 28 °C in high ionic strength, the comparison of isoelectric focusing and high-performance size exclusion chromatography patterns of estrogen receptor confirms the oligomeric structure of the pI 4.7 isoform and suggests a monomeric structure for the pI 6.5 isoform. Under the same conditions of analysis, only one progesterone receptor isoform has been detected with pI 4.7.

  10. Dark-field transmission electron microscopy and the Debye-Waller factor of graphene.

    PubMed

    Shevitski, Brian; Mecklenburg, Matthew; Hubbard, William A; White, E R; Dawson, Ben; Lodge, M S; Ishigami, Masa; Regan, B C

    2013-01-15

    Graphene's structure bears on both the material's electronic properties and fundamental questions about long range order in two-dimensional crystals. We present an analytic calculation of selected area electron diffraction from multi-layer graphene and compare it with data from samples prepared by chemical vapor deposition and mechanical exfoliation. A single layer scatters only 0.5% of the incident electrons, so this kinematical calculation can be considered reliable for five or fewer layers. Dark-field transmission electron micrographs of multi-layer graphene illustrate how knowledge of the diffraction peak intensities can be applied for rapid mapping of thickness, stacking, and grain boundaries. The diffraction peak intensities also depend on the mean-square displacement of atoms from their ideal lattice locations, which is parameterized by a Debye-Waller factor. We measure the Debye-Waller factor of a suspended monolayer of exfoliated graphene and find a result consistent with an estimate based on the Debye model. For laboratory-scale graphene samples, finite size effects are sufficient to stabilize the graphene lattice against melting, indicating that ripples in the third dimension are not necessary.
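    For orientation, the Debye-Waller attenuation of a diffraction peak can be written, for Gaussian atomic displacements u about the lattice sites, in the generic form below; the numerical prefactor relating 2W to q^2<u^2> depends on dimensionality and convention, and this expression is a textbook form rather than one taken from the paper itself.

        I(\mathbf{q}) \;=\; I_0\, e^{-2W}, \qquad 2W = \left\langle (\mathbf{q}\cdot\mathbf{u})^{2} \right\rangle \;\propto\; q^{2}\,\langle u^{2} \rangle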

  11. Multivariate Welch t-test on distances

    PubMed Central

    2016-01-01

    Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741

  12. Multivariate Welch t-test on distances.

    PubMed

    Alekseyenko, Alexander V

    2016-12-01

    Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two-sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu. © The Author 2016. Published by Oxford University Press.
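    As a rough illustration of the point that a pairwise distance matrix alone is enough to build the (pseudo) F statistic, the Python sketch below computes the standard PERMANOVA-style sums-of-squares decomposition for two groups; it is a generic outline with toy data, not the TW2 test or the author's released code.

        # Pseudo-F from a pairwise distance matrix (generic PERMANOVA-style decomposition).
        import numpy as np

        def pseudo_f(d, labels):
            """d: (n, n) symmetric distance matrix; labels: one group label per observation."""
            d = np.asarray(d, dtype=float)
            labels = np.asarray(labels)
            n = d.shape[0]
            groups = np.unique(labels)
            k = len(groups)
            ss_total = np.sum(np.triu(d, 1) ** 2) / n        # total sum of squares
            ss_within = 0.0                                  # within-group sum of squares
            for g in groups:
                idx = np.where(labels == g)[0]
                sub = d[np.ix_(idx, idx)]
                ss_within += np.sum(np.triu(sub, 1) ** 2) / len(idx)
            ss_between = ss_total - ss_within
            return (ss_between / (k - 1)) / (ss_within / (n - k))

        # Toy example: two groups of 10 observations with shifted means (hypothetical data).
        rng = np.random.default_rng(0)
        x = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(1, 1, (10, 3))])
        dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        print(pseudo_f(dist, [0] * 10 + [1] * 10))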

  13. Monitoring Earth's Shortwave Reflectance: GEO Instrument Concept

    NASA Technical Reports Server (NTRS)

    Brageot, Emily; Mercury, Michael; Green, Robert; Mouroulis, Pantazis; Gerwe, David

    2015-01-01

    In this paper we present a GEO instrument concept dedicated to monitoring the Earth's global spectral reflectance with a high revisit rate. Based on our measurement goals, the ideal instrument needs to be highly sensitive (SNR greater than 100) and to achieve global coverage with spectral sampling (less than or equal to 10 nm) and spatial sampling (less than or equal to 1 km) over a large bandwidth (380-2510 nm) with a revisit time (greater than or equal to 3x/day) sufficient to fully measure the spectral-radiometric-spatial evolution of clouds and confounding factors during daytime. After a brief study of existing instruments and their capabilities, we choose to use a GEO constellation of up to 6 satellites as a platform for this instrument concept in order to achieve the revisit time requirement with a single launch. We derive the main parameters of the instrument and show that the above requirements can be fulfilled while retaining an instrument architecture as compact as possible by controlling the telescope aperture size and using a passively cooled detector.

  14. Comparison of rapid methods for chemical analysis of milligram samples of ultrafine clays

    USGS Publications Warehouse

    Rettig, S.L.; Marinenko, J.W.; Khoury, Hani N.; Jones, B.F.

    1983-01-01

    Two rapid methods for the decomposition and chemical analysis of clays were adapted for use with 20–40-mg size samples, typical amounts of ultrafine products (≤0.5-µm diameter) obtained by modern separation methods for clay minerals. The results of these methods were compared with those of “classical” rock analyses. The two methods consisted of mixed lithium metaborate fusion and heated decomposition with HF in a closed vessel. The latter technique was modified to include subsequent evaporation with concentrated H2SO4 and re-solution in HCl, which reduced the interference of the fluoride ion in the determination of Al, Fe, Ca, Mg, Na, and K. Results from the two methods agree sufficiently well with those of the “classical” techniques to minimize error in the calculation of clay mineral structural formulae. Representative maximum variations, in atoms per unit formula of the smectite type based on 22 negative charges, are 0.09 for Si, 0.03 for Al, 0.015 for Fe, 0.07 for Mg, 0.03 for Na, and 0.01 for K.

  15. Coal recovery process

    DOEpatents

    Good, Robert J.; Badgujar, Mohan

    1992-01-01

    A method for the beneficiation of coal by selective agglomeration, and the beneficiated coal product thereof, is disclosed wherein coal comprising impurities is comminuted to a particle size sufficient to allow the impurities contained therein to disperse in water; an aqueous slurry is formed with the comminuted coal particles and treated with a compound, such as a polysaccharide and/or disaccharide, to increase the relative hydrophilicity of the hydrophilic components; and thereafter the slurry is treated with sufficient liquid agglomerant to form a coagulum comprising reduced-impurity coal.

  16. 13 CFR 121.1007 - Must a protest of size status relate to a particular procurement and be specific?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... contracts in excess of $5 million last year is sufficiently specific. [61 FR 3286, Jan. 31, 1996, as amended... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Must a protest of size status... following are examples of allegation specificity: Example 1: An allegation that concern X is large because...

  17. 13 CFR 121.1007 - Must a protest of size status relate to a particular procurement and be specific?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... contracts in excess of $5 million last year is sufficiently specific. [61 FR 3286, Jan. 31, 1996, as amended... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Must a protest of size status... following are examples of allegation specificity: Example 1: An allegation that concern X is large because...

  18. 13 CFR 121.1007 - Must a protest of size status relate to a particular procurement and be specific?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... contracts in excess of $5 million last year is sufficiently specific. [61 FR 3286, Jan. 31, 1996, as amended... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Must a protest of size status... following are examples of allegation specificity: Example 1: An allegation that concern X is large because...

  19. 13 CFR 121.1007 - Must a protest of size status relate to a particular procurement and be specific?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... contracts in excess of $5 million last year is sufficiently specific. [61 FR 3286, Jan. 31, 1996, as amended... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Must a protest of size status... following are examples of allegation specificity: Example 1: An allegation that concern X is large because...

  20. 13 CFR 121.1007 - Must a protest of size status relate to a particular procurement and be specific?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contracts in excess of $5 million last year is sufficiently specific. [61 FR 3286, Jan. 31, 1996, as amended... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Must a protest of size status... following are examples of allegation specificity: Example 1: An allegation that concern X is large because...

  1. Choosing the Allometric Exponent in Covariate Model Building.

    PubMed

    Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B

    2018-04-27

    Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select its value. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) were assessed using stochastic simulation estimation. A priori exponent values used for the simulations were 0.67, 0.75, and 1, respectively. For normal to high BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67 in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to appropriately estimate the exponent. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Those study designs with limited sample size and narrow range of WT (e.g. < 100 adult participants) potentially risk either selection of a false value or yielding a biased estimate of the allometric exponent; however, such bias is only relevant in cases of extrapolating the value of CL outside the studied population, e.g. analysis of a study of adults that is used to extrapolate to children.
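    For context, the covariate model under discussion is conventionally written as a power function of total body weight; the 70 kg reference weight below is the usual adult convention and is an assumption here, not a detail taken from the abstract.

        CL_i = \theta_{CL}\,\left(\frac{WT_i}{70}\right)^{k}, \qquad k \in \{0.67,\ 0.75,\ 1\}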

  2. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
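    A minimal Python sketch of the fixed-design calculation that such adaptive designs start from: the per-arm sample size for a two-arm comparison of means under the usual normal approximation. The effect sizes, alpha and power are placeholder values, and this is not the authors' optimization procedure.

        # Per-arm sample size for a two-arm trial (normal approximation); illustrative only.
        from math import ceil
        from scipy.stats import norm

        def n_per_arm(effect_size, alpha=0.05, power=0.80):
            """effect_size is the standardized difference in means (delta / SD)."""
            z_alpha = norm.ppf(1 - alpha / 2)
            z_beta = norm.ppf(power)
            return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

        # A conservative (smaller) effect size demands far more subjects than an optimistic one.
        print(n_per_arm(0.30))   # about 175 per arm
        print(n_per_arm(0.50))   # about 63 per arm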

  3. General administrative rulings and decisions; amendment to the examination and investigation sample requirements; companion document to direct final rule--FDA. Proposed rule.

    PubMed

    1998-09-25

    The Food and Drug Administration (FDA) is proposing to amend its regulations regarding the collection of twice the quantity of food, drug, or cosmetic estimated to be sufficient for analysis. This action increases the dollar amount that FDA will consider to determine whether to routinely collect a reserve sample of a food, drug, or cosmetic product in addition to the quantity sufficient for analysis. Experience has demonstrated that the current dollar amount does not adequately cover the cost of most quantities sufficient for analysis plus reserve samples. This proposed rule is a companion to the direct final rule published elsewhere in this issue of the Federal Register. This action is part of FDA's continuing effort to achieve the objectives of the President's "Reinventing Government" initiative, and it is intended to reduce the burden of unnecessary regulations on food, drugs, and cosmetics without diminishing the protection of the public health.

  4. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.

  5. Rare high-impact disease variants: properties and identifications.

    PubMed

    Park, Leeyoung; Kim, Ju Han

    2016-03-21

    Although many genome-wide association studies have been performed, the identification of disease polymorphisms remains important. It is now suspected that many rare disease variants induce the association signal of common variants in linkage disequilibrium (LD). Based on recent development of genetic models, the current study provides explanations of the existence of rare variants with high impacts and common variants with low impacts. Disease variants are neither necessary nor sufficient due to gene-gene or gene-environment interactions. A new method was developed based on theoretical aspects to identify both rare and common disease variants by their genotypes. Common disease variants were identified with relatively small odds ratios and relatively small sample sizes, except for specific situations in which the disease variants were in strong LD with a variant with a higher frequency. Rare disease variants with small impacts were difficult to identify without increasing sample sizes; however, the method was reasonably accurate for rare disease variants with high impacts. For rare variants, dominant variants generally showed better Type II error rates than recessive variants; however, the trend was reversed for common variants. Type II error rates increased in gene regions containing more than two disease variants because the more common variant, rather than both disease variants, was usually identified. The proposed method would be useful for identifying common disease variants with small impacts and rare disease variants with large impacts when disease variants have the same effects on disease presentation.

  6. Optical design considerations when imaging the fundus with an adaptive optics correction

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.

    2008-06-01

    Adaptive Optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO), which are analogous to confocal scanning laser microscopes (CSLM) with advantages of real-time imaging, increased image contrast, a resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate for the human observer and would also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We will outline optical approaches to overcome some of these tradeoffs and still allow detection of the earliest changes in the fundus in diabetic retinopathy.

  7. Association of Postpartum Maternal Morbidities with Children's Mental, Psychomotor and Language Development in Rural Bangladesh

    PubMed Central

    Tofail, F.; Hilaly, A.; Mehrin, F.; Shiraji, S.; Banu, S.; Huda, S.N.

    2012-01-01

    Little is known from developing countries about the effects of maternal morbidities diagnosed in the postpartum period on children's development. The study aimed to document the relationships of such morbidities with care-giving practices by mothers, children's developmental milestones and their language, mental and psychomotor development. Maternal morbidities were identified through physical examination at 6-9 weeks postpartum (n=488). Maternal care-giving practices and postnatal depression were assessed also at 6-9 weeks postpartum. Children's milestones of development were measured at six months, and their mental (MDI) and psychomotor (PDI) development, language comprehension and expression, and quality of psychosocial stimulation at home were assessed at 12 months. Several approaches were used for identifying the relationships among different maternal morbidities, diagnosed by physicians, with children's development. After controlling for the potential confounders, maternal anaemia diagnosed postpartum showed a small but significantly negative effect on children's language expression while the effects on language comprehension did not reach the significance level (p=0.085). Children's development at 12 months was related to psychosocial stimulation at home, nutritional status, education of parents, socioeconomic status, and care-giving practices of mothers at six weeks of age. Only a few mothers experienced each specific morbidity, and with the exception of anaemia, the sample-size was insufficient to make a conclusion regarding each specific morbidity. Further research with a sufficient sample-size of individual morbidities is required to determine the association of postpartum maternal morbidities with children's development. PMID:22838161

  8. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
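    One widely quoted quick approximation of the kind this article is concerned with is Lehr's rule for a two-group comparison of means at two-sided alpha = 0.05 and 80% power; it is quoted here as a standard rule of thumb, not as the article's own method:

        n \approx \frac{16}{d^{2}} \quad \text{per group, with } d \text{ the standardized (Cohen's) effect size.}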

  9. Accuracy of LightCycler(R) SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review protocol.

    PubMed

    Dark, Paul; Wilson, Claire; Blackwood, Bronagh; McAuley, Danny F; Perkins, Gavin D; McMullan, Ronan; Gates, Simon; Warhurst, Geoffrey

    2012-01-01

    Background There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis. Methods/design Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operator Characteristic (ROC) space. A bivariate model will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered. Dissemination Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care. Registration PROSPERO-NIHR Prospective Register of Systematic Reviews (CRD42011001289).

  10. Development of an integrated laboratory system for the monitoring of cyanotoxins in surface and drinking waters.

    PubMed

    Triantis, Theodoros; Tsimeli, Katerina; Kaloudis, Triantafyllos; Thanassoulias, Nicholas; Lytras, Efthymios; Hiskia, Anastasia

    2010-05-01

    A system of analytical processes has been developed in order to serve as a cost-effective scheme for the monitoring of cyanobacterial toxins on a quantitative basis, in surface and drinking waters. Five cyclic peptide hepatotoxins, microcystin-LR, -RR, -YR, -LA and nodularin were chosen as the target compounds. Two different enzyme-linked immunosorbent assays (ELISA) were validated in order to serve as primary quantitative screening tools. Validation results showed that the ELISA methods are sufficiently specific and sensitive with limits of detection (LODs) around 0.1 microg/L, however, matrix effects should be considered, especially with surface water samples or bacterial mass methanolic extracts. A colorimetric protein phosphatase inhibition assay (PPIA) utilizing protein phosphatase 2A and p-nitrophenyl phosphate as substrate, was applied in microplate format in order to serve as a quantitative screening method for the detection of the toxic activity associated with cyclic peptide hepatotoxins, at concentration levels >0.2 microg/L of MC-LR equivalents. A fast HPLC/PDA method has been developed for the determination of microcystins, by using a short, 50mm C18 column, with 1.8 microm particle size. Using this method a 10-fold reduction of sample run time was achieved and sufficient separation of microcystins was accomplished in less than 3 min. Finally, the analytical system includes an LC/MS/MS method that was developed for the determination of the 5 target compounds after SPE extraction. The method achieves extremely low limits of detection (<0.02 microg/L), in both surface and drinking waters and it is used for identification and verification purposes as well as for determinations at the ppt level. An analytical protocol that includes the above methods has been designed and validated through the analysis of a number of real samples. Copyright 2009 Elsevier Ltd. All rights reserved.

  11. Evolution of efficient methods to sample lead sources, such as house dust and hand dust, in the homes of children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Que Hee, S.S.; Peace, B.; Clark, C.S.

    Efficient sampling methods to recover lead-containing house dust and hand dust have been evolved so that sufficient lead is collected for analysis and to ensure that correlational analyses linking these two parameters to blood lead are not dependent on the efficiency of sampling. Precise collection of loose house dust from a 1-unit area (484 cm²) with a Tygon or stainless steel sampling tube connected to a portable sampling pump (1.2 to 2.5 liters/min) required repetitive sampling (three times). The Tygon tube sampling technique for loose house dust <177 µm in diameter was around 72% efficient with respect to dust weight and lead collection. A representative house dust contained 81% of its total weight in this fraction. A single handwipe for applied loose hand dust was not acceptably efficient or precise, and at least three wipes were necessary to achieve recoveries of >80% of the lead applied. House dusts of different particle sizes <246 µm adhered equally well to hands. Analysis of lead-containing material usually required at least three digestions/decantations using hot plate or microwave techniques to allow at least 90% of the lead to be recovered. It was recommended that other investigators validate their handwiping, house dust sampling, and digestion techniques to facilitate comparison of results across studies. The final methodology for the Cincinnati longitudinal study was three sampling passes for surface dust using a stainless steel sampling tube; three microwave digestion/decantations for analysis of dust and paint; and three wipes with handwipes with one digestion/decantation for the analysis of six handwipes together.

  12. The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education

    ERIC Educational Resources Information Center

    Slavin, Robert; Smith, Dewi

    2009-01-01

    Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…

  13. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question-how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes-the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
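    For readers unfamiliar with the general idea, a textbook effective-sample-size expression for n equally correlated observations with pairwise correlation rho is shown below; this generic form is given only for orientation and is not the regression effective sample size proposed in the paper.

        n_{e} = \frac{n}{1 + (n - 1)\,\rho}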

  14. Antarctic glacial history from numerical models and continental margin sediments

    USGS Publications Warehouse

    Barker, P.F.; Barrett, P.J.; Cooper, A. K.; Huybrechts, P.

    1999-01-01

    The climate record of glacially transported sediments in prograded wedges around the Antarctic outer continental shelf, and their derivatives in continental rise drifts, may be combined to produce an Antarctic ice sheet history, using numerical models of ice sheet response to temperature and sea-level change. Examination of published models suggests several preliminary conclusions about ice sheet history. The ice sheet's present high sensitivity to sea-level change at short (orbital) periods was developed gradually as its size increased, replacing a declining sensitivity to temperature. Models suggest that the ice sheet grew abruptly to 40% (or possibly more) of its present size at the Eocene-Oligocene boundary, mainly as a result of its own temperature sensitivity. A large but more gradual middle Miocene change was externally driven, probably by development of the Antarctic Circumpolar Current (ACC) and Polar Front, provided that a few million years' delay can be explained. The Oligocene ice sheet varied considerably in size and areal extent, but the late Miocene ice sheet was more stable, though significantly warmer than today's. This difference probably relates to the confining effect of the Antarctic continental margin. Present-day numerical models of ice sheet development are sufficient to guide current sampling plans, but sea-ice formation, polar wander, basal topography and ice streaming can be identified as factors meriting additional modelling effort in the future.

  15. Study on a practical robotic follower to support home oxygen therapy patients--questionnaire-based concept evaluation by the patients-.

    PubMed

    Endo, Gen; Iemura, Yu; Fukushima, Edwardo F; Hirose, Shigeo; Iribe, Masatsugu; Ikeda, Ryota; Onishi, Kohei; Maeda, Naoto; Takubo, Toshio; Ohira, Mineko

    2013-06-01

    Home oxygen therapy (HOT) is a medical treatment for patients suffering from severe lung diseases. Although walking outdoors is recommended for the patients to maintain physical strength, the patients always have to carry a portable oxygen supplier, which is not sufficiently lightweight for this purpose. Our ultimate goal is to develop a mobile robot to carry an oxygen tank and follow a patient in an urban outdoor environment. We have proposed a mobile robot with a tether interface to detect the relative position of the foregoing patient. In this paper, we report the questionnaire-based evaluation of the two developed prototypes by the HOT patients. We conducted maneuvering experiments and then obtained questionnaire-based evaluations from the 20 patients. The results show that the basic following performance is sufficient and the pulling force of the tether is sufficiently small for the patients. Moreover, the patients prefer the small-sized prototype for its compactness and light weight to the middle-sized prototype, which can carry a larger payload. We also obtained detailed requests to improve the robots. Finally, the results show the general concept of the robot is favorably received by the patients.

  16. Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture

    NASA Astrophysics Data System (ADS)

    Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo

    2018-05-01

    The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), causing severe wafer bending and cracking, is demonstrated to be elastically quenched by substrate patterning in finite arrays of Si micro-pillars, sufficiently large in aspect ratio to allow for lateral pillar tilting, both by simulations and by preliminary experiments. In suspended SiC patches, the mechanical problem is addressed by finite element method: both the strain relaxation and the wafer curvature are calculated at different pillar height, array size, and film thickness. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius, as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.

  17. Productivity and technical efficiency of suckler beef production systems: trends for the period 1990 to 2012.

    PubMed

    Veysset, P; Lherm, M; Roulenc, M; Troquier, C; Bébin, D

    2015-12-01

    Over the past 23 years (1990 to 2012), French beef cattle farms have expanded in size and increased labour productivity by over 60%, chiefly, though not exclusively, through capital intensification (labour-capital substitution) and simplifying herd feeding practices (more concentrates used). The technical efficiency of beef sector production systems, as measured by the ratio of the volume value (in constant euros) of farm output excluding aids to volume of intermediate consumption, has fallen by nearly 20% while income per worker has held stable thanks to subsidies and the labour productivity gains made. This aggregate technical efficiency of beef cattle systems is positively correlated to feed self-sufficiency, which is in turn negatively correlated to farm and herd size. While volume of farm output per hectare of agricultural area has not changed, forage feed self-sufficiency decreased by 6 percentage points. The continual increase in farm size and labour productivity has come at a cost of lower production-system efficiency - a loss of technical efficiency that 20 years of genetic, technical, technological and knowledge-driven progress has barely managed to offset.

  18. Synergistic dynamics of nitrogen and phosphorous influences lipid productivity in Chlorella minutissima for biodiesel production.

    PubMed

    Arora, Neha; Patel, Alok; Pruthi, Parul A; Pruthi, Vikas

    2016-08-01

    The study synergistically optimized nitrogen and phosphorous concentrations for attainment of maximum lipid productivity in Chlorella minutissima. Nitrogen- and phosphorous-limited cells (N(L)P(L)) showed maximum lipid productivity (49.1±0.41 mg/L/d), 1.47-fold higher than the control. Nitrogen depletion resulted in reduced cell size with large lipid droplets encompassing most of the intracellular space, while discrete lipid bodies were observed under nitrogen sufficiency. Synergistic N/P starvation showed a more pronounced effect on photosynthetic pigments than the individual deprivations. Phosphorous deficiency along with N starvation exhibited a 17.12% decline in carbohydrate, while no change was recorded in nitrogen-sufficient cells. The optimum N(L)P(L) concentration showed a balance between biomass and lipid by maintaining intermediate cell size, pigments, carbohydrate and proteins. The FAME profile showed C14-C18 carbon chains in N(L)P(L) cells with biodiesel properties comparable to plant oil methyl esters. Hence, synergistic N/P limitation was effective for enhancing lipid productivity with reduced consumption of nutrients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Accounting for twin births in sample size calculations for randomised trials.

    PubMed

    Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J

    2018-05-04

    Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
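    A minimal Python sketch of how an ICC feeds into this kind of adjustment, using the standard design-effect inflation for clusters of size two; the numbers are placeholders and this is not the authors' Excel/Shiny calculator, and trials that mix twins and singletons need a more careful treatment.

        # Inflate an "independent infants" sample size for clustering within twin pairs.
        from math import ceil

        def inflate_for_twins(n_independent, icc, cluster_size=2):
            """Design effect = 1 + (m - 1) * ICC for clusters of size m (m = 2 for twins)."""
            design_effect = 1 + (cluster_size - 1) * icc
            return ceil(n_independent * design_effect)

        print(inflate_for_twins(300, icc=0.5))    # 450 infants needed
        print(inflate_for_twins(300, icc=-0.1))   # a negative ICC reduces the requirement: 270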

  20. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    [Symbol list excerpt] ... Sequence; dj: estimate of the desired signal; DEL: sampling time interval; DS: direct sequence; c: sufficient statistic; E/T: signal power; Erfc: complementary error function... Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the... reference code and then passed through a correlation detector whose output is the sufficient statistic. Using a threshold device and the sufficient...

  1. DNA quality and quantity from up to 16 years old post-mortem blood stored on FTA cards.

    PubMed

    Rahikainen, Anna-Liina; Palo, Jukka U; de Leeuw, Wiljo; Budowle, Bruce; Sajantila, Antti

    2016-04-01

    Blood samples preserved on FTA cards offer unique opportunities for genetic research. DNA recovered from these cards should be stable for long periods of time. However, it is not well established how well DNA stored on FTA cards for substantial time periods meets the demands of forensic or genomic DNA analyses, especially for post-mortem (PM) samples in which the quality can vary upon initial collection. The aim of this study was to evaluate the time-dependent degradation in the quality and quantity of DNA extracted from up to 16-year-old post-mortem bloodstained FTA cards. Four random FTA samples from eight time points spanning 1998 to 2013 (n=32) were collected and extracted in triplicate. The quantity and quality of the extracted DNA samples were determined with the Quantifiler® Human Plus (HP) Quantification kit. Internal sample and sample-to-sample variation were evaluated by comparing recovered DNA yields. The DNA from the triplicate samplings was subsequently combined and normalized for further analysis. The practical effect of degradation on DNA quality was evaluated from normalized samples both with forensic and pharmacogenetic target markers. Our results suggest that (1) a PM change, e.g. blood clotting prior to sampling, affects the recovered DNA yield, creating both internal and sample-to-sample variation; (2) a negative correlation between the FTA card storage time and DNA quantity (r=-0.836 at the 0.01 level) was observed; (3) a positive correlation (r=0.738 at the 0.01 level) was found between FTA card storage time and degradation levels. However, no inhibition was observed with the method used. The effect of degradation was manifested clearly with functional applications. Although complete STR-profiles were obtained for all samples, there was evidence of degradation manifested as decreased peak heights in the larger-sized amplicons. Lower amplification success was notable with the large 5.1 kb CYP2D6 gene fragment, which strongly supports degradation of the stored samples. According to our results, DNA stored on FTA cards is rather stable over a long time period. DNA extracted from this storage medium can be used for human identification purposes as the method used is sufficiently sensitive and amplicon sizes tend to be <400 bp. However, DNA integrity was affected during storage. This effect should be taken into account depending on the intended application, especially if high quality DNA and long PCR amplicons are required. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Public Opinion Polls, Chicken Soup and Sample Size

    ERIC Educational Resources Information Center

    Nguyen, Phung

    2005-01-01

    Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
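    The statistical point can be seen in the usual margin-of-error formula for a proportion: it is driven by the absolute sample size n, and the finite-population correction only matters when n is an appreciable fraction of the population N. A small Python sketch with placeholder numbers:

        # Margin of error for a proportion: driven by absolute n, not by n/N (illustrative).
        from math import sqrt

        def margin_of_error(n, population=None, p=0.5, z=1.96):
            moe = z * sqrt(p * (1 - p) / n)
            if population is not None:                       # finite-population correction
                moe *= sqrt((population - n) / (population - 1))
            return moe

        # The same n = 1000 sample gives nearly identical accuracy for very different "pots".
        print(round(margin_of_error(1000, population=50_000), 4))        # ~0.0307
        print(round(margin_of_error(1000, population=300_000_000), 4))   # ~0.0310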

  3. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

    To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: A survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study. Another study reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
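    One commonly cited way of doing the calculation these authors found to be missing is to size the study so that sensitivity (or specificity) is estimated to a chosen precision and then inflate for disease prevalence (a Buderer-style calculation). The Python sketch below is a generic illustration with placeholder inputs, not a reconstruction of any surveyed study.

        # Sample size for estimating sensitivity to a target precision; illustrative only.
        from math import ceil
        from scipy.stats import norm

        def n_for_sensitivity(expected_sens, precision, prevalence, alpha=0.05):
            """precision is the desired half-width of the confidence interval for sensitivity."""
            z = norm.ppf(1 - alpha / 2)
            n_diseased = z ** 2 * expected_sens * (1 - expected_sens) / precision ** 2
            return ceil(n_diseased / prevalence)    # total subjects to recruit

        print(n_for_sensitivity(expected_sens=0.85, precision=0.05, prevalence=0.5))   # about 392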

  4. Impression cytology: a novel sampling technique for conjunctival cytology of the feline eye.

    PubMed

    Eördögh, Réka; Schwendenwein, Ilse; Tichy, Alexander; Nell, Barbara

    2015-07-01

    Impression cytology is a noninvasive investigation of the ocular surface. It uses the adhesive features of different filter papers to collect a monolayer of epithelial cells from the cornea and/or conjunctiva. Samples obtained by impression cytology exhibit all characteristics of an ideal cytology specimen. The aim of this study was to test the feasibility of impression cytology and determine the most appropriate filter paper to achieve maximum diagnostic value in the feline eye. Ten healthy cats were studied. The study was conducted in two phases. In the first phase, eight different filter papers (FPs) with various pore sizes were tested: 3.0-, 1.2-, 0.8-, 0.45-, 0.22-, 0.05- and 0.025-μm cellulose acetate papers and a 0.4-μm Biopore membrane (BM). Samples were obtained from the superior bulbar and from the inferior palpebral conjunctiva. In the second phase, three different sampling methods - with and without topical anesthesia, and with topical anesthesia and drying of the conjunctiva - were compared employing the BM encased in the intended BM device (BMD). Samples were evaluated for cellularity and quality of cells. In the first phase, samples obtained from the superior bulbar conjunctiva with the BM had the highest cellularity and best cell quality. In the second phase, BMD with topical anesthesia and additional drying of the conjunctiva was the best method. The BMD may prove to be a suitable diagnostic tool for clinicians. Sampling is quick, processing is simple, and a large area of intact cells can be harvested. © 2014 American College of Veterinary Ophthalmologists.

  5. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
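    A minimal Python sketch of the kind of adjustment recommended above: take an upper confidence limit (UCL) of a pilot-sample SD via the chi-square distribution and plug it into a standard two-group sample size formula. The pilot values and confidence level are placeholders, and the exact percentile to use should follow the recommendations in the article.

        # Plan sample size from an upper confidence limit of the pilot SD; illustrative only.
        from math import ceil, sqrt
        from scipy.stats import chi2, norm

        def sd_ucl(sample_sd, n, level=0.60):
            """One-sided upper confidence limit for the population SD at the given level."""
            return sample_sd * sqrt((n - 1) / chi2.ppf(1 - level, n - 1))

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * sd / delta) ** 2)

        pilot_sd, pilot_n = 40.0, 10
        print(n_per_group(delta=20, sd=pilot_sd))                    # naive: pilot SD used directly
        print(n_per_group(delta=20, sd=sd_ucl(pilot_sd, pilot_n)))   # larger, more conservative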

  6. 33 CFR 110.232 - Southeast Alaska.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... explosives anchorage. (5) A wooden vessel must: (i) Be fitted with a radar reflector screen of metal of sufficient size to permit target indication on the radar screen of commercial type radar; or (ii) Have steel...

  7. 33 CFR 110.232 - Southeast Alaska.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... explosives anchorage. (5) A wooden vessel must: (i) Be fitted with a radar reflector screen of metal of sufficient size to permit target indication on the radar screen of commercial type radar; or (ii) Have steel...

  8. 33 CFR 110.232 - Southeast Alaska.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... explosives anchorage. (5) A wooden vessel must: (i) Be fitted with a radar reflector screen of metal of sufficient size to permit target indication on the radar screen of commercial type radar; or (ii) Have steel...

  9. 33 CFR 110.232 - Southeast Alaska.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... explosives anchorage. (5) A wooden vessel must: (i) Be fitted with a radar reflector screen of metal of sufficient size to permit target indication on the radar screen of commercial type radar; or (ii) Have steel...

  10. 33 CFR 110.232 - Southeast Alaska.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... explosives anchorage. (5) A wooden vessel must: (i) Be fitted with a radar reflector screen of metal of sufficient size to permit target indication on the radar screen of commercial type radar; or (ii) Have steel...

  11. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  12. The application of LANDSAT-1 imagery for monitoring strip mines in the new river watershed in northeast Tennessee, part 2

    NASA Technical Reports Server (NTRS)

    Shahrokhi, F. (Principal Investigator); Sharber, L. A.

    1977-01-01

    The author has identified the following significant results. LANDSAT imagery and supplementary aircraft photography of the New River drainage basin were subjected to a multilevel analysis using conventional photointerpretation methods, densitometric techniques, multispectral analysis, and statistical tests to determine the accuracy of LANDSAT-1 imagery for measuring strip mines of common size. The LANDSAT areas were compared with low altitude measurements. The average accuracy over all the mined land sample areas mapped from LANDSAT-1 was 90%. The discrimination of strip mine subcategories is somewhat limited on LANDSAT imagery. A mine site, whether active or inactive, can be inferred from lack of vegetation, shape, or image texture. Mine ponds are difficult or impossible to detect because of their small size and turbidity. Unless bordered and contrasted with vegetation, haulage roads are impossible to delineate. Preparation plants and refuse areas are not detectable. Density slicing of LANDSAT band 7 proved most useful in the detection of reclamation progress within the mined areas. For most state requirements for year-round monitoring of surface mined land, LANDSAT is of limited value. However, for periodic updating of regional surface maps, LANDSAT may provide sufficient accuracies for some users.

  13. The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory

    PubMed Central

    Bosbach, Wolfram A.

    2015-01-01

    Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603

  14. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/sec with an Ultrasound 3D probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a Texas Instruments TMS320C6678 DSP.
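    To put the stated data rate in perspective, the back-of-envelope calculation below (using only the volume dimensions and acquisition rate quoted above) shows the real-time budget the filter must meet per volume; it is an illustration, not part of the study's benchmark code.

```python
# Real-time budget check using the abstract's numbers: a 512x256x128 volume
# arriving at 10 MVoxels/s must be filtered before the next volume completes.
nx, ny, nz = 512, 256, 128
rate_voxels_per_s = 10e6

voxels_per_volume = nx * ny * nz                   # 16,777,216 voxels
budget_s = voxels_per_volume / rate_voxels_per_s   # ~1.68 s per volume
print(f"{voxels_per_volume:,} voxels -> {budget_s:.2f} s processing budget per volume")
```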

  15. Reflectance of vegetation, soil, and water

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J.; Gerbermann, A. H. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Iron-deficient and normal grain sorghum plants were sufficiently different spectrally in ERTS-1 band 5 CCT data to detect chlorotic sorghum areas 2.8 acres (1.1 hectares) or larger in size in computer printouts of the MSS data. The ratio of band 5 to band 7, or band 7 minus band 5, relates to vegetation ground cover conditions and helps to select training samples representative of differing vegetation maturity or vigor classes and to estimate ground cover or green vegetation density in the absence of ground information. The four plant parameters (leaf area index, plant population, plant cover, and plant height) explained 87% to 93% of the variability in band 6 digital counts and from 59% to 90% of the variation in bands 4 and 5. A ground area 2244 acres in size was classified on a pixel-by-pixel basis using simultaneously acquired aircraft support and ERTS-1 data. Overall recognition for vegetables, immature crops and mixed shrubs, and bare soil categories was 64.5% for aircraft and 59.6% for spacecraft data, respectively. Overall recognition results on a per-field basis were 61.8% for aircraft and 62.8% for ERTS-1 data.

  16. Effects of perceptual body image distortion and early weight gain on long-term outcome of adolescent anorexia nervosa.

    PubMed

    Boehm, Ilka; Finke, Beatrice; Tam, Friederike I; Fittig, Eike; Scholz, Michael; Gantchev, Krassimir; Roessner, Veit; Ehrlich, Stefan

    2016-12-01

    Anorexia nervosa (AN), a severe mental disorder with an onset during adolescence, has been found to be difficult to treat. Identifying variables that predict long-term outcome may help to develop better treatment strategies. Since body image distortion and weight gain are central elements of diagnosis and treatment of AN, the current study investigated perceptual body image distortion, defined as the accuracy of evaluating one's own perceived body size in relation to the actual body size, as well as total and early weight gain during inpatient treatment as predictors for long-term outcome in a sample of 76 female adolescent AN patients. Long-term outcome was defined by physical, psychological and psychosocial adjustment using the Morgan-Russell outcome assessment schedule as well as by the mere physical outcome consisting of menses and/or BMI approximately 3 years after treatment. Perceptual body image distortion and early weight gain predicted long-term outcome (explained variance 13.3 %), but not the physical outcome alone. This study provides first evidence for an association of perceptual body image distortion with long-term outcome of adolescent anorexia nervosa and underlines the importance of sufficient early weight gain.

  17. Ly α and UV Sizes of Green Pea Galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huan; Wang, Junxian; Malhotra, Sangeeta

    Green Peas are nearby analogs of high-redshift Lyα-emitting galaxies (LAEs). To probe their Lyα escape, we study the spatial profiles of Lyα and UV continuum emission of 24 Green Pea galaxies using the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope. We extract the spatial profiles of Lyα emission from their 2D COS spectra, and of the UV continuum from both 2D spectra and NUV images. The Lyα emission shows more extended spatial profiles than the UV continuum, in most Green Peas. The deconvolved full width at half maximum of the Lyα spatial profile is about 2–4 times that of the UV continuum, in most cases. Because Green Peas are analogs of high-z LAEs, our results suggest that most high-z LAEs probably have larger Lyα sizes than UV sizes. We also compare the spatial profiles of Lyα photons at blueshifted and redshifted velocities in eight Green Peas with sufficient data quality, and find that the blue wing of the Lyα line has a larger spatial extent than the red wing in four Green Peas with comparatively weak blue Lyα line wings. We show that Green Peas and MUSE z = 3–6 LAEs have similar Lyα and UV continuum sizes, which probably suggests that starbursts in both low-z and high-z LAEs drive similar gas outflows illuminated by Lyα light. Five Lyman continuum (LyC) leakers in this sample have similar Lyα to UV continuum size ratios (∼1.4–4.3) to the other Green Peas, indicating that their LyC emissions escape through ionized holes in the interstellar medium.

  18. Total-body creatine pool size and skeletal muscle mass determination by creatine-(methyl-D3) dilution in rats.

    PubMed

    Stimpson, Stephen A; Turner, Scott M; Clifton, Lisa G; Poole, James C; Mohammed, Hussein A; Shearer, Todd W; Waitt, Greg M; Hagerty, Laura L; Remlinger, Katja S; Hellerstein, Marc K; Evans, William J

    2012-06-01

    There is currently no direct, facile method to determine total-body skeletal muscle mass for the diagnosis and treatment of skeletal muscle wasting conditions such as sarcopenia, cachexia, and disuse. We tested in rats the hypothesis that the enrichment of creatinine-(methyl-d3) (D3-creatinine) in urine after a defined oral tracer dose of D3-creatine can be used to determine creatine pool size and skeletal muscle mass. We determined 1) an oral tracer dose of D3-creatine that was completely bioavailable with minimal urinary spillage and sufficient enrichment in the body creatine pool for detection of D3-creatine in muscle and D3-creatinine in urine, and 2) the time to isotopic steady state. We used cross-sectional studies to compare total creatine pool size determined by the D3-creatine dilution method to lean body mass determined by independent methods. The tracer dose of D3-creatine (<1 mg/rat) was >99% bioavailable with 0.2-1.2% urinary spillage. Isotopic steady state was achieved within 24-48 h. Creatine pool size calculated from urinary D3-creatinine enrichment at 72 h significantly increased with muscle accrual in rat growth, significantly decreased with dexamethasone-induced skeletal muscle atrophy, was correlated with lean body mass (r = 0.9590; P < 0.0001), and corresponded to predicted total muscle mass. Total-body creatine pool size and skeletal muscle mass can thus be accurately and precisely determined by an orally delivered dose of D3-creatine followed by the measurement of D3-creatinine enrichment in a single urine sample and is promising as a noninvasive tool for the clinical determination of skeletal muscle mass.

  19. Local health department epidemiologic capacity: a stratified cross-sectional assessment describing the quantity, education, training, and perceived competencies of epidemiologic staff.

    PubMed

    O'Keefe, Kaitlin A; Shafir, Shira C; Shoaf, Kimberley I

    2013-01-01

    Local health departments (LHDs) must have sufficient numbers of staff functioning in an epidemiologic role with proper education, training, and skills to protect the health of communities they serve. This pilot study was designed to describe the composition, training, and competency level of LHD staff and examine the hypothesis that potential disparities exist between LHDs serving different sized populations. Cross-sectional surveys were conducted with directors and epidemiologic staff from a sample of 100 LHDs serving jurisdictions of varied sizes. Questionnaires included inquiries regarding staff composition, education, training, and measures of competency modeled on previously conducted studies by the Council of State and Territorial Epidemiologists. Number of epidemiologic staff, academic degree distribution, epidemiologic training, and both director and staff confidence in task competencies were calculated for each LHD size stratum. Disparities in measurements were observed in LHDs serving different sized populations. LHDs serving small populations reported a smaller average number of epidemiologic staff than those serving larger jurisdictions. As size of population served increased, percentages of staff and directors holding bachelor's and master's degrees increased, while those holding RN degrees decreased. A higher degree of perceived competency of staff in most task categories was reported in LHDs serving larger populations. LHDs serving smaller populations reported fewer epidemiologic staff and therefore might benefit from additional resources. Differences observed in staff education, training, and competencies suggest that enhanced epidemiologic training might be particularly needed in LHDs serving smaller populations. These results can be used as a baseline for future research aimed at identifying areas where training and personnel resources might be particularly needed to increase the capabilities of LHDs.

  20. Population Size and the Rate of Language Evolution: A Test Across Indo-European, Austronesian, and Bantu Languages

    PubMed Central

    Greenhill, Simon J.; Hua, Xia; Welsh, Caela F.; Schneemann, Hilde; Bromham, Lindell

    2018-01-01

    What role does speaker population size play in shaping rates of language evolution? There has been little consensus on the expected relationship between rates and patterns of language change and speaker population size, with some predicting faster rates of change in smaller populations, and others expecting greater change in larger populations. The growth of comparative databases has allowed population size effects to be investigated across a wide range of language groups, with mixed results. One recent study of a group of Polynesian languages revealed greater rates of word gain in larger populations and greater rates of word loss in smaller populations. However, that test was restricted to 20 closely related languages from small Oceanic islands. Here, we test if this pattern is a general feature of language evolution across a larger and more diverse sample of languages from both continental and island populations. We analyzed comparative language data for 153 pairs of closely-related sister languages from three of the world's largest language families: Austronesian, Indo-European, and Niger-Congo. We find some evidence that rates of word loss are significantly greater in smaller languages for the Indo-European comparisons, but we find no significant patterns in the other two language families. These results suggest either that the influence of population size on rates and patterns of language evolution is not universal, or that it is sufficiently weak that it may be overwhelmed by other influences in some cases. Further investigation, for a greater number of language comparisons and a wider range of language features, may determine which of these explanations holds true. PMID:29755387

  1. Lyα and UV Sizes of Green Pea Galaxies

    NASA Astrophysics Data System (ADS)

    Yang, Huan; Malhotra, Sangeeta; Rhoads, James E.; Leitherer, Claus; Wofford, Aida; Jiang, Tianxing; Wang, Junxian

    2017-03-01

    Green Peas are nearby analogs of high-redshift Lyα-emitting galaxies (LAEs). To probe their Lyα escape, we study the spatial profiles of Lyα and UV continuum emission of 24 Green Pea galaxies using the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope. We extract the spatial profiles of Lyα emission from their 2D COS spectra, and of the UV continuum from both 2D spectra and NUV images. The Lyα emission shows more extended spatial profiles than the UV continuum, in most Green Peas. The deconvolved full width at half maximum of the Lyα spatial profile is about 2-4 times that of the UV continuum, in most cases. Because Green Peas are analogs of high z LAEs, our results suggest that most high-z LAEs probably have larger Lyα sizes than UV sizes. We also compare the spatial profiles of Lyα photons at blueshifted and redshifted velocities in eight Green Peas with sufficient data quality, and find that the blue wing of the Lyα line has a larger spatial extent than the red wing in four Green Peas with comparatively weak blue Lyα line wings. We show that Green Peas and MUSE z = 3-6 LAEs have similar Lyα and UV continuum sizes, which probably suggests that starbursts in both low-z and high-z LAEs drive similar gas outflows illuminated by Lyα light. Five Lyman continuum (LyC) leakers in this sample have similar Lyα to UV continuum size ratios (˜1.4-4.3) to the other Green Peas, indicating that their LyC emissions escape through ionized holes in the interstellar medium.

  2. Pore formation during dehydration of a polycrystalline gypsum sample observed and quantified in a time-series synchrotron X-ray micro-tomography experiment

    NASA Astrophysics Data System (ADS)

    Fusseis, F.; Schrank, C.; Liu, J.; Karrech, A.; Llana-Fúnez, S.; Xiao, X.; Regenauer-Lieb, K.

    2012-03-01

    We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With on-going dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration happens efficiently, resulting in new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
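    The square-root fit mentioned above is straightforward to reproduce; the sketch below fits x(t) = a * sqrt(t) to an invented front-position series purely to illustrate the diffusion-style scaling, not the experiment's actual measurements.

```python
# Illustrative fit of the square-root law x(t) = a * sqrt(t) used to describe
# the advancing dehydration front; the data points below are invented.
import numpy as np
from scipy.optimize import curve_fit

t_min = np.array([30, 70, 110, 150, 190, 230, 270, 310], dtype=float)
front_um = np.array([190, 300, 370, 430, 490, 540, 580, 620], dtype=float)

def sqrt_law(t, a):
    return a * np.sqrt(t)

(a_fit,), _ = curve_fit(sqrt_law, t_min, front_um)
print(f"front position ≈ {a_fit:.1f} µm * sqrt(t in minutes)")
```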

  3. Development of a small-sized generator of ozonated water using an electro-conductive diamond electrode.

    PubMed

    Sekido, Kota; Kitaori, Noriyuki

    2008-12-01

    A small-sized generator of ozonated water was developed using an electro-conductive diamond. We studied the optimum conditions for producing ozonated water. As a result, we developed a small-sized generator of ozonated water driven by a dry-cell for use in the average household. This generator was easily able to produce ozonated water with an ozone concentration (over 4 mg/L) sufficient for disinfection. In addition, we verified the high disinfecting performance of the water produced in an actual hospital.

  4. Preparation of a Co-doped hierarchically porous carbon from Co/Zn-ZIF: An efficient adsorbent for the extraction of triazine herbicides from environment water and white gourd samples.

    PubMed

    Jiao, Caina; Li, Menghua; Ma, Ruiyang; Wang, Chun; Wu, Qiuhua; Wang, Zhi

    2016-05-15

    A Co-doped hierarchically porous carbon (Co/HPC) was synthesized through a facile carbonization process by using Co/ZIF-8 as the precursor. The textures of the Co/HPC were investigated by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, vibrating sample magnetometry and nitrogen adsorption-desorption isotherms. The results showed that the Co/HPC is in good polyhedral shape with uniform size, sufficient magnetism, high surface area as well as hierarchical pores (micro-, meso- and macropores). To evaluate the extraction performance of the Co/HPC, it was applied as a magnetic adsorbent for the enrichment of triazine herbicides from environment water and white gourd samples prior to high performance liquid chromatographic analysis. The main parameters that affected the extraction efficiency were investigated. Under the optimum conditions, a good linearity for the four triazine herbicides was achieved with the correlation coefficients (r) higher than 0.9970. The limits of detection, based on S/N=3, were 0.02 ng/mL for water and 0.1-0.2 ng/g for white gourd samples, respectively. The recoveries of all the analytes for the method fell in the range from 80.3% to 120.6%. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. VizieR Online Data Catalog: Lyα profile in 43 Green Pea galaxies (Yang+, 2017)

    NASA Astrophysics Data System (ADS)

    Yang, H.; Malhotra, S.; Gronke, M.; Rhoads, J. E.; Leitherer, C.; Wofford, A.; Jiang, T.; Dijkstra, M.; Tilvi, V.; Wang, J.

    2018-03-01

    In SDSS DR7, a sample of 251 Green Peas was observed as serendipitous spectroscopic targets (Cardamone+ 2009MNRAS.399.1191C). A subset of 66 Green Peas have sufficient signal-to-noise ratio (S/N) in both continuum and emission lines (Hα, Hβ, and [OIII]λ5007) to study galactic properties. In Paper I (Yang+ 2016ApJ...820..130Y), we matched these 66 Green Peas with the COS archive and studied Lyα escape in a sample of 12 Green Peas with COS UV spectra. To address the bias and expand the sample size, we took the Lyα spectra of 20 additional Green Peas (PI S. Malhotra, GO 14201). We also supplement this sample with 11 additional Green Peas from published literature. In total, we have 43 Green Peas from six HST programs -- 20 galaxies from GO 14201 (PI S. Malhotra), 9 galaxies from GO 12928 (PI A. Henry; Henry+ 2015ApJ...809...19H), 7 galaxies from GO 11727 and GO 13017 (PI T. Heckman; Heckman+ 2011ApJ...730....5H ; Alexandroff+ 2015ApJ...810..104A), 2 galaxies from GO 13293 (PI A. Jaskot; Jaskot & Oey 2014ApJ...791L..19J), and 5 galaxies from GO 13744 (PI T. Thuan; Izotov+ 2016MNRAS.461.3683I). (4 data files).

  6. Density and population estimate of gibbons (Hylobates albibarbis) in the Sabangau catchment, Central Kalimantan, Indonesia.

    PubMed

    Cheyne, Susan M; Thompson, Claire J H; Phillips, Abigail C; Hill, Robyn M C; Limin, Suwido H

    2008-01-01

    We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population of Hylobates albibarbis in the Sabangau catchment, Central Kalimantan, Indonesia, was surveyed from July to December 2005 using methods established previously. In addition, auditory sampling was complemented by detailed behavioural data on six habituated groups within the study area. Here we compare results from this study to those of a 1-month study conducted in 2004. The total population of the Sabangau catchment is estimated to be in the tens of thousands, though numbers, distribution and density for the different forest subtypes vary considerably. We propose that future density surveys of gibbons must include data from all forest subtypes where gibbons are found and that extrapolating from one forest subtype is likely to yield inaccurate density and population estimates. We also propose that auditory censuses be carried out using at least three listening posts (LPs) in order to increase the area sampled and the chances of hearing groups. Our results suggest that the Sabangau catchment contains one of the largest remaining contiguous populations of Bornean agile gibbon.

  7. Working toward Self-Sufficiency.

    ERIC Educational Resources Information Center

    Caplan, Nathan

    1985-01-01

    Upon arrival in the United States, the Southeast Asian "Boat People" faced a multitude of problems that would seem to have hindered their achieving economic self-sufficiency. Nonetheless, by the time of a 1982 research study which interviewed nearly 1,400 refugee households, 25 percent of all the households in the sample had achieved…

  8. An Analysis on the Detection of Biological Contaminants Aboard Aircraft

    PubMed Central

    Hwang, Grace M.; DiCarlo, Anthony A.; Lin, Gene C.

    2011-01-01

    The spread of infectious disease via commercial airliner travel is a significant and realistic threat. To shed some light on the feasibility of detecting airborne pathogens, a sensor integration study has been conducted and computational investigations of contaminant transport in an aircraft cabin have been performed. Our study took into consideration sensor sensitivity as well as the time-to-answer, size, weight and the power of best available commercial off-the-shelf (COTS) devices. We conducted computational fluid dynamics simulations to investigate three types of scenarios: (1) nominal breathing (up to 20 breaths per minute) and coughing (20 times per hour); (2) nominal breathing and sneezing (4 times per hour); and (3) nominal breathing only. Each scenario was implemented with one or seven infectious passengers expelling air and sneezes or coughs at the stated frequencies. Scenario 2 was implemented with two additional cases in which one infectious passenger expelled 20 and 50 sneezes per hour, respectively. All computations were based on 90 minutes of sampling using specifications from a COTS aerosol collector and biosensor. Only biosensors that could provide an answer in under 20 minutes without any manual preparation steps were included. The principal finding was that the steady-state bacteria concentrations in aircraft would be high enough to be detected in the case where seven infectious passengers are exhaling under scenarios 1 and 2 and where one infectious passenger is actively exhaling in scenario 2. Breathing alone failed to generate sufficient bacterial particles for detection, and none of the scenarios generated sufficient viral particles for detection to be feasible. These results suggest that more sensitive sensors than the COTS devices currently available and/or sampling of individual passengers would be needed for the detection of bacteria and viruses in aircraft. PMID:21264266

  9. Reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy and American Society for Reproductive Medicine classification systems for congenital uterine anomalies detected using three-dimensional ultrasonography.

    PubMed

    Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan

    2015-09-01

    To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results obtained with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uterus (n = 62 women, randomly selected) constituted the study. These were classified based on real-time three-dimensional ultrasound single volume transvaginal (or transrectal in the case of virgins, 4 cases) ultrasonography findings, which were assessed by an expert rater based on the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ >0.60 and >0.80), but the interpretation of the clinically relevant cutoffs of κ-values showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if their sufficient reliability can be confirmed real-time in a large sample size. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
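    For readers unfamiliar with the metric, the short sketch below computes Cohen's κ for two raters and compares it with the clinically oriented 0.90 cutoff discussed above; the diagnoses and labels are invented and the snippet is not connected to the study's data.

```python
# Minimal two-rater reliability check with Cohen's kappa (invented labels);
# the 0.90 threshold mirrors the clinically relevant cutoff discussed above.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["normal", "septate", "normal", "bicorporeal", "septate", "normal"]
rater_2 = ["normal", "septate", "normal", "septate", "septate", "normal"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}, clinically sufficient: {kappa > 0.90}")
```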

  10. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.

  11. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159
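    The core idea described in these two records, rescaling a historical epidemic curve and then resampling cases from it, can be sketched in a few lines. The snippet below shows only a homothetic time-axis rescaling followed by multinomial resampling; it is a simplified stand-in for the paper's family of algorithms, with an invented historical curve.

```python
# Simplified sketch: stretch a historical daily-case curve to a target duration
# (homothetic transformation), then redraw a target number of cases from the
# rescaled shape. Only the multinomial resampling variant is illustrated.
import numpy as np

rng = np.random.default_rng(0)
historical = np.array([1, 3, 8, 15, 22, 17, 9, 4, 2, 1], dtype=float)  # invented

def simulate_outbreak(curve, target_days, target_cases):
    old_t = np.linspace(0.0, 1.0, len(curve))
    new_t = np.linspace(0.0, 1.0, target_days)
    shape = np.interp(new_t, old_t, curve)      # rescaled outbreak shape
    p = shape / shape.sum()                     # daily probabilities
    return rng.multinomial(target_cases, p)     # simulated daily counts

print(simulate_outbreak(historical, target_days=21, target_cases=120))
```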

  12. 7 CFR 58.244 - Number of samples.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Number of samples. 58.244 Section 58.244 Agriculture... Procedures § 58.244 Number of samples. As many samples shall be taken from each dryer production lot as is necessary to assure proper composition and quality control. A sufficient number of representative samples...

  13. 7 CFR 58.244 - Number of samples.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Number of samples. 58.244 Section 58.244 Agriculture... Procedures § 58.244 Number of samples. As many samples shall be taken from each dryer production lot as is necessary to assure proper composition and quality control. A sufficient number of representative samples...

  14. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    NASA Astrophysics Data System (ADS)

    Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine

    2017-01-01

    Hot plate methods under steady state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of the 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. On one hand the latter method can be used only if the ratio thickness/width of the sample is sufficiently low, and on the other hand the guarded hot plate method requires large width samples (typical cross section of 0.6 × 0.6 m²). That is why neither method can be used for low width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 compared to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method, by applying the 1D heat flux model. It will be shown that these optimal values do not depend on the size or on the thermal conductivity of samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the ratio thickness/width up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m².
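    The underlying 1D steady-state estimate is simple enough to show directly; the snippet below computes lambda = (Q/A) * e / (T0 - T1) for a hypothetical small sample, while the paper's actual contribution, the optimal choice of T0 and T1 relative to Ta, is not reproduced here.

```python
# 1D steady-state hot plate estimate with invented values: thermal conductivity
# from the heat flux through the sample and the temperature drop across it.
def thermal_conductivity(q_watts, area_m2, thickness_m, t0_k, t1_k):
    """lambda = (Q / A) * e / (T0 - T1) for one-dimensional steady-state flux."""
    flux = q_watts / area_m2                 # W m^-2 through the sample
    return flux * thickness_m / (t0_k - t1_k)

# Hypothetical 45 mm x 45 mm cross-section, 12 mm thick, 10 K across the sample.
print(thermal_conductivity(q_watts=0.10, area_m2=0.045 * 0.045,
                           thickness_m=0.012, t0_k=305.0, t1_k=295.0))
```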

  15. Size fractionation as a tool for separating charcoal of different fuel source and recalcitrance in the wildfire ash layer.

    PubMed

    Mastrolonardo, Giovanni; Hudspith, Victoria A; Francioso, Ornella; Rumpel, Cornelia; Montecchio, Daniela; Doerr, Stefan H; Certini, Giacomo

    2017-10-01

    Charcoal is a heterogeneous material exhibiting a diverse range of properties. This variability represents a serious challenge in studies that use the properties of natural charcoal for reconstructing wildfires history in terrestrial ecosystems. In this study, we tested the hypothesis that particle size is a sufficiently robust indicator for separating forest wildfire combustion products into fractions with distinct properties. For this purpose, we examined two different forest environments affected by contrasting wildfires in terms of severity: an eucalypt forest in Australia, which experienced an extremely severe wildfire, and a Mediterranean pine forest in Italy, which burned to moderate severity. We fractionated the ash/charcoal layers collected on the ground into four size fractions (>2, 2-1, 1-0.5, <0.5mm) and analysed them for mineral ash content, elemental composition, chemical structure (by IR spectroscopy), fuel source and charcoal reflectance (by reflected-light microscopy), and chemical/thermal recalcitrance (by chemical and thermal oxidation). At both sites, the finest fraction (<0.5mm) had, by far, the greatest mass. The C concentration and C/N ratio decreased with decreasing size fraction, while pH and the mineral ash content followed the opposite trend. The coarser fractions showed higher contribution of amorphous carbon and stronger recalcitrance. We also observed that certain fuel types were preferentially represented by particular size fractions. We conclude that the differences between ash/charcoal size fractions were most likely primarily imposed by fuel source and secondarily by burning conditions. Size fractionation can therefore serve as a valuable tool to characterise the forest wildfire combustion products, as each fraction displays a narrower range of properties than the whole sample. We propose the mineral ash content of the fractions as criterion for selecting the appropriate number of fractions to analyse. Copyright © 2016. Published by Elsevier B.V.

  16. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
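    For orientation, the sketch below evaluates the standard fixed-precision (Karandinos-type) sample-size expressions that the abstract's negative binomial and Taylor approaches are based on; the common k is the value quoted above, while the Taylor coefficients and target precision are placeholders rather than this study's fitted values.

```python
# Fixed-precision quadrat numbers so that SE/mean equals the target precision.
# Negative binomial: n = (1/m + 1/k) / D**2 ; Taylor's power law: n = a*m**(b-2) / D**2.
def n_negative_binomial(mean, k, precision=0.25):
    return (1.0 / mean + 1.0 / k) / precision**2

def n_taylor(mean, a, b, precision=0.25):
    return a * mean ** (b - 2.0) / precision**2

print(round(n_negative_binomial(mean=0.04, k=0.3742)))  # sparse tick population
print(round(n_negative_binomial(mean=0.50, k=0.3742)))  # denser population
print(round(n_taylor(mean=0.50, a=1.8, b=1.3)))         # placeholder a and b
```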

  17. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
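    A numerical reading of the two rules is given below under an assumed cost model (a fixed start-up cost plus per-subject costs, with a mild growth term so the first rule has an interior minimum); the dollar figures are invented and the search is a plain grid scan, not the paper's derivation.

```python
# Two cost-efficiency rules from the summary above, evaluated on an assumed
# cost model: (1) minimize average cost per subject; (2) minimize cost/sqrt(n).
def total_cost(n, fixed=100_000.0, per_subject=400.0, growth=1.0):
    # growth > 1 makes per-subject cost rise slowly with n (e.g., harder recruitment).
    return fixed + per_subject * n * growth ** (n / 100.0)

candidates = range(10, 2001)
rule1 = min(candidates, key=lambda n: total_cost(n, growth=1.05) / n)
rule2 = min(candidates, key=lambda n: total_cost(n) / n ** 0.5)

print("n minimizing average cost per subject:", rule1)
print("n minimizing total cost / sqrt(n):    ", rule2)   # = fixed/per_subject here
```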

  18. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/. RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
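    To make the per-gene trade-offs concrete, the rough sketch below approximates power for a single gene under a negative binomial model using a Wald/delta-method argument with a crude Bonferroni-style multiplicity adjustment. It is not the RnaSeqSampleSize algorithm (which uses an exact test and FDR control); the function name and all numbers are illustrative only.

```python
# Rough per-gene power approximation (Wald/delta method, assumed formulation):
# variance of the log fold change for NB counts is approx. (1/mu + dispersion)/n
# per group, and alpha is tightened for the number of genes tested.
import math
from scipy import stats

def nb_power(n_per_group, mean_count, dispersion, fold_change,
             alpha=0.05, n_tests=20_000):
    alpha_adj = alpha / n_tests                       # crude multiplicity control
    mu1, mu2 = mean_count, mean_count * fold_change
    var_logfc = ((1.0 / mu1 + dispersion) + (1.0 / mu2 + dispersion)) / n_per_group
    z_crit = stats.norm.ppf(1.0 - alpha_adj / 2.0)
    return stats.norm.cdf(abs(math.log(fold_change)) / math.sqrt(var_logfc) - z_crit)

print(f"{nb_power(n_per_group=30, mean_count=50, dispersion=0.2, fold_change=2.0):.2f}")
```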

  19. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
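    The asymptotic unconditional McNemar sample-size formula referred to above is compact enough to show; the sketch below evaluates it for hypothesized discordant-cell proportions (the specific proportions are invented, and the formula is the standard textbook version rather than code from this paper).

```python
# Asymptotic unconditional McNemar sample size (pairs needed) for paired binary
# data, using hypothesized discordant proportions p10 and p01 of the 2 x 2 table.
from scipy import stats

def pairs_needed(p10, p01, alpha=0.05, power=0.80):
    diff = p10 - p01               # hypothesized difference in discordant cells
    disc = p10 + p01               # total discordant proportion
    z_a = stats.norm.ppf(1.0 - alpha / 2.0)
    z_b = stats.norm.ppf(power)
    num = (z_a * disc ** 0.5 + z_b * (disc - diff ** 2) ** 0.5) ** 2
    return num / diff ** 2

print(round(pairs_needed(p10=0.20, p01=0.10)))   # pairs required, invented inputs
```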

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alstone, Peter; Jacobson, Arne; Mills, Evan

    Efforts to promote rechargeable electric lighting as a replacement for fuel-based light sources in developing countries are typically predicated on the notion that lighting service levels can be maintained or improved while reducing the costs and environmental impacts of existing practices. However, the extremely low incomes of those who depend on fuel-based lighting create a need to balance the hypothetically possible or desirable levels of light with those that are sufficient and affordable. In a pilot study of four night vendors in Kenya, we document a field technique we developed to simultaneously measure the effectiveness of lighting service provided by a lighting system and conduct a survey of lighting service demand by end-users. We took gridded illuminance measurements across each vendor's working and selling area, with users indicating the sufficiency of light at each point. User light sources included a mix of kerosene-fueled hurricane lanterns, pressure lamps, and LED lanterns. We observed illuminance levels ranging from just above zero to 150 lux. The LED systems markedly improved the lighting service levels over those provided by kerosene-fueled hurricane lanterns. Users reported that the minimum acceptable threshold was about 2 lux. The results also indicated that the LED lamps in use by the subjects did not always provide sufficient illumination over the desired retail areas. Our sample size is much too small, however, to reach any conclusions about requirements in the broader population. Given the small number of subjects and very specific type of user, our results should be regarded as indicative rather than conclusive. We recommend replicating the method at larger scales and across a variety of user types and contexts. Policymakers should revisit the subject of recommended illuminance levels regularly as LED technology advances and the price/service balance point evolves.

  1. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with changes in rock volume and (2) are insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view to effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
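    The two rules of thumb reported above translate directly into a simple screening check; the sketch below applies them to hypothetical imaging parameters (all numeric inputs are invented).

```python
# Screening check based on the reported thresholds: at least 10 voxels across a
# typical pore throat, and a field of view at least 5x the effective grain size.
def drp_envelope_ok(voxel_um, throat_um, fov_um, grain_um):
    resolution_ok = throat_um / voxel_um >= 10   # pore throats sufficiently resolved
    rev_ok = fov_um / grain_um >= 5              # REV proxy for siliciclastic rocks
    return resolution_ok, rev_ok

print(drp_envelope_ok(voxel_um=2.0, throat_um=25.0, fov_um=2000.0, grain_um=250.0))
# -> (True, True): 12.5 voxels per throat and a field-of-view/grain ratio of 8
```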

  2. Quantifying the utilization of medical devices necessary to detect postmarket safety differences: A case study of implantable cardioverter defibrillators.

    PubMed

    Bates, Jonathan; Parzynski, Craig S; Dhruva, Sanket S; Coppi, Andreas; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Shaw, Richard E; Warner, Frederick; Krumholz, Harlan M; Ross, Joseph S

    2018-06-12

    To estimate medical device utilization needed to detect safety differences among implantable cardioverter defibrillators (ICDs) generator models and compare these estimates to utilization in practice. We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared with actual medical device utilization. At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates. Copyright © 2018 John Wiley & Sons, Ltd.
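    As a rough companion to the utilization estimates above, the sketch below computes the person-years per device model needed to detect a given rate ratio between two Poisson adverse-event rates using a simple normal approximation; it is a two-group simplification, not the study's repeated sample-size estimation procedure, and the inputs merely echo figures quoted in the abstract.

```python
# Person-years per group to detect a rate ratio between two Poisson event rates
# (normal approximation). base_rate is in events per person-year.
from scipy import stats

def person_years_per_group(base_rate, rate_ratio, alpha=0.05, power=0.80):
    r1, r2 = base_rate, base_rate * rate_ratio
    z = stats.norm.ppf(1.0 - alpha / 2.0) + stats.norm.ppf(power)
    return z ** 2 * (r1 + r2) / (r1 - r2) ** 2

# 12.6 events per 100 person-years and a rate ratio of 1.25, as in the abstract.
print(round(person_years_per_group(base_rate=0.126, rate_ratio=1.25)))
```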

  3. Randomized Controlled Trial of the ShangRing for Adult Medical Male Circumcision: Safety, Effectiveness, and Acceptability of Using 7 Versus 14 Device Sizes.

    PubMed

    Feldblum, Paul J; Zulu, Robert; Linyama, David; Long, Sarah; Nonde, Thikazi Jere; Lai, Jaim Jou; Kashitala, Joshua; Veena, Valentine; Kasonde, Prisca

    2016-06-01

    To assess the safety, effectiveness, and acceptability of providing a reduced number of ShangRing sizes for adult voluntary medical male circumcision (VMMC) within routine service delivery in Lusaka, Zambia. We conducted a randomized controlled trial and enrolled 500 HIV-negative men aged 18-49 years at 3 clinics. Participants were randomized to 1 of 2 study arms (Standard Sizing arm vs Modified Sizing arm) in a 1:1 ratio. All 14 adult ShangRing sizes (40-26 mm inner diameter, each varying by 1 mm) were available in the Standard Sizing arm; the Modified Sizing arm used every other size (40, 38, 36, 34, 32, 30, 28 mm inner diameter). Each participant was scheduled for 2 follow-up visits: the removal visit (day 7 after placement) and the healing check visit (day 42 after placement), when they were evaluated for adverse events (AEs), pain, and healing. Four hundred and ninety-six men comprised the analysis population, with 255 in the Standard Sizing arm and 241 in the Modified Sizing arm. Three men experienced a moderate or severe AEs (0.6%), including 2 in the Standard Sizing arm (0.8%) and 1 in the Modified Sizing arm (0.4%). 73.2% of participants were completely healed at the scheduled day 42 healing check visit, with similar percentages across study arms. Virtually all (99.6%) men, regardless of study arm, stated that they were very satisfied or satisfied with the appearance of their circumcised penis, and 98.6% stated that they would recommend ShangRing circumcision to family/friends. The moderate/severe AE rate was low and similar in the 2 study arms, suggesting that provision of one-half the number of adult device sizes is sufficient for safe service delivery. Effectiveness, time to healing, and acceptability were similar in the study arms. The simplicity of the ShangRing technique, and its relative speed, could facilitate VMMC program goals. In addition, sufficiency of fewer device sizes would simplify logistics and inventory.

  4. System and method for liquid extraction electrospray-assisted sample transfer to solution for chemical analysis

    DOEpatents

    Kertesz, Vilmos; Van Berkel, Gary J.

    2016-07-12

    A system for sampling a surface includes a surface sampling probe comprising a solvent liquid supply conduit and a distal end, and a sample collector for suspending a sample collection liquid adjacent to the distal end of the probe. A first electrode provides a first voltage to the solvent liquid at the distal end of the probe. The first voltage produces a field sufficient to generate an electrospray plume at the distal end of the probe. A second electrode provides a second voltage and is positioned to produce a plume-directing field sufficient to direct the electrospray droplets and ions to the suspended sample collection liquid. The second voltage is less than the first voltage in absolute value. A voltage supply system supplies the voltages to the first electrode and the second electrode. The first electrode can apply the first voltage directly to the solvent liquid. A method for sampling a surface is also disclosed.

  5. Reflectance of metallic indium for solar energy applications

    NASA Technical Reports Server (NTRS)

    Bouquet, F. L.; Hasegawa, T.

    1984-01-01

    An investigation has been conducted in order to compile quantitative data on the reflective properties of metallic indium. The fabricated samples were of sufficiently high quality that differences from similar second-surface silvered mirrors were not apparent to the human eye. Three second-surface mirror samples were prepared by means of vacuum deposition techniques, yielding indium thicknesses of approximately 1000 A. Both hemispherical and specular measurements were made. It is concluded that metallic indium possesses a sufficiently high specular reflectance to be potentially useful in many solar energy applications.

  6. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
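    For the discrete-time case, the second-moment (mean-square) stability of a Markov jump linear system can be checked numerically with a well-known spectral-radius criterion; the sketch below states that commonly used criterion as an assumption rather than the paper's exact formulation, and the mode dynamics and transition matrix are invented.

```python
# Mean-square stability check for a discrete-time Markov jump linear system
# x_{k+1} = A_{theta_k} x_k: spectral radius of (P^T kron I) * blkdiag(A_i kron A_i)
# strictly below one (standard criterion, assumed here; all matrices invented).
import numpy as np
from scipy.linalg import block_diag

A = [np.array([[0.9, 0.2], [0.0, 0.7]]),   # mode 1 dynamics
     np.array([[1.1, 0.0], [0.3, 0.5]])]   # mode 2 dynamics
P = np.array([[0.8, 0.2],                  # row-stochastic mode transition matrix
              [0.4, 0.6]])

n = A[0].shape[0]
big = np.kron(P.T, np.eye(n * n)) @ block_diag(*[np.kron(Ai, Ai) for Ai in A])
rho = max(abs(np.linalg.eigvals(big)))
print(f"spectral radius = {rho:.3f} -> mean-square stable: {rho < 1}")
```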

  7. Plasticizing Effects of Polyamines in Protein-Based Films

    PubMed Central

    Sabbah, Mohammed; Di Pierro, Prospero; Giosafatto, C. Valeria L.; Esposito, Marilena; Mariniello, Loredana; Regalado-Gonzales, Carlos; Porta, Raffaele

    2017-01-01

    Zeta potential and nanoparticle size were determined on film forming solutions of native and heat-denatured proteins of bitter vetch as a function of pH and of different concentrations of the polyamines spermidine and spermine, both in the absence and presence of the plasticizer glycerol. Our results showed that both polyamines decreased the negative zeta potential of all samples below pH 8.0 as a consequence of their ionic interaction with proteins. At the same time, they increased nanoparticle size below pH 8.0 as a result of macromolecular aggregation. Using native protein solutions, handleable films were obtained only from samples containing either a minimum of 33 mM glycerol or 4 mM spermidine, or both compounds together at lower glycerol concentrations. However, 2 mM spermidine was sufficient to obtain handleable films from heat-treated samples without glycerol. Conversely, brittle materials were obtained with spermine alone, indicating that only spermidine was able to act as an ionic plasticizer. Lastly, both polyamines, mainly spermine, were found to act as “glycerol-like” plasticizers at concentrations higher than 5 mM under experimental conditions at which their amino groups are undissociated. Our findings open new perspectives in obtaining protein-based films by using aliphatic polycations as components. PMID:28489025

  8. Predicting discovery rates of genomic features.

    PubMed

    Gravel, Simon

    2014-06-01

    Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
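    The prediction problem above starts from the sample frequency spectrum of the pilot. The sketch below (toy numbers, not the paper's data, and not its jackknife or linear-programming extrapolation) shows the exact downsampling step those estimators build on: the expected number of distinct variants recovered in a random subsample of size m.

```python
# A minimal sketch (assumed toy data): exact "rarefaction" prediction of how many
# distinct variants a subsample of size m would contain, given the sample
# frequency spectrum of a pilot of n samples. The paper's jackknife and
# linear-programming estimators extrapolate beyond n; this only interpolates
# below n, but it is the quantity those methods start from.
from math import comb

def expected_variants(phi, n, m):
    """phi[i-1] = number of variants observed in exactly i of the n pilot samples.
    Returns E[number of distinct variants in a random subsample of size m]."""
    total = 0.0
    for i, count in enumerate(phi, start=1):
        p_missed = comb(n - i, m) / comb(n, m)   # all m draws avoid the i carriers
        total += count * (1.0 - p_missed)
    return total

# Hypothetical pilot: 100 samples with a typical skew toward rare variants.
n = 100
phi = [5000, 1200, 600, 350, 220] + [50] * 95   # singletons, doubletons, ...
for m in (10, 25, 50, 100):
    print(m, round(expected_variants(phi, n, m)))
```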

  9. Challenges of DNA-based mark-recapture studies of American black bears

    USGS Publications Warehouse

    Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.

    2008-01-01

    We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
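    For intuition on why sparse recaptures defeat the precision target above, the sketch below applies the simplest two-occasion Chapman (Lincoln-Petersen) estimator with hypothetical capture counts; the study itself fitted multi-occasion closed-population models, so this is illustration only.

```python
# A minimal sketch (hypothetical numbers, not the study's data): the two-occasion
# Chapman estimator of abundance and its coefficient of variation. The message is
# the same as in the study: when capture probabilities are low, few recaptures (m)
# are obtained and the CV grows well beyond the 20% target.
from math import sqrt

def chapman(n1, n2, m):
    """n1, n2 = animals detected on occasions 1 and 2; m = detected on both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, sqrt(var) / n_hat   # abundance estimate, CV

print(chapman(n1=60, n2=55, m=25))   # decent recapture rate: CV ~ 11%
print(chapman(n1=60, n2=55, m=5))    # sparse recaptures: CV ~ 34%
```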

  10. Comparison of Submental Blood Collection with the Retroorbital and Submandibular Methods in Mice (Mus musculus)

    PubMed Central

    Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L

    2016-01-01

    Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood currently are typically collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis, clotting, and the number of overall high-quality samples; however the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712

  11. Preanalytical Errors in Hematology Laboratory- an Avoidable Incompetence.

    PubMed

    HarsimranKaur, Vikram Narang; Selhi, Pavneet Kaur; Sood, Neena; Singh, Aminder

    2016-01-01

    Quality assurance in the hematology laboratory is a must to provide laboratory users with reliable test results of a high degree of precision and accuracy. Even after so many advances in hematology laboratory practice, pre-analytical errors remain a challenge for practicing pathologists. This study was undertaken with the objective of evaluating the types and frequency of preanalytical errors in the hematology laboratory of our center. All samples received in the Hematology Laboratory of Dayanand Medical College and Hospital, Ludhiana, India over a period of one year (July 2013-July 2014) were included in the study, and preanalytical variables (clotted sample, quantity not sufficient, wrong sample, without label, and wrong label) were studied. Of 471,006 samples received in the laboratory, preanalytical errors in the above-mentioned categories were found in 1802 samples. The most common error was clotted samples (1332 samples, 0.28% of the total samples), followed by quantity not sufficient (328 samples, 0.06%), wrong sample (96 samples, 0.02%), without label (24 samples, 0.005%) and wrong label (22 samples, 0.005%). Preanalytical errors are frequent in laboratories and can be corrected by regular analysis of the variables involved. Rectification can be achieved by regular education of the staff.

  12. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
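    The sketch below is not the paper's asymptotic derivation but a brute-force version of the same decision-theoretic idea for a two-arm Bernoulli trial: 2n of the N patients are randomized, and the remaining N - 2n all receive whichever arm performed better. The response probabilities and population size are assumed values.

```python
# A minimal sketch (illustrative model and assumed numbers, not the paper's
# method): choose the per-arm trial size n that maximizes expected successes
# across a finite population of size N.
import numpy as np

rng = np.random.default_rng(1)

def expected_successes(n, N, p0, p1, reps=20000):
    x0 = rng.binomial(n, p0, size=reps)        # control-arm successes
    x1 = rng.binomial(n, p1, size=reps)        # experimental-arm successes
    chosen_p = np.where(x1 > x0, p1, p0)       # ties default to control
    return np.mean(x0 + x1 + (N - 2 * n) * chosen_p)

N, p0, p1 = 1000, 0.4, 0.5
best_n = max(range(5, N // 2, 5), key=lambda n: expected_successes(n, N, p0, p1))
print("approximately optimal n per arm:", best_n)
# Repeating this for larger N shows the optimum growing roughly like sqrt(N),
# consistent with the O(N^(1/2)) behaviour derived in the paper.
```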

  13. 46 CFR 76.33-5 - Zoning.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... having a combined volume not exceeding 5,000 cubic feet may be connected on the same zone. (d) Where a space is of such size that one accumulator is not sufficient, not more than two accumulators may be...

  14. 46 CFR 76.33-5 - Zoning.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... having a combined volume not exceeding 5,000 cubic feet may be connected on the same zone. (d) Where a space is of such size that one accumulator is not sufficient, not more than two accumulators may be...

  15. 38 CFR 59.130 - General requirements for all State home facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Veterans Affairs, Office of Regulations Management (02D), Room 1154, 810 Vermont Avenue, NW, Washington, DC... of power must be an on-site emergency standby generator of sufficient size to serve the connected...

  16. Mesoporous metal oxides and processes for preparation thereof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suib, Steven L.; Poyraz, Altug Suleyman

    A process for preparing a mesoporous metal oxide, i.e., a transition metal oxide, a lanthanide metal oxide, a post-transition metal oxide, or a metalloid oxide. The process comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to form the mesoporous metal oxide. A mesoporous metal oxide prepared by the above process. A method of controlling nano-sized wall crystallinity and mesoporosity in mesoporous metal oxides. The method comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to control nano-sized wall crystallinity and mesoporosity in the mesoporous metal oxides. Mesoporous metal oxides and a method of tuning structural properties of mesoporous metal oxides.

  17. Optimal Distinctiveness Signals Membership Trust.

    PubMed

    Leonardelli, Geoffrey J; Loyd, Denise Lewin

    2016-07-01

    According to optimal distinctiveness theory, sufficiently small minority groups are associated with greater membership trust, even among members otherwise unknown, because the groups are seen as optimally distinctive. This article elaborates on the prediction's motivational and cognitive processes and tests whether sufficiently small minorities (defined by relative size; for example, 20%) are associated with greater membership trust relative to mere minorities (45%), and whether such trust is a function of optimal distinctiveness. Two experiments, examining observers' perceptions of minority and majority groups and using minimal groups and (in Experiment 2) a trust game, revealed greater membership trust in minorities than majorities. In Experiment 2, participants also preferred joining minorities over more powerful majorities. Both effects occurred only when minorities were 20% rather than 45%. In both studies, perceptions of optimal distinctiveness mediated effects. Discussion focuses on the value of relative size and optimal distinctiveness, and when membership trust manifests. © 2016 by the Society for Personality and Social Psychology, Inc.

  18. Biobriefcase aerosol collector

    DOEpatents

    Bell, Perry M [Tracy, CA; Christian, Allen T [Madison, WI; Bailey, Christopher G [Pleasanton, CA; Willis, Ladona [Manteca, CA; Masquelier, Donald A [Tracy, CA; Nasarabadi, Shanavaz L [Livermore, CA

    2009-09-22

    A system for sampling air and collecting entrained particles that potentially include bioagents. The system comprises providing a receiving surface, directing a liquid to the receiving surface to produce a liquid surface, and collecting and directing samples of the air so that the entrained particles impact the liquid surface, where particles potentially including bioagents become captured in the liquid. The liquid surface has a surface tension, and the collector samples the air and directs it to the liquid surface so that the air with entrained particles impacts the liquid surface with sufficient velocity to entrain the particles into the liquid while causing only minor turbulence on the surface, resulting in insignificant evaporation of the liquid.

  19. A comparison of English and Japanese taste languages: taste descriptive methodology, codability and the umami taste.

    PubMed

    O'Mahony, M; Ishii, R

    1986-05-01

    Everyday taste descriptions for a range of stimuli were obtained from selected groups of American and Japanese subjects, using a variety of stimuli, stimulus presentation procedures and response conditions. In English there was a tendency to use a quadripartite classification system: 'sweet', 'sour', 'salty' and 'bitter'. The Japanese had a different strategy, adding a fifth label: 'Ajinomoto', referring to the taste of monosodium glutamate. This label was generally replaced by umami--the scientific term--by Japanese who were workers or trained tasters involved with glutamate manufacture. Cultural differences in taste language have consequences for taste psychophysicists who impose a quadripartite restriction on allowable taste descriptions. Stimulus presentation by filter-paper or aqueous solution elicited the same response trends. Language codability was only an indicator of degree of taste mixedness/singularity if used statistically with samples of sufficient size; it had little value as an indicator for individual subjects.

  20. Carbon distribution profiles in lunar fines

    NASA Technical Reports Server (NTRS)

    Hart, R. K.

    1977-01-01

    Radial distribution profiles of elemental carbon in lunar soils consisting of particles in the size range of 50 to 150 microns were investigated. Initial experiments on specimen preparation and the analysis of prepared specimens by Auger electron spectrometry (AES) and scanning electron microscopy (SEM) are described. Results from splits of samples 61501,84 and 64421,11, which were mounted in various ways in several specimen holders, are presented. A low carbon content was observed in AES spectra from soil particles that were subjected to sputter-ion cleaning with 960 eV argon ions for periods of time up to a total exposure of one hour. This ion charge was sufficient to remove approximately 70 nm of material from the surface. All of the physically adsorbed carbon (as well as water vapor, etc.) would normally be removed in the first few minutes, leaving only carbon in the specimen and the metal support structure to be detected thereafter.

  1. Implications of clinical trial design on sample size requirements.

    PubMed

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
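    The two issues raised above can be made concrete with a standard normal-approximation sample size formula; the sketch below (effect size, reliability, and number of outcomes are assumed values, not from the article) shows how Bonferroni adjustment for multiple outcomes and attenuation of the standardized effect by outcome unreliability both inflate the required sample size.

```python
# A minimal sketch of the two design penalties described above: Bonferroni
# adjustment for multiple primary outcomes and attenuation of a standardized
# effect by measurement unreliability (observed d ~ true d * sqrt(reliability)).
# The effect size, reliability, and number of outcomes are assumed values.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80, n_outcomes=1, reliability=1.0):
    d_obs = d * reliability ** 0.5                   # attenuated effect size
    alpha_adj = alpha / n_outcomes                   # Bonferroni-adjusted alpha
    z = norm.ppf(1 - alpha_adj / 2) + norm.ppf(power)
    return 2 * (z / d_obs) ** 2

print(round(n_per_arm(d=0.5)))                                  # ideal case: ~63 per arm
print(round(n_per_arm(d=0.5, n_outcomes=3, reliability=0.7)))   # both penalties: ~120 per arm
```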

  2. New constraints on Lyman-α opacity using 92 quasar lines of sight

    NASA Astrophysics Data System (ADS)

    Bosman, Sarah E. I.; Fan, Xiaohui; Jiang, Linhua; Reed, Sophie; Matsuoka, Yoshiki; Becker, George; Rorai, Albert

    2018-05-01

    The large scatter in Lyman-α opacity at z > 5.3 has been an ongoing mystery, prompting a flurry of numerical models. A uniform ultra-violet background has been ruled out at those redshifts, but it is unclear whether any proposed models produce sufficient inhomogeneities. In this paper we provide an update on the measurement which first highlighted the issue: Lyman-α effective optical depth along high-z quasar lines of sight. We nearly triple the sample size of the previous such study thanks to the cooperation of the DES-VHS, SHELLQs, and SDSS collaborations as well as new reductions and spectra. We find that a uniform UVB model is ruled out at 5.1 < z < 5.3, as well as higher redshifts, which is perplexing. We provide the first such measurements at z ~ 6. None of the numerical models we confronted with these data could reproduce the observed scatter.

  3. Formation of mono-layered gold nanoparticles in shallow depth of SiO 2 thin film by low-energy negative-ion implantation

    NASA Astrophysics Data System (ADS)

    Tsuji, H.; Arai, N.; Ueno, K.; Matsumoto, T.; Gotoh, N.; Adachi, K.; Kotaki, H.; Gotoh, Y.; Ishikawa, J.

    2006-01-01

    Mono-layered gold nanoparticles just below the surface of a silicon oxide film have been formed by gold negative-ion implantation at very low energy, where the depth deviation of the implanted atoms was sufficiently narrow compared with the size of the nanoparticles. Gold negative ions were implanted into SiO2 thin films on Si substrates at energies of 35, 15 and 1 keV. The samples were annealed in Ar flow for 1 h at 900 or 1000 °C. Cross-sectional TEM observation for the 1 keV implantation showed Au nanoparticles aligned at the same depth of 5 nm below the surface. The nanoparticles had nearly the same diameter, about 7 nm, and high-resolution TEM images showed them to be single-crystal gold.

  4. Myocardial Viability: From Proof of Concept to Clinical Practice

    PubMed Central

    Tan, Timothy C.; Hsu, Chijen; Denniss, Alan Robert

    2016-01-01

    Ischaemic left ventricular (LV) dysfunction can arise from myocardial stunning, hibernation, or necrosis. Imaging modalities have become front-line methods in the assessment of viable myocardial tissue, with the aim to stratify patients into optimal treatment pathways. Initial studies, although favorable, lacked sufficient power and sample size to provide conclusive outcomes of viability assessment. Recent trials, including the STICH and HEART studies, have failed to demonstrate a prognostic benefit of revascularisation therapy over standard medical management in ischaemic cardiomyopathy. In light of these recent findings, assessment of myocardial viability therefore should not be the sole factor for therapy choice. Optimization of medical therapy is paramount, and physicians should feel comfortable in deferring coronary revascularisation in patients with coronary artery disease with reduced LV systolic function. Newer trials are currently underway and will hopefully provide a more complete understanding of the pathophysiology and management of ischaemic cardiomyopathy. PMID:27313943

  5. X-ray Emission Line Anisotropy Effects on the Isoelectronic Temperature Measurement Method

    NASA Astrophysics Data System (ADS)

    Liedahl, Duane; Barrios, Maria; Brown, Greg; Foord, Mark; Gray, William; Hansen, Stephanie; Heeter, Robert; Jarrott, Leonard; Mauche, Christopher; Moody, John; Schneider, Marilyn; Widmann, Klaus

    2016-10-01

    Measurements of the ratio of analogous emission lines from isoelectronic ions of two elements form the basis of the isoelectronic method of inferring electron temperatures in laser-produced plasmas, with the expectation that atomic modeling errors cancel to first order. Helium-like ions are a common choice in many experiments. Obtaining sufficiently bright signals often requires sample sizes with non-trivial line optical depths. For lines with small destruction probabilities per scatter, such as the 1s2p-1s2 He-like resonance line, repeated scattering can cause a marked angular dependence in the escaping radiation. Isoelectronic lines from near-Z equimolar dopants have similar optical depths and similar angular variations, which leads to a near angular-invariance for their line ratios. Using Monte Carlo simulations, we show that possible ambiguities associated with anisotropy in deriving electron temperatures from X-ray line ratios are minimized by exploiting this isoelectronic invariance.

  6. Application of Fe Isotopes to the Search for Life and Habitable Planets

    NASA Technical Reports Server (NTRS)

    Johnson, Clark M.; Beard, Brian L.; Nealson, Kenneth L.

    2001-01-01

    The relatively new field of Fe isotope geochemistry can make important contributions to tracing the geochemical cycling of Fe, which bears on issues such as metabolic processing of Fe, surface redox conditions, and development of planetary atmospheres and biospheres. It appears that Fe isotope fractionation in nature and the lab spans about 4 per mil (‰) in Fe-56/Fe-54, and although this range is small, our new analytical methods produce a precision of +/- 0.05‰ on sample sizes as small as 100 ng (10^-7 g); this now provides us with a sufficient "signal-to-noise" ratio to make this isotope system useful. We review our work in three areas: 1) the terrestrial and lunar rock record, 2) experiments on inorganic fractionation, and 3) experiments involving biological processing of Fe. Additional information is contained in the original extended abstract.

  7. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS

    PubMed Central

    Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.

    2012-01-01

    In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established, and in particular, its variable selection performance under a diverging number of predictors and sample size is investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388

  8. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270
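    A brute-force version of the Bayesian approach described above can be written in a few lines for the simple model y ~ N(0, h2*K + (1 - h2)*I); the kinship matrix, sample size, total variance fixed at 1, and flat prior below are assumptions for illustration, not the paper's method or cohort data.

```python
# A minimal sketch (simulated toy data): a grid-based posterior for heritability
# h2 under y ~ N(0, h2*K + (1 - h2)*I) with a flat prior on h2.
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Toy "kinship" matrix: normalized cross-products of random genotypes.
G = rng.standard_normal((n, 500))
K = G @ G.T / G.shape[1]

h2_true = 0.4
y = rng.multivariate_normal(np.zeros(n), h2_true * K + (1 - h2_true) * np.eye(n))

# Diagonalizing K lets every grid point be evaluated in O(n).
s, U = np.linalg.eigh(K)
z = U.T @ y

grid = np.linspace(0.01, 0.99, 99)
loglik = np.array([
    -0.5 * np.sum(np.log(h * s + (1 - h)) + z**2 / (h * s + (1 - h)))
    for h in grid
])
post = np.exp(loglik - loglik.max())
post /= post.sum()

mean = np.sum(grid * post)
sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
print(f"posterior mean h2 ~ {mean:.2f} +/- {sd:.2f}")   # the posterior SD makes the uncertainty explicit
```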

  9. Sample size calculations for randomized clinical trials published in anesthesiology journals: a comparison of 2010 versus 2016.

    PubMed

    Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M

    2018-06-01

    Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
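    To make the finding concrete, the sketch below (assumed design values, not data from the surveyed trials) runs the standard two-arm calculation for a continuous outcome and then shows the power actually achieved when the true difference turns out to be 30% smaller than the difference assumed at the design stage.

```python
# A minimal sketch (assumed numbers): the standard two-arm sample size formula for
# a continuous outcome, followed by the power actually achieved if the true
# difference is 30% smaller than the one assumed when the trial was designed.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

def achieved_power(n, delta_true, sd, alpha=0.05):
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta_true / (sd * sqrt(2 / n)) - z_alpha)

n = n_per_arm(delta=10, sd=20)                   # planned: detect a 10-point difference
print(n)                                         # ~63 per arm
print(achieved_power(n, delta_true=7, sd=20))    # optimism costs power: ~0.50 instead of 0.80
```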

  10. Running Out of Time: Why Elephants Don't Gallop

    NASA Astrophysics Data System (ADS)

    Noble, Julian V.

    2001-11-01

    The physics of high-speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size and eventually renders large animals excessively liable to injury when they attempt to gallop. This paper suggests instead that large animals cannot move their limbs sufficiently rapidly to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.

  11. 12 CFR 715.8 - Requirements for verification of accounts and passbooks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... selection: (ii) A sample which is representative of the population from which it was selected; (iii) An equal chance of selecting each dollar in the population; (iv) Sufficient accounts in both number and... consistent with GAAS if such methods provide for: (i) Sufficient accounts in both number and scope on which...

  12. Empirical evaluation of sufficient similarity in dose-response for environmental risk assessment of a mixture of 11 pyrethroids.

    EPA Science Inventory

    Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...

  13. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  14. Impact of sample collection participation on the validity of estimated measures of association in the National Birth Defects Prevention Study when assessing gene-environment interactions.

    PubMed

    Jenkins, Mary M; Reefhuis, Jennita; Herring, Amy H; Honein, Margaret A

    2017-12-01

    To better understand the impact that nonresponse for specimen collection has on the validity of estimates of association, we examined associations between self-reported maternal periconceptional smoking, folic acid use, or pregestational diabetes mellitus and six birth defects among families who did and did not submit buccal cell samples for DNA following a telephone interview as part of the National Birth Defects Prevention Study (NBDPS). Analyses included control families with live born infants who had no birth defects (N = 9,465), families of infants with anorectal atresia or stenosis (N = 873), limb reduction defects (N = 1,037), gastroschisis (N = 1,090), neural tube defects (N = 1,764), orofacial clefts (N = 3,836), or septal heart defects (N = 4,157). Estimated dates of delivery were between 1997 and 2009. For each exposure and birth defect, odds ratios and 95% confidence intervals were calculated using logistic regression stratified by race-ethnicity and sample collection status. Tests for interaction were applied to identify potential differences between estimated measures of association based on sample collection status. Significant differences in estimated measures of association were observed in only four of 48 analyses with sufficient sample sizes. Despite lower than desired participation rates in buccal cell sample collection, this validation provides some reassurance that the estimates obtained for sample collectors and noncollectors are comparable. These findings support the validity of observed associations in gene-environment interaction studies for the selected exposures and birth defects among NBDPS participants who submitted DNA samples. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  15. Method for producing size selected particles

    DOEpatents

    Krumdick, Gregory K.; Shin, Young Ho; Takeya, Kaname

    2016-09-20

    The invention provides a system for preparing specific sized particles, the system comprising a continuous stir tank reactor adapted to receive reactants; a centrifugal dispenser positioned downstream from the reactor and in fluid communication with the reactor; a particle separator positioned downstream of the dispenser; and a solution stream return conduit positioned between the separator and the reactor. Also provided is a method for preparing specific sized particles, the method comprising introducing reagent into a continuous stir reaction tank and allowing the reagents to react to produce product liquor containing particles; contacting the liquor particles with a centrifugal force for a time sufficient to generate particles of a predetermined size and morphology; and returning unused reagents and particles of a non-predetermined size to the tank.

  16. Correcting speckle contrast at small speckle size to enhance signal to noise ratio for laser speckle contrast imaging.

    PubMed

    Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng

    2013-11-18

    In laser speckle contrast imaging, it has usually been suggested that speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing the signal-to-noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that local speckle contrast, even at a speckle size much smaller than one pixel, can be corrected by dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal-to-noise ratio of the speckle contrast image at a speckle size below 0.5 pixels than at a speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity, high accuracy, and a high signal-to-noise ratio, making laser speckle contrast imaging more flexible.
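    A sketch of the correction described above, under the assumption that a static scatterer is imaged with the same optics: local spatial speckle contrast K = sigma/mu is computed in a sliding window and divided by the static-sample contrast. The window size and the synthetic frames are placeholders, not the authors' data.

```python
# A minimal sketch (assumed inputs): local spatial speckle contrast and the
# correction described above, i.e. dividing the raw contrast map by the contrast
# map of a static scattering sample recorded with the same optical setup.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return std / mean

def corrected_contrast(raw_frame, static_frame, win=7):
    return speckle_contrast(raw_frame, win) / speckle_contrast(static_frame, win)

# Stand-in frames (gamma-distributed intensities mimic speckle statistics):
rng = np.random.default_rng(0)
raw = rng.gamma(shape=4.0, scale=50.0, size=(128, 128))      # dynamic (blurred) speckle
static = rng.gamma(shape=1.0, scale=200.0, size=(128, 128))  # fully developed static speckle
print(corrected_contrast(raw, static).mean())
```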

  17. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
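    In the spirit of the simulations above (normal data and an assumed effect of two population standard deviations, i.e. a fairly strong effect; not the authors' simulation design), the sketch below estimates the Type I and Type II error of the two-sample t-test at p = 0.05 as the per-group sample size grows from 3 to 9.

```python
# A minimal sketch (assumed normal data and effect size): Monte Carlo estimate of
# the Type I and Type II error of the two-sample t-test at p = 0.05 for small n.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
effect = 2.0          # assumed shift, in units of the population SD
reps = 5000

for n in range(3, 10):
    null = np.mean([ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < 0.05
                    for _ in range(reps)])
    alt = np.mean([ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue < 0.05
                   for _ in range(reps)])
    print(f"n={n}  Type I ~ {null:.3f}  Type II ~ {1 - alt:.3f}")
```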

  18. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  19. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
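    As a minimal illustration of the first study's simulation idea (all parameters are assumed, and scikit-learn and SciPy are used for the clustering), the sketch below generates binary code profiles for two latent groups of 25 participants each and scores how well K-means and hierarchical clustering recover the grouping.

```python
# A minimal sketch (simulated data, assumed parameters): binary code profiles from
# two latent groups, clustered with K-means and average-linkage hierarchical
# clustering, scored against the true grouping for a sample as small as 50.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
n, n_codes = 50, 20
truth = np.repeat([0, 1], n // 2)
p = np.where(truth[:, None] == 0, 0.2, 0.6)        # per-group code-endorsement probabilities
X = rng.binomial(1, p, size=(n, n_codes)).astype(float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hc = fcluster(linkage(X, method="average"), t=2, criterion="maxclust")

print("K-means ARI:", round(adjusted_rand_score(truth, km), 2))
print("hierarchical ARI:", round(adjusted_rand_score(truth, hc), 2))
```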

  20. Aerosol detection efficiency in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Hubbard, Joshua A.; Zigmond, Joseph A.

    2016-05-01

    An electrostatic size classification technique was used to segregate particles of known composition prior to injection into an inductively coupled plasma mass spectrometer (ICP-MS). Size-segregated particles were counted with a condensation nuclei counter as well as sampled with an ICP-MS. By injecting particles of known size, composition, and aerosol concentration into the ICP-MS, order-of-magnitude aerosol detection efficiencies were calculated, and the particle size dependencies for volatile and refractory species were quantified. Similar to laser ablation ICP-MS, aerosol detection efficiency was defined as the rate at which atoms were detected in the ICP-MS normalized by the rate at which atoms were injected in the form of particles. This method adds valuable insight into the development of technologies like laser ablation ICP-MS where aerosol particles (of relatively unknown size and gas concentration) are generated during ablation and then transported into the plasma of an ICP-MS. In this study, we characterized aerosol detection efficiencies of the volatile species gold and silver along with the refractory species aluminum oxide, cerium oxide, and yttrium oxide. Aerosols were generated with electrical mobility diameters ranging from 100 to 1000 nm. In general, it was observed that refractory species had lower aerosol detection efficiencies than volatile species, and there were strong dependencies on particle size and plasma torch residence time. Volatile species showed a distinct transition point at which aerosol detection efficiency began decreasing with increasing particle size. This critical diameter indicated the largest particle size for which complete particle detection should be expected and agreed with theories published in other works. Aerosol detection efficiencies also displayed power law dependencies on particle size. Aerosol detection efficiencies ranged from 10^-5 to 10^-11. Free molecular heat and mass transfer theory was applied, but evaporative phenomena were not sufficient to explain the dependence of aerosol detection on particle diameter. Additional work is needed to correlate experimental data with theory for metal oxides, where thermodynamic property data are sparse relative to pure elements. Lastly, when matrix effects and the diffusion of ions inside the plasma were considered, mass loading was concluded to have had an effect on the dependence of detection efficiency on particle diameter.
