Sample records for equal sample sizes

  1. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that the relative efficiency we define is greater than the relative efficiency in the literature under some conditions. 
Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
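The adjustment described above can be illustrated with the widely used coefficient-of-variation design effect, DEFF = 1 + ((1 + CV²)·m̄ − 1)·ρ. This is a hedged stand-in for the authors' noncentrality-parameter measure, not their method, and the effect size, ICC, and cluster-size values are illustrative:

```python
from math import ceil
from statistics import NormalDist

def clusters_required(delta, sigma, rho, mean_m, cv_m, alpha=0.05, power=0.8):
    """Approximate clusters per arm for a two-arm CRT with variable cluster sizes.

    Uses the common design-effect adjustment DEFF = 1 + ((1 + cv^2) * m - 1) * rho
    (a normal-approximation sketch, not the noncentrality-based measure above).
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_individual = 2 * (z * sigma / delta) ** 2          # per arm, individual randomisation
    deff = 1 + ((1 + cv_m ** 2) * mean_m - 1) * rho      # inflation for clustering + size variation
    return ceil(n_individual * deff / mean_m)            # clusters per arm

# Equal cluster sizes (cv = 0) need fewer clusters than variable sizes (cv = 0.4).
k_equal = clusters_required(delta=0.3, sigma=1.0, rho=0.05, mean_m=20, cv_m=0.0)
k_unequal = clusters_required(delta=0.3, sigma=1.0, rho=0.05, mean_m=20, cv_m=0.4)
```

With a fixed mean cluster size, increasing the coefficient of variation of cluster sizes only ever increases the required number of clusters, mirroring the efficiency loss the abstract describes.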

  2. A Comparison of the Exact Kruskal-Wallis Distribution to Asymptotic Approximations for All Sample Sizes up to 105

    ERIC Educational Resources Information Center

    Meyer, J. Patrick; Seaman, Michael A.

    2013-01-01

    The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
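For very small groups the exact Kruskal-Wallis null distribution can be enumerated directly and compared with the chi-square tail; a minimal sketch for three groups of two (with three groups the approximation has 2 degrees of freedom, whose survival function has the closed form e^(-h/2)):

```python
from itertools import permutations
from math import exp

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H computed from already-ranked data (no ties)."""
    n = sum(len(g) for g in groups)
    grand = (n + 1) / 2
    ss = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return 12 / (n * (n + 1)) * ss

# Exact null distribution for three groups of 2: all assignments of ranks 1..6
# are equally likely under the null.
hs = [kruskal_wallis_h([p[0:2], p[2:4], p[4:6]]) for p in permutations(range(1, 7))]
h_obs = kruskal_wallis_h([(1, 2), (3, 4), (5, 6)])     # most extreme separation
exact_p = sum(h >= h_obs - 1e-9 for h in hs) / len(hs)
chisq_p = exp(-h_obs / 2)   # chi-square survival with df = 2 (closed form)
```

At these tiny sample sizes the chi-square tail probability noticeably exceeds the exact one, which is precisely why the authors compare the exact distribution against asymptotic approximations.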

  3. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    PubMed

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal versus unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas of RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameters (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
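For linear outcomes with an exchangeable working correlation, the relative efficiency of unequal versus equal cluster sizes has a simple closed form in terms of per-cluster information m/(1 + (m − 1)ρ); the sketch below illustrates that special case only (the paper above derives analogous formulas for binary and count outcomes), with illustrative cluster sizes and ρ:

```python
def info(m, rho):
    """Effective information contributed by a cluster of size m (exchangeable corr.)."""
    return m / (1 + (m - 1) * rho)

def relative_efficiency(sizes, rho):
    """Var(equal) / Var(unequal) at the same number of clusters and mean size.

    A sketch of the standard linear-model result; values below 1 quantify the
    efficiency lost to cluster-size variation.
    """
    k = len(sizes)
    mean_m = sum(sizes) / k
    return sum(info(m, rho) for m in sizes) / (k * info(mean_m, rho))

re_unequal = relative_efficiency([10, 20, 30], rho=0.1)   # illustrative sizes
re_equal = relative_efficiency([20, 20, 20], rho=0.1)     # benchmark: RE = 1
```

Because info() is concave in m, averaging over unequal sizes always loses information relative to equal sizes, so RE ≤ 1 with equality only when all clusters match.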

  4. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules; it was also more likely to do so 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  5. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…

  6. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  7. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must make a decision between prioritizing overall representativeness or precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample sizes for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, from a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, whereas the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order: equal sample sizes for each stratum; allocation proportional to the logarithm, cubic root, and square root of the stratum population; and allocation proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
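The allocation strategies compared above are straightforward to reproduce. The sketch below computes each allocation and the coefficient of variation of the stratum sampling fractions as one possible dispersion index; the stratum populations and overall sample size are illustrative, not taken from the study:

```python
from math import log, sqrt
from statistics import mean, pstdev

def allocate(populations, n, weight):
    """Distribute a preset sample n across strata proportionally to weight(N_h)."""
    w = [weight(p) for p in populations]
    return [n * wi / sum(w) for wi in w]

def fraction_dispersion(populations, sample):
    """Coefficient of variation of the stratum sampling fractions f_h = n_h / N_h."""
    f = [s / p for s, p in zip(sample, populations)]
    return pstdev(f) / mean(f)

pops, n = [1000, 5000, 20000], 600   # illustrative strata and preset sample
strategies = {
    "proportional": lambda p: p,
    "equal": lambda p: 1,
    "sqrt": sqrt,
    "cbrt": lambda p: p ** (1 / 3),
    "log": log,
}
dispersion = {name: fraction_dispersion(pops, allocate(pops, n, w))
              for name, w in strategies.items()}
```

Proportional allocation gives every stratum the same sampling fraction (zero dispersion, best overall representativeness), while equal allocation is the most unequal in sampling fractions; the root and logarithm rules fall in between, which is the trade-off the abstract describes.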

  8. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  9. A Note on Maximized Posttest Contrasts.

    ERIC Educational Resources Information Center

    Williams, John D.

    1979-01-01

Hollingsworth recently presented a posttest contrast for analysis-of-variance situations that, for equal sample sizes, has several favorable qualities. However, for unequal sample sizes, the contrast fails to achieve status as a maximized contrast; thus, separate testing of the contrast is required. (Author/GSK)

  10. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

Background/Aims: We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods: We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results: We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion: The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  11. Investigation of specimen size effects by in-situ microcompression of equal channel angular pressed copper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, C.; Frazer, D.; Lupinacci, A.

Here, micropillar compression testing was implemented on Equal Channel Angular Pressed copper samples ranging from 200 nm to 10 µm in side length in order to measure the following mechanical properties: yield strength, the first load drop during plastic deformation (at which there is a subsequent stress decrease with increasing strain), work hardening, and the strain-hardening exponent. Several micropillars containing multiple grains were investigated in a 200 nm grain sample. The effective pillar diameter to grain size ratios, D/d, were measured to be between 1.9 and 27.2. Specimens having D/d ratios between 0.2 and 5 were investigated in a second sample that was annealed at 200 °C for 2 h with an average grain size of 1.3 µm. No yield strength or elastic modulus size effects were observed in specimens in the 200 nm grain size sample. However, work hardening increases with a decrease in critical ratios, and first stress drops occur at much lower stresses for specimens with D/d ratios less than 5. For comparison, bulk tensile testing of both samples was performed, and the yield strength values of all micropillar compression tests for the 200 nm grained sample are in good agreement with the yield strength values of the tensile tests.

  12. Investigation of specimen size effects by in-situ microcompression of equal channel angular pressed copper

    DOE PAGES

    Howard, C.; Frazer, D.; Lupinacci, A.; ...

    2015-09-30

Here, micropillar compression testing was implemented on Equal Channel Angular Pressed copper samples ranging from 200 nm to 10 µm in side length in order to measure the following mechanical properties: yield strength, the first load drop during plastic deformation (at which there is a subsequent stress decrease with increasing strain), work hardening, and the strain-hardening exponent. Several micropillars containing multiple grains were investigated in a 200 nm grain sample. The effective pillar diameter to grain size ratios, D/d, were measured to be between 1.9 and 27.2. Specimens having D/d ratios between 0.2 and 5 were investigated in a second sample that was annealed at 200 °C for 2 h with an average grain size of 1.3 µm. No yield strength or elastic modulus size effects were observed in specimens in the 200 nm grain size sample. However, work hardening increases with a decrease in critical ratios, and first stress drops occur at much lower stresses for specimens with D/d ratios less than 5. For comparison, bulk tensile testing of both samples was performed, and the yield strength values of all micropillar compression tests for the 200 nm grained sample are in good agreement with the yield strength values of the tensile tests.

  13. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    PubMed

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  14. Thermal conductivity measurements of particulate materials: 3. Natural samples and mixtures of particle sizes

    NASA Astrophysics Data System (ADS)

    Presley, Marsha A.; Craddock, Robert A.

    2006-09-01

    A line-heat source apparatus was used to measure thermal conductivities of natural fluvial and eolian particulate sediments under low pressures of a carbon dioxide atmosphere. These measurements were compared to a previous compilation of the dependence of thermal conductivity on particle size to determine a thermal conductivity-derived particle size for each sample. Actual particle-size distributions were determined via physical separation through brass sieves. Comparison of the two analyses indicates that the thermal conductivity reflects the larger particles within the samples. In each sample at least 85-95% of the particles by weight are smaller than or equal to the thermal conductivity-derived particle size. At atmospheric pressures less than about 2-3 torr, samples that contain a large amount of small particles (<=125 μm or 4 Φ) exhibit lower thermal conductivities relative to those for the larger particles within the sample. Nonetheless, 90% of the sample by weight still consists of particles that are smaller than or equal to this lower thermal conductivity-derived particle size. These results allow further refinement in the interpretation of geomorphologic processes acting on the Martian surface. High-energy fluvial environments should produce poorer-sorted and coarser-grained deposits than lower energy eolian environments. Hence these results will provide additional information that may help identify coarser-grained fluvial deposits and may help differentiate whether channel dunes are original fluvial sediments that are at most reworked by wind or whether they represent a later overprint of sediment with a separate origin.

  15. Chance-corrected classification for use in discriminant analysis: Ecological applications

    USGS Publications Warehouse

    Titus, K.; Mosher, J.A.; Williams, B.K.

    1984-01-01

    A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
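The kappa statistic referred to above is the standard chance-corrected agreement measure for a classification table, kappa = (p_observed − p_chance)/(1 − p_chance); a minimal sketch with an illustrative 2×2 table:

```python
def cohens_kappa(table):
    """Chance-corrected agreement for a square classification table.

    p_chance is the agreement expected from the row and column margins alone,
    so kappa = 0 means no better than chance and kappa = 1 means perfect.
    """
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    p_chance = sum(r * c for r, c in zip(rows, cols)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Illustrative table: 70% raw accuracy, but only kappa = 0.4 after removing chance.
kappa = cohens_kappa([[20, 5], [10, 15]])
```

With dissimilar group sample sizes the chance-agreement term p_chance grows, which is exactly why the abstract notes that chance correction matters more as group sizes diverge.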

  16. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.

  17. Test equality between two binary screening tests with a confirmatory procedure restricted on screen positives.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2015-01-01

    In studies of screening accuracy, we may commonly encounter the data in which a confirmatory procedure is administered to only those subjects with screen positives for ethical concerns. We focus our discussion on simultaneously testing equality of sensitivity and specificity between two binary screening tests when only subjects with screen positives receive the confirmatory procedure. We develop four asymptotic test procedures and one exact test procedure. We derive sample size calculation formula for a desired power of detecting a difference at a given nominal [Formula: see text]-level. We employ Monte Carlo simulation to evaluate the performance of these test procedures and the accuracy of the sample size calculation formula developed here in a variety of situations. Finally, we use the data obtained from a study of the prostate-specific-antigen test and digital rectal examination test on 949 Black men to illustrate the practical use of these test procedures and the sample size calculation formula.

  18. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
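The large-sample (normal-approximation) version of the two-sample equality formula is easy to sketch; the noncentral-t formulas derived in the paper above add a small correction for the estimated variance, so this approximation slightly understates the required n. The effect size and SD below are illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Per-group n for a two-sample test of mean equality, normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z * sigma / delta) ** 2)

# Detecting a half-SD difference with 80% power at the two-sided 5% level.
n = n_per_group(delta=0.5, sigma=1.0)
```

The exact noncentral-t calculation for the same scenario gives a marginally larger n (64 rather than 63 in standard tables), which is the correction the paper formalizes.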

  19. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  20. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-means Cluster Analysis.

    PubMed

    Craen, Saskia de; Commandeur, Jacques J F; Frank, Laurence E; Heiser, Willem J

    2006-06-01

K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted, in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in the samples taken from these populations showed a significant effect of lack of sphericity and group size. This effect was, however, not as large as expected: even in the "worst case scenario," the recovery index exceeded 0.5. An interaction effect between the two data aspects was also found. The decreasing trend in the recovery of clusters for increasing departures from sphericity is different for equal and unequal group sizes.
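Cluster recovery can be quantified with a pair-counting index. The sketch below uses the plain Rand index as a stand-in; the study's recovery index may be a corrected variant such as the adjusted Rand index, and the labelings are illustrative:

```python
from itertools import combinations

def rand_index(labels_true, labels_found):
    """Pair-counting agreement between two partitions (1.0 = identical).

    A pair of items agrees if both partitions put them together, or both
    keep them apart; the index is the proportion of agreeing pairs.
    """
    agree = 0
    pairs = list(combinations(range(len(labels_true)), 2))
    for i, j in pairs:
        same_true = labels_true[i] == labels_true[j]
        same_found = labels_found[i] == labels_found[j]
        agree += same_true == same_found
    return agree / len(pairs)

# Relabelled but structurally identical partition: recovery is perfect.
perfect = rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Because the index depends only on co-membership, it is invariant to how cluster labels are numbered, which is essential when comparing K-means output to a known population structure.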

  1. The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?

    PubMed

    Ahern, James C M; Lee, Sang-Hee; Hawks, John D

    2002-09-01

The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. 
Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
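The sampling logic described above, repeatedly drawing samples of a fixed size and asking how often their mean is at least as extreme as the observed one, can be sketched as a seeded Monte Carlo procedure. The measurement values and sample size below are purely hypothetical, not the study's data:

```python
import random

def p_mean_at_most(population, n, target_mean, draws=20000, seed=0):
    """Monte Carlo probability that an n-sized sample (without replacement)
    from `population` has a mean less than or equal to `target_mean`.

    A sketch of the resampling test described above, with a fixed seed
    so the estimate is reproducible.
    """
    rng = random.Random(seed)
    hits = sum(
        sum(rng.sample(population, n)) / n <= target_mean
        for _ in range(draws)
    )
    return hits / draws

# Hypothetical "Krapina-like" measurements: how often would a sample of 5
# look as small as a hypothetical "Vindija-like" mean of 25?
krapina_like = [22, 24, 25, 26, 27, 28, 29, 30, 31, 33]
p = p_mean_at_most(krapina_like, n=5, target_mean=25)
```

A small probability here would argue against explaining the between-site difference as a mere sampling accident, which is the form of the hypothesis tests in the abstract.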

  2. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
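The Mann-Whitney estimate of the AUC mentioned above is simply the proportion of negative-positive pairs in which the positive case scores higher, with ties counting one half; the scores below are illustrative:

```python
def auc_mann_whitney(neg, pos):
    """Nonparametric AUC: probability a positive case outscores a negative one.

    Equivalent to the Mann-Whitney U statistic rescaled to [0, 1];
    tied scores contribute one half.
    """
    wins = sum((x < y) + 0.5 * (x == y) for x in neg for y in pos)
    return wins / (len(neg) * len(pos))

# Two of nine negative-positive pairs are misordered here.
auc = auc_mann_whitney(neg=[0.1, 0.4, 0.35], pos=[0.8, 0.3, 0.45])
```

Interval construction then amounts to attaching a variance estimate or resampling distribution to this point estimate, which is where the 29 methods compared in the paper differ.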

  3. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
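The translation described above can be sketched for logistic regression: form two equal groups whose linear predictors sit at the covariate mean plus and minus one standard deviation (so the logits differ by the slope times twice the SD), then apply a standard two-proportion sample size formula. This sketch omits the paper's additional requirement that the overall expected number of events be unchanged, and all parameter values are illustrative:

```python
from math import ceil, exp, sqrt
from statistics import NormalDist

def expit(eta):
    """Inverse logit."""
    return 1 / (1 + exp(-eta))

def n_total_logistic(beta0, beta1, mu, sd, alpha=0.05, power=0.8):
    """Approximate total n for testing beta1 = 0 in simple logistic regression,
    via an equivalent two-sample problem: two equal groups whose linear
    predictors differ by beta1 * 2 * sd of the covariate.
    """
    p1 = expit(beta0 + beta1 * (mu - sd))
    p2 = expit(beta0 + beta1 * (mu + sd))
    pbar = (p1 + p2) / 2
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    n_group = ((za * sqrt(2 * pbar * (1 - pbar))
                + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
               / (p2 - p1) ** 2)
    return 2 * ceil(n_group)

# Illustrative planning scenario: standardized covariate, log-odds ratio 0.5 per SD.
n = n_total_logistic(beta0=-1.0, beta1=0.5, mu=0.0, sd=1.0)
```

The appeal of the reduction is exactly this: once the two group probabilities are identified, any familiar two-sample formula (and its intuition) applies to the regression problem.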

  4. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
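
The final step of the modified approach feeds a two-sample comparison with unequal variances and unequal group sizes into a sample size formula. A hedged sketch of that building block, using the plain normal-approximation formula rather than Schouten's small-sample-corrected version; all defaults are illustrative.

```python
import math
from statistics import NormalDist

def n_unequal_var_ttest(delta, sd1, sd2, ratio=1.0, alpha=0.05, power=0.80):
    """Normal-approximation sample sizes for comparing two means with
    unequal variances, with group sizes n2 = ratio * n1.
    Note: Schouten's formula adds a small-sample correction term that
    is omitted here."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n1 = (sd1 ** 2 + sd2 ** 2 / ratio) * (z_a + z_b) ** 2 / delta ** 2
    return math.ceil(n1), math.ceil(ratio * n1)
```

With equal unit variances and a difference of 0.5 SD this reproduces the familiar 63 per group at 80% power.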

  5. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    de Craen, Saskia; Commandeur, Jacques J. F.; Frank, Laurence E.; Heiser, Willem J.

    2006-01-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted, in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in the samples taken from these…

  6. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).


  7. Enhancing the Damping Behavior of Dilute Zn-0.3Al Alloy by Equal Channel Angular Pressing

    NASA Astrophysics Data System (ADS)

    Demirtas, M.; Atli, K. C.; Yanar, H.; Purcek, G.

    2017-06-01

    The effect of grain size on the damping capacity of a dilute Zn-0.3Al alloy was investigated. It was found that there was a critical strain value (≈1 × 10⁻⁴) below and above which damping of Zn-0.3Al showed dynamic and static/dynamic hysteresis behavior, respectively. In the dynamic hysteresis region, damping resulted from viscous sliding of phase/grain boundaries, and decreasing grain size increased the damping capacity. While the quenched sample with 100 to 250 µm grain size showed very limited damping capacity with a loss factor tanδ of less than 0.007, decreasing the grain size down to 2 µm by equal channel angular pressing (ECAP) increased tanδ to 0.100 in this region. Dynamic recrystallization due to microplasticity at the sample surface was proposed as the damping mechanism for the first time in the region where the alloy showed the combined aspects of dynamic and static hysteresis damping. In this region, tanδ increased with increasing strain amplitude, and the ECAPed sample showed a tanδ value of 0.256 at a strain amplitude of 2 × 10⁻³, the highest recorded so far in damping-capacity studies on ZA alloys.

  8. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. ( 2003 ) proposed a method for the statistical inference of fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect verse the control effect in the context of a time to event endpoint. One of the major concerns using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This makes an NI trial not applicable particularly when using time to event endpoint. To improve power, Wang et al. ( 2006 ) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced if using the proposed ratio test for a fraction retention NI hypothesis.

  9. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Particle size distribution characteristics of cotton gin battery condenser system total particulate emissions

    USDA-ARS?s Scientific Manuscript database

    This report is part of a project to characterize cotton gin emissions from the standpoint of total particulate stack sampling and particle size analyses. In 2013, EPA published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created a...

  11. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
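
The two adjustments reported above, roughly 14 per cent more clusters to repair the loss from varying cluster sizes and an MQL-to-PQL conversion factor of at most 1.25, can be chained into a single back-of-the-envelope calculation. The default relative efficiency of 0.88 below is simply the illustrative value implied by "14 per cent more clusters", not a formula from the paper.

```python
import math

def clusters_needed(k_equal, re_unequal=0.88, pql_factor=1.25):
    """Back-of-the-envelope cluster count: start from the number of
    clusters k_equal given by a first-order MQL formula with equal
    cluster sizes, divide by the relative efficiency of unequal vs
    equal cluster sizes, and inflate by the MQL-to-PQL variance
    conversion factor (at most 1.25 in the paper's simulations)."""
    return math.ceil(k_equal * pql_factor / re_unequal)
```

For instance, a design calling for 30 equal-sized clusters under first-order MQL would be planned with 43 clusters under these worst-case adjustments.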

  12. The sample handling system for the Mars Icebreaker Life mission: from dirt to data.

    PubMed

    Davé, Arwen; Thompson, Sarah J; McKay, Christopher P; Stoker, Carol R; Zacny, Kris; Paulsen, Gale; Mellerowicz, Bolek; Glass, Brian J; Willson, David; Bonaccorsi, Rosalba; Rask, Jon

    2013-04-01

    The Mars Icebreaker Life mission will search for subsurface life on Mars. It consists of three payload elements: a drill to retrieve soil samples from approximately 1 m below the surface, a robotic sample handling system to deliver the sample from the drill to the instruments, and the instruments themselves. This paper will discuss the robotic sample handling system. Collecting samples from ice-rich soils on Mars in search of life presents two challenges: protection of that icy soil--considered a "special region" with respect to planetary protection--from contamination from Earth, and delivery of the icy, sticky soil to spacecraft instruments. We present a sampling device that meets these challenges. We built a prototype system and tested it at martian pressure, drilling into ice-cemented soil, collecting cuttings, and transferring them to the inlet port of the SOLID2 life-detection instrument. The tests successfully demonstrated that the Icebreaker drill, sample handling system, and life-detection instrument can collectively operate in these conditions and produce science data that can be delivered via telemetry--from dirt to data. Our results also demonstrate the feasibility of using an air gap to prevent forward contamination. We define a set of six analog soils for testing over a range of soil cohesion, from loose sand to basalt soil, with angles of repose of 27° and 39°, respectively. Particle size is a key determinant of jamming of mechanical parts by soil particles. Jamming occurs when the clearance between moving parts is equal in size to the most common particle size or equal to three of these particles together. Three particles acting together tend to form bridges and lead to clogging. Our experiments show that rotary-hammer action of the Icebreaker drill influences the particle size, typically reducing particle size by ≈ 100 μm.

  13. Spatial studies of planetary nebulae with IRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawkins, G.W.; Zuckerman, B.

    1991-06-01

    The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher.

  14. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    PubMed

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected.
An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
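
As a rough illustration of why the factorial design needs several times the combination trial's sample size, consider a normal-outcome simplification (the trial itself uses a survival endpoint): in a balanced 2 × 2 factorial with no interaction, each drug's marginal test compares half the trial to the other half, and the two marginal tests are approximately independent, so the joint power is roughly the product of the marginal powers. All numbers in the sketch are illustrative.

```python
from statistics import NormalDist

def factorial_joint_power(n_total, dA, dB, sd=1.0, alpha=0.05):
    """Rough joint power of a balanced 2x2 factorial to detect both
    main effects with a continuous outcome. Each margin compares
    n_total/2 vs n_total/2 patients; with no interaction the marginal
    tests are approximately independent, so joint power is roughly
    the product of the marginal powers."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)

    def marginal_power(d):
        se = sd * (4.0 / n_total) ** 0.5  # SE of a mean difference, n/2 per margin group
        return NormalDist().cdf(d / se - z_a)

    return marginal_power(dA) * marginal_power(dB)
```

With 800 patients and standardized effects of 0.2 for each drug, each margin has about 81% power but the joint power is only about 65%, so a substantially larger trial is needed to detect both effects jointly.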

  15. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    PubMed

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage of a CRXO design over a parallel-group cluster randomised trial.
Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. By illustrating how the parameters required for sample size calculations arise from the CRXO design and by providing guidance on both how to choose values for the parameters and perform the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve.
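
A sketch of how the WPC and BPC enter a CRXO sample size calculation for a continuous outcome, using the commonly cited design effect 1 + (m − 1)·WPC − m·BPC for the two-period cross-sectional design. The formula is stated here as an assumption for illustration; consult the tutorial itself for binary outcomes and other variants.

```python
import math
from statistics import NormalDist

def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
    """Approximate total clusters for a two-period, two-intervention,
    cross-sectional CRXO trial with a continuous outcome. The
    individually randomised two-sample size per arm is inflated by the
    assumed design effect 1 + (m - 1)*wpc - m*bpc, where m is the
    number of subjects per cluster-period; each cluster contributes m
    subjects to each arm because it receives both interventions."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_per_arm = 2 * (z * sd / delta) ** 2          # individually randomised size
    design_effect = 1 + (m - 1) * wpc - m * bpc
    return math.ceil(n_per_arm * design_effect / m)
```

Setting bpc = 0 recovers the parallel cluster-trial design effect 1 + (m − 1)·wpc, matching the abstract's remark that a zero BPC removes the crossover advantage; increasing bpc toward wpc shrinks the required number of clusters.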

  16. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  17. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  18. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  19. 40 CFR Appendix A to Subpart E of... - Interim Transmission Electron Microscopy Analytical Methods-Mandatory and Nonmandatory-and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pore size less than or equal to 0.45 µm. 6. Place these filters in series with a 5.0 µm backup filter... for not more than 30 seconds and replacing it at the time of sampling before sampling is initiated at.... Ensure that the sampler is turned upright before interrupting the pump flow. 21. Check that all samples...

  20. Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations

    Treesearch

    A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield

    2010-01-01

    Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max–Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...

  1. Effect of MeV electron irradiation on the free volume of polyimide

    NASA Astrophysics Data System (ADS)

    Alegaonkar, P. S.; Bhoraskar, V. N.

    2004-08-01

    The free volume of the microvoids in polyimide samples irradiated with 6 MeV electrons was measured by the positron annihilation technique. The free volume initially decreased from the virgin value of ~13.70 Å³ to ~10.98 Å³ and then increased to ~18.11 Å³ with increasing electron fluence, over the range of 5 × 10¹⁴ to 5 × 10¹⁵ e/cm². The evolution of gaseous species from the polyimide during electron irradiation was confirmed by the residual gas analysis technique. The polyimide samples irradiated with 6 MeV electrons in AgNO3 solution were studied with the Rutherford backscattering technique. The diffusion of silver in these polyimide samples was observed for fluences > 2 × 10¹⁵ e/cm², at which microvoids of size ≥ 3 Å are produced. Silver atoms did not diffuse into the polyimide samples that were first irradiated with electrons and then immersed in AgNO3 solution. These results indicate that during electron irradiation the microvoids of size ≥ 3 Å were retained in the surface region, through which silver atoms of size ~2.88 Å could diffuse into the polyimide. The average depth of diffusion of silver atoms in the polyimide was ~2.5 μm.

  2. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
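
The subsampling effect described above is easy to reproduce in a toy simulation: fit a log-log regression with a mildly allometric slope and count how often a sample of a given size rejects isometry (slope = 1). Every parameter below (slope 1.1, noise level, size range) is illustrative, not taken from the Alligator dataset.

```python
import math
import random

def detect_allometry(n, slope=1.1, noise=0.05, reps=500, seed=1):
    """Fraction of simulated samples of size n in which a crude z-test
    on the OLS slope of a log-log regression rejects isometry
    (H0: slope = 1) at the two-sided 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.uniform(0.0, 1.0) for _ in range(n)]        # log body size
        ys = [slope * x + rng.gauss(0.0, noise) for x in xs]  # log trait size
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        a = my - b * mx
        sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
        if abs(b - 1.0) / se > 1.96:
            hits += 1
    return hits / reps
```

Under these settings detection is near-certain at n = 50 but unreliable at n = 8, mirroring the Type II error pattern the authors report for small fossil samples.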

  3. Experimental investigation of inhomogeneities, nanoscopic phase separation, and magnetism in arc melted Fe-Cu metals with equal atomic ratio of the constituents

    NASA Astrophysics Data System (ADS)

    Hassnain Jaffari, G.; Aftab, M.; Anjum, D. H.; Cha, Dongkyu; Poirier, Gerald; Ismat Shah, S.

    2015-12-01

    Composition gradient and phase separation at the nanoscale have been investigated in arc-melted and solidified equiatomic Fe-Cu. Diffraction studies revealed that Fe and Cu exhibited phase separation with no trace of any mixing. Microscopy studies revealed that immiscible Fe and Cu form a dense bulk nanocomposite. The spatial distribution of Fe and Cu showed the existence of two distinct regions, i.e., Fe-rich and Cu-rich regions. Fe-rich regions have Cu precipitates of various sizes and different shapes, with Fe forming meshes or channels greater than 100 nm in size. On the other hand, the matrix of the Cu-rich regions formed strips with fine strands of nanosized Fe. The macromagnetic response of the system showed ferromagnetic behavior, with a magnetic moment of about 2.13 μB/Fe atom and a bulk-like, negligible coercivity over the temperature range of 5-300 K. The anisotropy constant has been calculated from various laws of approach to saturation, and its value is 1350 J/m³. Inhomogeneous strain within the Cu and Fe crystallites has been calculated for the (unannealed) sample solidified after arc-melting. The annealed sample also exhibited local inhomogeneity, with removal of inhomogeneous strain and no appreciable change in magnetic character. However, in the annealed sample the phase-separated Fe exhibited homogeneous strain.

  4. Reduction of Racial Disparities in Prostate Cancer

    DTIC Science & Technology

    2005-12-01

    erectile dysfunction, and female sexual dysfunction). Wherever possible, the questions and scales employed on BACH were selected from published...Methods. A racially and ethnically diverse community-based survey of adults aged 30-79 years in Boston, Massachusetts. The BACH survey has...recruited adults in three racial/ethnic groups: Latino, African American, and White using a stratified cluster sample. The target sample size is equally

  5. In vitro and in vivo studies of biodegradable fine grained AZ31 magnesium alloy produced by equal channel angular pressing.

    PubMed

    Ratna Sunil, B; Sampath Kumar, T S; Chakkingal, Uday; Nandakumar, V; Doble, Mukesh; Devi Prasad, V; Raghunath, M

    2016-02-01

    The objective of the present work is to investigate the role of different grain sizes produced by equal channel angular pressing (ECAP) on the degradation behavior of magnesium alloy using in vitro and in vivo studies. Commercially available AZ31 magnesium alloy was selected and processed by ECAP at 300°C for up to four passes using route Bc. Grain refinement from a starting size of 46 μm to a grain size distribution of 1-5 μm was successfully achieved after the 4th pass. Wettability of ECAPed samples, assessed by contact angle measurements, was found to increase due to the fine grain structure. In vitro degradation and bioactivity of the samples, studied by immersion in super saturated simulated body fluid (SBF 5×), showed rapid mineralization within 24 h due to the increased wettability of the fine grained AZ31 Mg alloy. Corrosion behavior of the samples, assessed by weight loss and electrochemical tests conducted in SBF 5×, clearly showed the prominent role of enhanced mineral deposition on ECAPed AZ31 Mg in controlling abnormal degradation. Cytotoxicity studies by MTT colorimetric assay showed that all the samples were viable. Additionally, cell adhesion was excellent for ECAPed samples, particularly the 3rd and 4th pass samples. In vivo experiments conducted using New Zealand White rabbits clearly showed a lower degradation rate for the ECAPed sample compared with the annealed AZ31 Mg alloy, and all the samples showed biocompatibility; no health abnormalities were noticed in the animals after 60 days of in vivo studies. These results suggest that grain size plays an important role in degradation management of magnesium alloys and that the ECAP technique can be adopted to achieve fine grain structures for developing degradable magnesium alloys for biomedical applications. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. 
Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  7. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. 
An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
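The 750-patient figure for the combination trial rests on an events-based calculation for a 30% hazard reduction. As a rough illustration (not the trial's actual protocol code), a Schoenfeld-type approximation gives the number of deaths needed; the total sample size then depends on the event probability and follow-up, which the abstract does not state:

```python
from math import log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld approximation: events needed to detect hazard ratio
    `hr` with a two-sided log-rank test and 1:1 allocation."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return (z_a + z_b) ** 2 / (0.25 * log(hr) ** 2)

d = required_events(0.70)  # 30% reduction in the hazard of death
```

With a high event probability, as in tuberculous meningitis, the roughly 250 required deaths translate into a total enrolment of the order reported for the combination trial.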

  8. The Effect of Multicollinearity and the Violation of the Assumption of Normality on the Testing of Hypotheses in Regression Analysis.

    ERIC Educational Resources Information Center

    Vasu, Ellen S.; Elmore, Patricia B.

The effects of the violation of the assumption of normality coupled with the condition of multicollinearity upon the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A Monte Carlo approach was utilized in which three different distributions were sampled for two sample sizes over…

  9. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
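The resubstitution-versus-hold-out contrast discussed above can be illustrated with a deliberately simplified stand-in: a 1-D nearest-class-mean rule rather than the paper's LDA/SVM and SFS/SFFS/PCA pipeline. A stdlib sketch, with all parameters illustrative:

```python
import random
from statistics import fmean

random.seed(0)

def simulate(n_train, n_test=500, mu0=0.0, mu1=3.0, sd=1.0):
    """Train a nearest-class-mean classifier on n_train samples per class
    and report resubstitution vs hold-out accuracy (toy 1-D stand-in)."""
    def draw(mu, n):
        return [random.gauss(mu, sd) for _ in range(n)]

    tr0, tr1 = draw(mu0, n_train), draw(mu1, n_train)
    m0, m1 = fmean(tr0), fmean(tr1)  # estimated class means

    def accuracy(x0, x1):
        correct = sum(abs(x - m0) < abs(x - m1) for x in x0)
        correct += sum(abs(x - m1) < abs(x - m0) for x in x1)
        return correct / (len(x0) + len(x1))

    resub = accuracy(tr0, tr1)                         # tested on training data
    holdout = accuracy(draw(mu0, n_test), draw(mu1, n_test))  # independent test
    return resub, holdout

resub, holdout = simulate(n_train=15)
```

Averaged over many repetitions, resubstitution accuracy is optimistically biased and hold-out accuracy pessimistically biased relative to the infinite-sample classifier, which is the gap the simulation study quantifies for realistic feature dimensionalities.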

  10. Monitoring Earth's Shortwave Reflectance: GEO Instrument Concept

    NASA Technical Reports Server (NTRS)

    Brageot, Emily; Mercury, Michael; Green, Robert; Mouroulis, Pantazis; Gerwe, David

    2015-01-01

In this paper we present a GEO instrument concept dedicated to monitoring the Earth's global spectral reflectance with a high revisit rate. Based on our measurement goals, the ideal instrument needs to be highly sensitive (SNR greater than 100) and to achieve global coverage with spectral sampling (less than or equal to 10 nm) and spatial sampling (less than or equal to 1 km) over a large bandwidth (380-2510 nm) with a revisit time (greater than or equal to 3x/day) sufficient to fully measure the spectral-radiometric-spatial evolution of clouds and confounding factors during daytime. After a brief study of existing instruments and their capabilities, we choose to use a GEO constellation of up to 6 satellites as a platform for this instrument concept in order to achieve the revisit time requirement with a single launch. We derive the main parameters of the instrument and show the above requirements can be fulfilled while retaining an instrument architecture as compact as possible by controlling the telescope aperture size and using a passively cooled detector.

  11. The effects of substrate size, surface area, and density on coat thickness of multi-particulate dosage forms.

    PubMed

    Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B

    2005-01-01

Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The drug presence lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting size of core and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after addition of 0.5% w/w talc to a pellet sample.

  12. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, BAGHOUSE FILTRATION PRODUCTS, BHA GROUP, INC., QP131 FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size of those particles equal to and smalle...

  13. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, BAGHOUSE FILTRATION PRODUCTS, W.L. GORE & ASSOCIATES, INC., L4427 FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size of those particles equal to and smalle...

  14. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, BAGHOUSE FILTRATION PRODUCTS, POLYMER GROUP, INC., DURAPEX PET FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size of those particles equal to and smalle...

  15. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: BAGHOUSE FILTRATION PRODUCTS, W.L. GORE & ASSOCIATES, INC. LYSB3 FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size for particles equal to or smaller than...

  16. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, BAGHOUSE FILTRATION PRODUCTS, TETRATEC PTFE PRODUCTS, TETRATEX 6212 FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size of those particles equal to and smalle...

  17. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: BAGHOUSE FILTRATION PRODUCTS, BWF AMERICA, INC. GRADE 700 MPS POLYESTER FELT FILTER SAMPLE

    EPA Science Inventory

    Baghouse filtration products (BFPs) were evaluated by the Air Pollution Control Technology (APCT) Verification Center. The performance factor verified was the mean outlet particle concentration for the filter fabric as a function of the size for particles equal to or smaller than...

  18. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    NASA Astrophysics Data System (ADS)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing a series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained very promising, low process blanks for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.
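The quoted blank ages follow from the standard relation between fraction modern (F14C) and conventional radiocarbon age; a minimal sketch, where the constant 8033 yr is the Libby mean life:

```python
from math import log

def f14c_to_age(f14c):
    """Conventional radiocarbon age (yr BP) from fraction modern,
    age = -8033 * ln(F14C)."""
    return -8033 * log(f14c)
```

For example, f14c_to_age(0.004) gives roughly 44,000 yr BP and f14c_to_age(0.003) roughly 47,000 yr BP, matching the process blanks reported above.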

  19. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping rule to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
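The conservatism of O'Brien-Fleming at early looks can be seen from one common form of its spending function, the Lan-DeMets approximation (the paper may use a different parameterization); a sketch:

```python
from math import sqrt
from statistics import NormalDist

def obf_spent(t, alpha=0.05):
    """Cumulative two-sided type I error spent by information fraction t
    under the Lan-DeMets O'Brien-Fleming spending function:
    alpha(t) = 2 * (1 - Phi(z_{1-alpha/2} / sqrt(t)))."""
    nd = NormalDist()
    return 2 * (1 - nd.cdf(nd.inv_cdf(1 - alpha / 2) / sqrt(t)))
```

At half the information (t = 0.5) only about 0.006 of the 0.05 budget has been spent, so early rejection is hard and most of the alpha is preserved for the final analysis.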

  20. The Effect of the Multivariate Box-Cox Transformation on the Power of MANOVA.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    Most of the multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are assumed to be negligible when variance-covariance matrices and sample sizes are equal. Therefore, in practice, investigators do not usually attempt to remove non-normality. In this simulation…

  1. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. 
When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.

  2. The Septic Shock 3.0 Definition and Trials: A Vasopressin and Septic Shock Trial Experience.

    PubMed

    Russell, James A; Lee, Terry; Singer, Joel; Boyd, John H; Walley, Keith R

    2017-06-01

The Septic Shock 3.0 definition could alter treatment comparisons in randomized controlled trials in septic shock. Our first hypothesis was that the vasopressin versus norepinephrine comparison and 28-day mortality of patients with Septic Shock 3.0 definition (lactate > 2 mmol/L) differ from vasopressin versus norepinephrine and mortality in Vasopressin and Septic Shock Trial. Our second hypothesis was that there are differences in plasma cytokine levels in Vasopressin and Septic Shock Trial for lactate ≤ 2 versus > 2 mmol/L. Retrospective analysis of randomized controlled trial. Multicenter ICUs. We compared vasopressin-to-norepinephrine group 28- and 90-day mortality in Vasopressin and Septic Shock Trial in lactate subgroups. We measured 39 cytokines to compare patients with lactate ≤ 2 versus > 2 mmol/L. Patients with septic shock with lactate > 2 mmol/L or ≤ 2 mmol/L, randomized to vasopressin or norepinephrine. Concealed vasopressin (0.03 U/min) or norepinephrine infusions. The Septic Shock 3.0 definition would have decreased sample size by about half. The 28- and 90-day mortality rates were 10-12% higher than the original Vasopressin and Septic Shock Trial mortality. There was a significantly (p = 0.028) lower mortality with vasopressin versus norepinephrine in lactate ≤ 2 mmol/L but no difference between treatment groups in lactate > 2 mmol/L. Nearly all cytokine levels were significantly higher in patients with lactate > 2 versus ≤ 2 mmol/L. The Septic Shock 3.0 definition decreased sample size by half and increased 28-day mortality rates by about 10%. Vasopressin lowered mortality versus norepinephrine if lactate was ≤ 2 mmol/L. Patients had higher plasma cytokines in lactate > 2 versus ≤ 2 mmol/L, a brisker cytokine response to infection. 
The Septic Shock 3.0 definition and our findings have important implications for trial design in septic shock.

  3. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    PubMed

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
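For context, the adjusted design effects the authors adapted come from parallel cluster randomised trials, where the inflation from unequal cluster sizes is usually written in terms of the coefficient of variation (cv) of cluster size. A sketch of that parallel-CRT formula (not the SW-CRT-specific adjustment evaluated in the paper):

```python
def design_effect(mean_size, icc, cv=0.0):
    """Design effect for a parallel CRT with unequal cluster sizes:
    DE = 1 + ((cv^2 + 1) * m_bar - 1) * icc.
    cv = 0 recovers the familiar equal-cluster-size formula."""
    return 1 + ((cv ** 2 + 1) * mean_size - 1) * icc

de_equal = design_effect(20, icc=0.05)            # equal clusters
de_unequal = design_effect(20, icc=0.05, cv=0.5)  # moderately variable sizes
```

With a mean cluster size of 20 and ICC of 0.05 (illustrative values), cv = 0.5 inflates the design effect from 1.95 to 2.2; the paper's simulations indicate that applying such inflation to SW-CRTs tends to over-power them.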

  4. Heavy-Element Abundances in Solar Energetic Particle Events

    NASA Technical Reports Server (NTRS)

    Reames, D. V.; Ng, C. K.

    2004-01-01

We survey the relative abundances of elements with 1 ≤ Z ≤ 82 in solar energetic particle (SEP) events observed at 2-10 MeV/amu during nearly 9 years aboard the Wind spacecraft, with special emphasis on enhanced abundances of elements with Z ≥ 34. Abundances of Fe/O again show a bimodal distribution with distinct contributions from impulsive and gradual SEP events as seen in earlier solar cycles. Periods with greatly enhanced abundances of (50 ≤ Z ≤ 56)/O, like those with enhanced (3)He/(4)He, fall prominently in the Fe-rich population of the impulsive SEP events. In a sample of the 39 largest impulsive events, 25 have measurable enhancements in (50 ≤ Z ≤ 56)/O and (76 ≤ Z ≤ 82)/O, relative to coronal values, ranging from approx. 100 to 10,000. By contrast, in a sample of 45 large gradual events the corresponding enhancements vary from approx. 0.2 to 20. However, the magnitude of the heavy-element enhancements in impulsive events is less striking than their strong correlation with the Fe spectral index and flare size, with the largest enhancements occurring in flares with the steepest Fe spectra, the smallest Fe fluence, and the lowest X-ray intensity, as reported here for the first time. Thus it seems that small events with low energy input can produce only steep spectra of the dominant species but accelerate rare heavy elements with great efficiency, probably by selective absorption of resonant waves in the flare plasma. With increased energy input, enhancements diminish, as heavy ions are depleted, and spectra of the dominant species harden.

  5. Refinement of atomic and magnetic structures using neutron diffraction for synthesized bulk and nano-nickel zinc gallate ferrite

    NASA Astrophysics Data System (ADS)

    Ata-Allah, S. S.; Balagurov, A. M.; Hashhash, A.; Bobrikov, I. A.; Hamdy, Sh.

    2016-01-01

The parent NiFe2O4 and Zn/Ga-substituted spinel ferrite powders were prepared by the solid-state reaction technique. As a typical example, the Ni0.7Zn0.3Fe1.5Ga0.5O4 sample was prepared by the sol-gel auto-combustion method with nano-scale crystallite size. X-ray and Mössbauer studies were carried out on the prepared samples. Structure and microstructure properties were investigated using the time-of-flight HRFD instrument at the IBR-2 pulsed reactor over the temperature range 15-473 K. The Rietveld refinement of the neutron diffraction data revealed that all samples possess cubic symmetry corresponding to the space group Fd3m. The cation distribution shows that Ni2+ is a completely inverse spinel ion, while Ga3+ is equally distributed between the A- and B-sublattices. The level of microstrains in the bulk samples was estimated to be very small, while the size of coherently scattered domains is quite large. For the nano-structured sample the domain size is around 120 Å.

  6. Cross-national variation in the size of sex differences in values: effects of gender equality.

    PubMed

    Schwartz, Shalom H; Rubel-Lifschitz, Tammy

    2009-07-01

    How does gender equality relate to men's and women's value priorities? It is hypothesized that, for both sexes, the importance of benevolence, universalism, stimulation, hedonism, and self-direction values increases with greater gender equality, whereas the importance of power, achievement, security, and tradition values decreases. Of particular relevance to the present study, increased gender equality should also permit both sexes to pursue more freely the values they inherently care about more. Drawing on evolutionary and role theories, the authors postulate that women inherently value benevolence and universalism more than men do, whereas men inherently value power, achievement, and stimulation more than women do. Thus, as gender equality increases, sex differences in these values should increase, whereas sex differences in other values should not be affected by increases in gender equality. Studies of 25 representative national samples and of students from 68 countries confirmed the hypotheses except for tradition values. Implications for cross-cultural research on sex differences in values and traits are discussed. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  7. Neandertal talus bones from El Sidrón site (Asturias, Spain): A 3D geometric morphometrics analysis.

    PubMed

    Rosas, Antonio; Ferrando, Anabel; Bastir, Markus; García-Tabernero, Antonio; Estalrrich, Almudena; Huguet, Rosa; García-Martínez, Daniel; Pastor, Juan Francisco; de la Rasilla, Marco

    2017-10-01

The El Sidrón tali sample is assessed in an evolutionary framework. We aim to explore the relationship between Neandertal talus morphology and body size/shape. We test hypothesis 1, that talar Neandertal traits are influenced by body size, and hypothesis 2, that shape variables independent of body size correspond to inherited primitive features. We quantify 35 landmarks through 3D geometric morphometrics techniques to describe H. neanderthalensis-H. sapiens shape variation, by mean shape comparisons, principal component, phenetic cluster, and minimum spanning tree analyses, and by partial least squares and regression of talus shape on body variables. Shape variation correlated with body size is compared to Neandertal-Modern Human (MH) evolutionary shape variation. The Neandertal sample is compared to early hominins. The Neandertal talus presents trochlear hypertrophy, greater equality of the trochlear rims, a shorter neck, a more expanded head, curvature and an anterior location of the medial malleolar facet, an expanded and projected lateral malleolar facet, and a laterally expanded posterior calcaneal facet compared to MH. The Neandertal talocrural joint morphology is influenced by body size. The other Neandertal talus traits do not co-vary with it or do not follow the same co-variation pattern as MH. Besides, the trochlear hypertrophy, the trochlear rim equality, and the short neck could be inherited primitive features; the medial malleolar facet morphology could be an inherited primitive feature or a secondarily primitive trait; and the calcaneal posterior facet would be an autapomorphic feature of the Neandertal lineage. © 2017 Wiley Periodicals, Inc.

  8. CHARACTERIZATION OF SEVEN ULTRA-WIDE TRANS-NEPTUNIAN BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Alex H.; Kavelaars, J. J.; Petit, Jean-Marc

    2011-12-10

The low-inclination component of the Classical Kuiper Belt is host to a population of extremely widely separated binaries. These systems are similar to other trans-Neptunian binaries (TNBs) in that the primary and secondary components of each system are of roughly equal size. We have performed an astrometric monitoring campaign of a sample of seven wide-separation, long-period TNBs and present the first-ever well-characterized mutual orbits for each system. The sample contains the most eccentric (2006 CH69, e_m = 0.9) and the most widely separated, weakly bound (2001 QW322, a/R_H ≈ 0.22) binary minor planets known, and also contains the system with the lowest measured mass of any TNB (2000 CF105, M_sys ≈ 1.85 × 10^17 kg). Four systems orbit in a prograde sense, and three in a retrograde sense. They have a different mutual inclination distribution compared to all other TNBs, preferring low mutual-inclination orbits. These systems have geometric r-band albedos in the range of 0.09-0.3, consistent with radiometric albedo estimates for larger solitary low-inclination Classical Kuiper Belt objects, and we limit the plausible distribution of albedos in this region of the Kuiper Belt. We find that gravitational collapse binary formation models produce an orbital distribution similar to that currently observed, which along with a confluence of other factors supports formation of the cold Classical Kuiper Belt in situ through relatively rapid gravitational collapse rather than slow hierarchical accretion. We show that these binary systems are sensitive to disruption via collisions, and their existence suggests that the size distribution of TNOs at small sizes remains relatively shallow.

  9. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…
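The comparison described above can be reproduced with only the standard library; a sketch under illustrative parameters (the constant 2.064 is the tabulated t critical value for 24 degrees of freedom, and the percentile bootstrap is used for the non-parametric interval):

```python
import random
import statistics

random.seed(42)
data = [random.gauss(10, 2) for _ in range(25)]  # symmetric sample, n = 25
mean = statistics.fmean(data)
se = statistics.stdev(data) / len(data) ** 0.5

t_crit = 2.064  # t_{0.975, df=24}, tabulated
t_ci = (mean - t_crit * se, mean + t_crit * se)

# Percentile bootstrap: resample with replacement, take the 2.5th and
# 97.5th percentiles of the bootstrap means.
boot_means = sorted(
    statistics.fmean(random.choices(data, k=len(data))) for _ in range(2000)
)
boot_ci = (boot_means[49], boot_means[1949])
```

For symmetric data like this, the two intervals are close, and the simulation cited above finds the t interval's coverage superior; the bootstrap's case is strongest for skewed data and larger n.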

  10. Bayesian Power Prior Analysis and Its Application to Operational Risk and Rasch Model

    ERIC Educational Resources Information Center

    Zhang, Honglian

    2010-01-01

    When sample size is small, informative priors can be valuable in increasing the precision of estimates. Pooling historical data and current data with equal weights under the assumption that both of them are from the same population may be misleading when heterogeneity exists between historical data and current data. This is particularly true when…

  11. The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups

    ERIC Educational Resources Information Center

    Pero-Cebollero, Maribel; Guardia-Olmos, Joan

    2013-01-01

    In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…

  12. Robust Approximations to the Non-Null Distribution of the Product Moment Correlation Coefficient I: The Phi Coefficient.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Meyers, Sarah A.

    Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…

  13. Low-cycle fatigue of Fe-20%Cr alloy processed by equal- channel angular pressing

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihisa; Tomita, Ryuji; Vinogradov, Alexei

    2014-08-01

    Low-cycle fatigue properties were investigated for Fe-20%Cr ferritic stainless steel processed by equal-channel angular pressing (ECAP). The Fe-20%Cr alloy billets were processed for one to four passes via Route Bc. The ECAPed samples were cyclically deformed at a constant plastic strain amplitude ε_pl of 5×10^-4 at room temperature in air. After the 1-pass ECAP, low-angle grain boundaries were dominantly formed. During the low-cycle fatigue test, the 1-pass sample exhibited rapid softening that continued until fatigue fracture. The fatigue life of the 1-pass sample was shorter than that of a coarse-grained sample. After the 4-pass ECAP, the average grain size was reduced to about 1.5 μm. At the initial stage of the low-cycle fatigue tests, the stress amplitude increased with increasing number of ECAP passes. In the samples processed for two or more passes, the cyclic softening was relatively moderate. It was found that the fatigue life of the ECAPed Fe-20%Cr alloy, except for the 1-pass sample, was improved compared with the coarse-grained sample, even under the strain-controlled fatigue condition.

  14. Thermal conductivity of nanocrystalline silicon: importance of grain size and frequency-dependent mean free paths.

    PubMed

    Wang, Zhaojie; Alaniz, Joseph E; Jang, Wanyoung; Garay, Javier E; Dames, Chris

    2011-06-08

    The thermal conductivity reduction due to grain boundary scattering is widely interpreted using a scattering length assumed equal to the grain size and independent of the phonon frequency (gray). To assess these assumptions and decouple the contributions of porosity and grain size, five samples of undoped nanocrystalline silicon have been measured with average grain sizes ranging from 550 to 64 nm and porosities from 17% to less than 1%, at temperatures from 310 to 16 K. The samples were prepared using current activated, pressure assisted densification (CAPAD). At low temperature the thermal conductivities of all samples show a T² dependence which cannot be explained by any traditional gray model. The measurements are explained over the entire temperature range by a new frequency-dependent model in which the mean free path for grain boundary scattering is inversely proportional to the phonon frequency, which is shown to be consistent with asymptotic analysis of atomistic simulations from the literature. In all cases the recommended boundary scattering length is smaller than the average grain size. These results should prove useful for the integration of nanocrystalline materials in devices such as advanced thermoelectrics.

  15. Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.

    PubMed

    Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter

    2006-06-01

    Experiments and simulations for a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ∝ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size that was larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.

  16. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size, simulations were run for conditions in which the sample sizes in both groups are equal, with 30, 100 and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50 and 100; and in which the relationships between variables are quite high, at a medium level, or quite low. Results: The average classification accuracy of the simulations, each repeated 1000 times for every condition of the trial plan, is given in tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is suited to data in which the relationships between variables are quite high, the independent variables are many, and outlier values are present. PMID:25207065

  18. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.

  19. Map Projections and the Visual Detective: How to Tell if a Map Is Equal-Area, Conformal, or Neither

    ERIC Educational Resources Information Center

    Olson, Judy M.

    2006-01-01

    The ability to see whether a map is equal-area, conformal, or neither is useful for looking intelligently at large-area maps. For example, only if a map is equal-area can reliable judgments of relative size be made. If a map is equal-area, latitude-longitude cells are equal in size between a given pair of parallels, the cells between a given pair…

  20. Effect on the grain size of single-mode microwave sintered NiCuZn ferrite and zinc titanate dielectric resonator ceramics.

    PubMed

    Sirugudu, Roopas Kiran; Vemuri, Rama Krishna Murthy; Venkatachalam, Subramanian; Gopalakrishnan, Anisha; Budaraju, Srinivasa Murty

    2011-01-01

    Microwave sintering of materials depends significantly on dielectric, magnetic and conductive losses. Samples with high dielectric and magnetic loss, such as ferrites, can be sintered easily, but low-dielectric-loss materials such as dielectric resonators (paraelectrics) generate heat with difficulty during microwave interaction. Microwave sintering of materials of these two classes helps in understanding the variation in dielectric and magnetic characteristics with respect to the change in grain size. High-energy ball-milled Ni0.6Cu0.2Zn0.2Fe1.98O4-delta and ZnTiO3 were sintered by conventional and microwave methods and characterized for their respective dielectric and magnetic characteristics. The grain size variation with higher copper content was also observed with conventional and microwave sintering. The grain size in microwave-sintered Ni0.6Cu0.2Zn0.2Fe1.98O4-delta was found to be much smaller and more uniform than in the conventionally sintered sample. However, the grain size of the microwave-sintered sample was almost equal to that of the conventionally sintered sample of Ni0.3Cu0.5Zn0.2Fe1.98O4-delta. In contrast to these high dielectric and magnetic loss ferrites, the paraelectric materials were observed to sinter in the presence of microwaves. Although the microwave-sintered zinc titanate sample showed finer and more uniform grains than the conventional samples, the dielectric characteristics of the microwave-sintered sample were found to be inferior to those of the conventional sample. The low dielectric constant is attributed to the low density. The smaller grain size is found to be responsible for the low quality factor, and the presence of a small percentage of TiO2 is observed to achieve a temperature-stable resonant frequency.

  1. Does stereotype threat influence performance of girls in stereotyped domains? A meta-analysis.

    PubMed

    Flore, Paulette C; Wicherts, Jelte M

    2015-02-01

    Although the effect of stereotype threat concerning women and mathematics has been subject to various systematic reviews, none of them have been performed on the sub-population of children and adolescents. In this meta-analysis we estimated the effects of stereotype threat on performance of girls on math, science and spatial skills (MSSS) tests. Moreover, we studied publication bias and four moderators: test difficulty, presence of boys, gender equality within countries, and the type of control group that was used in the studies. We selected study samples when the study included girls, samples had a mean age below 18 years, the design was (quasi-)experimental, the stereotype threat manipulation was administered between-subjects, and the dependent variable was a MSSS test related to a gender stereotype favoring boys. To analyze the 47 effect sizes, we used random effects and mixed effects models. The estimated mean effect size equaled -0.22 and significantly differed from 0. None of the moderator variables was significant; however, there were several signs for the presence of publication bias. We conclude that publication bias might seriously distort the literature on the effects of stereotype threat among schoolgirls. We propose a large replication study to provide a less biased effect size estimate. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
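
The random-effects pooling used in such a meta-analysis can be sketched with the DerSimonian-Laird estimator; the study-level effects and variances below are hypothetical illustration values, not the 47 effect sizes analyzed in the paper:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2.
    effects: per-study effect sizes; variances: their sampling variances."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effect weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2

# Hypothetical study effects (standardized mean differences) and variances
effects = [-0.30, -0.10, -0.25, -0.05, -0.40]
variances = [0.02, 0.03, 0.015, 0.05, 0.04]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, tau² truncates to zero and the pooled estimate reduces to the fixed-effect mean.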

  2. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
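
For orientation, the familiar normal-approximation sample size for a noninferiority test on the risk-difference scale (a textbook formula, not the paper's constrained maximum likelihood approach) can be sketched as follows; the response rate and margin are hypothetical:

```python
import math

def noninferiority_n_per_group(pi, delta, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation per-group sample size for a noninferiority
    test of a risk difference, assuming the true response rate pi is
    equal in both groups and the margin is delta.
    Defaults: one-sided alpha = 0.025, power = 80%."""
    var = 2.0 * pi * (1.0 - pi)
    return math.ceil(var * (z_alpha + z_beta) ** 2 / delta ** 2)

# Hypothetical design: 80% response rate, 10-percentage-point margin
n = noninferiority_n_per_group(pi=0.80, delta=0.10)
```

Halving the margin quadruples the requirement, which is why margin choice dominates noninferiority design.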

  3. Biostatistics Series Module 5: Determining Sample Size

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. 
Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
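
The interplay of α, power, variance, and effect size described above can be sketched with the standard normal-approximation formula for comparing two means; the z-values are hardcoded for two-sided α = 0.05 and 80% power:

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size for a two-sided two-sample comparison of
    means, via the normal approximation:
        n = 2 * (z_alpha + z_beta)^2 * (sigma / delta)^2
    delta: smallest clinically important difference; sigma: common SD."""
    n = 2.0 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

n_small = n_per_group(delta=0.5, sigma=1.0)  # half-SD difference
n_large = n_per_group(delta=1.0, sigma=1.0)  # full-SD difference
```

Halving the detectable difference quadruples the required sample, illustrating why larger samples are needed for smaller effects.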

  4. 76 FR 63216 - Small Business Size Standards: Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-12

    ... all firms within an industry equally, regardless of their size. The weighted average overcomes that... companies, all else being equal, SBA will establish a size standard higher than the anchor size standard.... Concentration among firms is a measure of inequality of distribution. To evaluate the degree of inequality of...

  5. Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.

    PubMed

    Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A

    1987-01-01

    A new bioaerosol sampler consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters ≥8 μm, 8-2.5 μm, and ≤2.5 μm, respectively; sampling on filters) and a liquid-cooled condenser was designed, fabricated and field-tested in sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap, e.g. in sampling particles of Betula pollen size (ca. 25 μm in diameter). This was prominent during pollen peak periods (e.g. on May 19th, 1985, the virtual impactor collected 9482 and the Burkard trap 2540 Betula pollen grains per m³ of air). Betula antigens were also detected in filter stages where no intact pollen grains were found; in the condenser unit, by contrast, the antigen concentrations were very low.

  6. On-chip collection of particles and cells by AC electroosmotic pumping and dielectrophoresis using asymmetric microelectrodes.

    PubMed

    Melvin, Elizabeth M; Moore, Brandon R; Gilchrist, Kristin H; Grego, Sonia; Velev, Orlin D

    2011-09-01

    The recent development of microfluidic "lab on a chip" devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing.

  7. Autofluorescence spectroscopy of oral mucosa

    NASA Astrophysics Data System (ADS)

    Majumdar, S. K.; Uppal, A.; Gupta, P. K.

    1998-06-01

    We report the results of an in-vitro study on autofluorescence from pathologically characterized normal and malignant squamous tissues from the oral cavity. The study involved biopsy samples from 47 patients with oral cancer, of which 11 patients had cancer of the tongue, 17 of the buccal mucosa and 19 of the alveolus. The results of excitation and emission spectroscopy at several wavelengths (280 nm ≤ λ_ex ≤ 460 nm; 340 nm ≤ λ_em ≤ 520 nm) showed that at λ_ex = 337 nm and 400 nm the mean value of the spectrally integrated fluorescence intensity [Σ_λ I_F(λ)] from the normal tissue sites was about a factor of 2 larger than that from the malignant tissue sites. At other excitation wavelengths the difference in Σ_λ I_F(λ) was not statistically significant. Similarly, for λ_em = 390 nm and 460 nm, the intensity of the 340 nm band of the excitation spectra from normal tissues was observed to be a factor of 2 larger than that from malignant tissues. Analysis of these results suggests that NADH concentration is higher in normal oral tissues than in malignant ones. This contrasts with our earlier observation of a reduced NADH concentration in normal sites of breast tissue vis-a-vis malignant sites. For the 337 nm excited emission spectra, a 10-variable MVLR score (using Σ_λ I_F(λ) and normalized intensities at nine wavelengths as input parameters) provided a sensitivity and specificity of 95.7% and 93.1% over the sample size investigated.

  8. Grain size statistics and depositional pattern of the Ecca Group sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

    NASA Astrophysics Data System (ADS)

    Baiyegunhi, Christopher; Liu, Kuiwu; Gwavava, Oswald

    2017-11-01

    Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms, hydrodynamic energy conditions and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples that show a scattered trend, which is due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while those samples from the Fort Brown Formation are lacustrine or deltaic deposits. The C-M plots indicated that the sediments were deposited mainly by suspension and saltation, and graded suspension. Visher diagrams show that saltation is the major process of transportation, followed by suspension.
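
The graphic grain-size parameters named above (mean, sorting, skewness, kurtosis) are conventionally computed with the Folk and Ward formulas from phi-scale percentiles; the percentile readings below are hypothetical, chosen to resemble a fine, moderately well sorted sandstone rather than taken from the Ecca Group data:

```python
def folk_ward(p):
    """Folk & Ward graphic grain-size statistics.
    p maps percentile -> phi value for percentiles 5, 16, 25, 50, 75, 84, 95."""
    mean = (p[16] + p[50] + p[84]) / 3.0
    sorting = (p[84] - p[16]) / 4.0 + (p[95] - p[5]) / 6.6
    skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, sorting, skewness, kurtosis

# Hypothetical cumulative-curve percentile readings (phi units)
phi = {5: 1.9, 16: 2.2, 25: 2.35, 50: 2.7, 75: 3.05, 84: 3.2, 95: 3.6}
mean, sorting, skewness, kurtosis = folk_ward(phi)
```

With these readings the mean falls in the fine-sand class (2-3 phi), sorting in the moderately well sorted band, skewness near symmetry, and kurtosis in the mesokurtic range, matching the verbal classes used in the abstract.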

  9. Estimation of the vortex length scale and intensity from two-dimensional samples

    NASA Technical Reports Server (NTRS)

    Reuss, D. L.; Cheng, W. P.

    1992-01-01

    A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral-length scale.

  10. Problems associated with using filtration to define dissolved trace element concentrations in natural water samples

    USGS Publications Warehouse

    Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.

    1996-01-01

    Field and laboratory experiments indicate that a number of factors associated with filtration other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample) can produce significant variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations result from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters also may be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-μm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.

  11. The effect of membrane filtration on dissolved trace element concentrations

    USGS Publications Warehouse

    Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.

    1996-01-01

    The almost universally accepted operational definition for dissolved constituents is based on processing whole-water samples through a 0.45-μm membrane filter. Results from field and laboratory experiments indicate that a number of factors associated with filtration, other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample), can produce substantial variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. These variations result from the inclusion/exclusion of colloidally associated trace elements. Thus, 'dissolved' concentrations quantitated by analyzing filtrates generated by processing whole-water through similar pore-sized membrane filters may not be equal/comparable. As such, simple filtration through a 0.45-μm membrane filter may no longer represent an acceptable operational definition for dissolved chemical constituents. This conclusion may have important implications for environmental studies and regulatory agencies.

  12. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
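
A zero-inflated Poisson outcome of the kind modeled above can be sketched by mixing structural zeros with Poisson counts; the mixing probability and rate below are arbitrary illustration values, not estimates from the substance-use data:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplicative method for a Poisson(lam) variate."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_zip(n, p_zero, lam, seed=0):
    """Zero-inflated Poisson: a structural zero with probability p_zero,
    otherwise a Poisson(lam) count."""
    rng = random.Random(seed)
    return [0 if rng.random() < p_zero else poisson_draw(rng, lam)
            for _ in range(n)]

counts = simulate_zip(20000, p_zero=0.3, lam=2.0)
mean = sum(counts) / len(counts)           # E[Y] = (1 - p_zero) * lam = 1.4
frac_zero = counts.count(0) / len(counts)  # P(0) = p_zero + (1 - p_zero) * e^-lam
```

The excess of zeros over a plain Poisson(1.4) is exactly what the zero component of the model captures, while the Poisson component captures the positive counts.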

  13. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
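
The role of the equalizer step size can be illustrated with a single-channel LMS sketch (not the paper's MIMO time- or frequency-domain MMSE equalizer); the 2-tap channel model and step size mu are illustrative assumptions:

```python
import random

def lms_equalize(received, training, taps=5, mu=0.05):
    """LMS-adapted FIR equalizer trained against known symbols.
    Returns the final tap weights and the squared-error history."""
    w = [0.0] * taps
    sq_errs = []
    for n in range(taps, len(received)):
        x = received[n - taps:n][::-1]                 # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))       # equalizer output
        e = training[n - 1] - y                        # error vs known symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)] # LMS weight update
        sq_errs.append(e * e)
    return w, sq_errs

rng = random.Random(1)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(2000)]
# Hypothetical channel with mild inter-symbol interference: h = [1, 0.4]
rx = [symbols[0]] + [symbols[n] + 0.4 * symbols[n - 1]
                     for n in range(1, len(symbols))]
w, sq_errs = lms_equalize(rx, symbols)
early = sum(sq_errs[:200]) / 200
late = sum(sq_errs[-200:]) / 200
```

A larger mu shortens the transient (faster convergence) at the cost of a higher steady-state error floor, which is the trade-off the adaptive step size in the paper exploits.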

  14. Sub-micron particle sampler apparatus and method for sampling sub-micron particles

    DOEpatents

    Gay, D.D.; McMillan, W.G.

    1984-04-12

    Apparatus and method steps for collecting sub-micron sized particles include a collection chamber and cryogenic cooling. The cooling is accomplished by coil tubing carrying nitrogen in liquid form, with the liquid nitrogen changing to the gas phase before exiting from the collection chamber in the tubing. Standard filters are used to filter out particles of diameter greater than or equal to 0.3 microns; however, the present invention is used to trap particles of less than 0.3 micron in diameter. A blower draws air to said collection chamber through a filter which filters particles with diameters greater than or equal to 0.3 micron. The air is then cryogenically cooled so that moisture and sub-micron sized particles in the air condense into ice on the coil. The coil is then heated so that the ice melts, and the liquid is then drawn off and passed through a Buchner funnel where the liquid is passed through a Nuclepore membrane. A vacuum draws the liquid through the Nuclepore membrane, with the Nuclepore membrane trapping sub-micron sized particles therein. The Nuclepore membrane is then covered on its top and bottom surfaces with sheets of Mylar and the assembly is then crushed into a pellet. This effectively traps the sub-micron sized particles for later analysis. 6 figures.

  15. Method for sampling sub-micron particles

    DOEpatents

    Gay, Don D.; McMillan, William G.

    1985-01-01

    Apparatus and method steps for collecting sub-micron sized particles include a collection chamber and cryogenic cooling. The cooling is accomplished by coil tubing carrying nitrogen in liquid form, with the liquid nitrogen changing to the gas phase before exiting from the collection chamber in the tubing. Standard filters are used to filter out particles of diameter greater than or equal to 0.3 microns; however, the present invention is used to trap particles of less than 0.3 micron in diameter. A blower draws air to said collection chamber through a filter which filters particles with diameters greater than or equal to 0.3 micron. The air is then cryogenically cooled so that moisture and sub-micron sized particles in the air condense into ice on the coil. The coil is then heated so that the ice melts, and the liquid is then drawn off and passed through a Buchner funnel where the liquid is passed through a Nuclepore membrane. A vacuum draws the liquid through the Nuclepore membrane, with the Nuclepore membrane trapping sub-micron sized particles therein. The Nuclepore membrane is then covered on its top and bottom surfaces with sheets of Mylar® and the assembly is then crushed into a pellet. This effectively traps the sub-micron sized particles for later analysis.

  16. Sample size determination for GEE analyses of stepped wedge cluster randomized trials.

    PubMed

    Li, Fan; Turner, Elizabeth L; Preisser, John S

    2018-06-19

    In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
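The block exchangeable structure referred to above combines the three correlation types. The sketch below builds this matrix for one cluster and lists its distinct eigenvalues numerically; the cluster dimensions and correlation values are invented for illustration, and the paper's result is that (for continuous responses) power depends on only two of these eigenvalues:

```python
import numpy as np

def block_exchangeable(T, N, a0, a1, a2):
    """Correlation matrix for one closed cohort cluster: T periods x N individuals.
    a0: within-period, a1: inter-period, a2: within-individual correlation."""
    I_T, J_T = np.eye(T), np.ones((T, T))
    I_N, J_N = np.eye(N), np.ones((N, N))
    within = (1 - a0) * I_N + a0 * J_N        # same period, diagonal blocks
    between = (a2 - a1) * I_N + a1 * J_N      # different periods, off-diagonal blocks
    return np.kron(I_T, within) + np.kron(J_T - I_T, between)

# illustrative values only; setting a1 == a2 recovers a cross-sectional design
R = block_exchangeable(T=4, N=10, a0=0.05, a1=0.025, a2=0.2)
eigs = np.round(np.linalg.eigvalsh(R), 8)
print(sorted(set(eigs.tolist())))
```

The Kronecker construction makes the four distinct eigenvalues easy to verify numerically against their closed forms, e.g. the largest is 1 + (N-1)a0 + (T-1)(a2 + (N-1)a1).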

  17. Massively parallel rRNA gene sequencing exacerbates the potential for biased community diversity comparisons due to variable library sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gihring, Thomas; Green, Stefan; Schadt, Christopher Warren

    2011-01-01

    Technologies for massively parallel sequencing are revolutionizing microbial ecology and are vastly increasing the scale of ribosomal RNA (rRNA) gene studies. Although pyrosequencing has increased the breadth and depth of possible rRNA gene sampling, one drawback is that the number of reads obtained per sample is difficult to control. Pyrosequencing libraries typically vary widely in the number of sequences per sample, even within individual studies, and there is a need to revisit the behaviour of richness estimators and diversity indices with variable gene sequence library sizes. Multiple reports and review papers have demonstrated the bias in non-parametric richness estimators (e.g. Chao1 and ACE) and diversity indices when using clone libraries. However, we found that biased community comparisons are accumulating in the literature. Here we demonstrate the effects of sample size on Chao1, ACE, CatchAll, Shannon, Chao-Shen and Simpson's estimations specifically using pyrosequencing libraries. The need to equalize the number of reads being compared across libraries is reiterated, and investigators are directed towards available tools for making unbiased diversity comparisons.
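To illustrate the library-size effect, the following sketch computes Chao1 and Shannon estimates for two libraries of very different depth drawn from the same community, rarefying (subsampling) the larger library down to the smaller depth before comparison. The community and depths are made up, and this is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def chao1(counts):
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)      # singletons
    f2 = np.sum(counts == 2)      # doubletons
    # bias-corrected form avoids division by zero when f2 == 0
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def rarefy(counts, depth, rng):
    # subsample 'depth' reads without replacement to equalize library size
    reads = np.repeat(np.arange(len(counts)), counts)
    sub = rng.choice(reads, size=depth, replace=False)
    return np.bincount(sub, minlength=len(counts))

# two libraries of very different size from the same 200-taxon community
ranks = np.arange(1, 201)
p = ranks / ranks.sum()            # abundance proportional to rank
big = rng.multinomial(50000, p)
small = rng.multinomial(2000, p)
big_r = rarefy(big, 2000, rng)
print(chao1(big), chao1(small), chao1(big_r))
```

Comparing `chao1(big)` directly with `chao1(small)` conflates depth with diversity; comparing `chao1(big_r)` with `chao1(small)` puts both libraries on an equal footing, which is the point reiterated in the abstract.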

  18. The fundamentals of average local variance--Part II: Sampling simple regular patterns with optical imagery.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution in which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution in which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of sample scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
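The ALV procedure can be mimicked on a synthetic checkerboard: degrade the scene by block averaging at successively coarser pixel sizes and compute the mean variance over 3x3 moving windows at each resolution. This toy sketch (not the authors' implementation; scene and square sizes are arbitrary) shows the ALV maximum at a pixel size of 8, i.e. half the 16-unit pattern period, when the scene and sampling grid are phase aligned:

```python
import numpy as np

def checkerboard(n, s):
    """n x n scene of alternating 0/1 squares, each s units wide."""
    r = (np.arange(n) // s) % 2
    return ((r[:, None] + r[None, :]) % 2).astype(float)

def block_mean(img, k):
    """Resample to pixels of size k by block averaging (phase-aligned grid)."""
    n = img.shape[0] // k
    return img[:n * k, :n * k].reshape(n, k, n, k).mean(axis=(1, 3))

def alv(img):
    """Mean variance over all 3x3 moving windows."""
    n = img.shape[0]
    vals = [img[i:i + 3, j:j + 3].var()
            for i in range(n - 2) for j in range(n - 2)]
    return float(np.mean(vals))

scene = checkerboard(64, 8)   # squares 8 "ground units" wide, period 16
curve = {k: alv(block_mean(scene, k)) for k in [1, 2, 4, 8, 16]}
print(curve)
```

At pixel size 16 each pixel averages an equal mix of black and white squares, so the ALV collapses to zero, while very fine pixels make most windows uniform; the maximum sits in between, consistent with the half-distance peak described above.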

  19. Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Horstman, M.; Krisko, P. H.; Liou, J.-C.; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.

    2008-01-01

    Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes from 10 microns to greater than 1 meter. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (greater than or equal to 1 m and greater than or equal to 10 cm) populations by SSN catalog objects and the validation of the statistical approach. Also, it presents results for the populations with sizes of greater than or equal to 3.3 cm, greater than or equal to 1 cm, greater than or equal to 100 micrometers, and greater than or equal to 10 micrometers. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size-range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.

  20. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
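Cronbach's alpha itself is straightforward to compute from an observations-by-items matrix. The sketch below does so for simulated compound symmetric data (a shared factor plus independent noise, so item variances and correlations are equal by construction); the sample size, item count, and variances are illustrative only:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for X with shape (n observations, k items)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n, k = 500, 5
common = rng.normal(size=(n, 1))                 # shared factor
X = common + rng.normal(size=(n, k))             # compound symmetric items
print(cronbach_alpha(X))
```

With unit factor and noise variances, the item intercorrelation is 0.5 and the population alpha is 5(0.5)/(1 + 4(0.5)) ≈ 0.83, which the estimate approaches at this sample size.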

  1. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    NASA Astrophysics Data System (ADS)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctively different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this in turn is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size dependent differences as well as day to night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity of the PM samples. Some of the day to night difference may also have been caused by differing wind directions transporting air masses from different emission sources during the day and the night. The present findings indicate the important role of local particle sources and atmospheric processes in the health related toxicological properties of the PM. The varying toxicological responses evoked by the PM samples showed the importance of examining various particle sizes. In particular, the considerable toxicological activity detected in the PM0.2 size range suggests contributions from combustion sources, new particle formation and atmospheric processes.

  2. Intrafamilial clustering of anti-ATLA-positive persons.

    PubMed

    Kajiyama, W; Kashiwagi, S; Hayashi, J; Nomura, H; Ikematsu, H; Okochi, K

    1986-11-01

    A total of 1,333 persons in 627 families were surveyed for presence of antibody to adult T-cell leukemia-associated antigen (anti-ATLA). Each person was classified according to the anti-ATLA status (positive for sample 1, negative for sample 2) of the head of household of his or her family. In sample 1, the sex- and age-standardized prevalence of anti-ATLA was 38.5%. This was five times as high as the standardized prevalence in sample 2 (7.8%). There were significant differences in prevalence of anti-ATLA between males in samples 1 and 2 and between females in samples 1 and 2. In every age group, prevalence in sample 1 was greater than that in sample 2 except for males aged 60-69 years. In each of four subareas, families in sample 1 had higher standardized prevalence (29.6-42.5%) than families in sample 2 (6.0-9.7%). Although crude prevalence decreased with family size in sample 1 (62.1-25.4%) as well as in sample 2, indirectly standardized prevalence was almost equal within each sample, regardless of number of family members. The degree of aggregation was independent of locality and family size. These data suggest that anti-ATLA-positive persons aggregate in family units.

  3. A Cross-Validation of easyCBM Mathematics Cut Scores in Washington State: 2009-2010 Test. Technical Report #1105

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in the state of Washington. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Washington state…

  4. A Cross-Validation of easyCBM[R] Mathematics Cut Scores in Oregon: 2009-2010. Technical Report #1104

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in Oregon. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Oregon state test was used as the…

  5. Battery condenser system PM2.5 emission factors and rates for cotton gins: Method 201A combination PM10 and PM2.5 sizing cyclones

    USDA-ARS?s Scientific Manuscript database

    This report is part of a project to characterize cotton gin emissions from the standpoint of stack sampling. In 2006, EPA finalized and published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created an urgent need to collect additi...

  6. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.

    PubMed

    Harrar, Solomon W; Kong, Xiaoli

    2015-03-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with that of a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results.

  7. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    PubMed Central

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with that of a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  8. Macrozooplankton biomass in a warm-core Gulf Stream ring: Time series changes in size structure, taxonomic composition, and vertical distribution

    NASA Astrophysics Data System (ADS)

    Davis, Cabell S.; Wiebe, Peter H.

    1985-01-01

    Macrozooplankton size structure and taxonomic composition in warm-core ring 82B was examined from a time series (March, April, June) of ring center MOCNESS (1 m) samples. Size distributions of 15 major taxonomic groups were determined from length measurements digitized from silhouette photographs of the samples. Silhouette digitization allows rapid quantification of Zooplankton size structure and taxonomic composition. Length/weight regressions, determined for each taxon, were used to partition the biomass (displacement volumes) of each sample among the major taxonomic groups. Zooplankton taxonomic composition and size structure varied with depth and appeared to coincide with the hydrographic structure of the ring. In March and April, within the thermostad region of the ring, smaller herbivorous/omnivorous Zooplankton, including copepods, crustacean larvae, and euphausiids, were dominant, whereas below this region, larger carnivores, such as medusae, ctenophores, fish, and decapods, dominated. Copepods were generally dominant in most samples above 500 m. Total macrozooplankton abundance and biomass increased between March and April, primarily because of increases in herbivorous taxa, including copepods, crustacean larvae, and larvaceans. A marked increase in total macrozooplankton abundance and biomass between April and June was characterized by an equally dramatic shift from smaller herbivores (1.0-3.0 mm) in April to large herbivores (5.0-6.0 mm) and carnivores (>15 mm) in June. Species identifications made directly from the samples suggest that changes in trophic structure resulted from seeding type immigration and subsequent in situ population growth of Slope Water zooplankton species.

  9. On-chip collection of particles and cells by AC electroosmotic pumping and dielectrophoresis using asymmetric microelectrodes

    PubMed Central

    Melvin, Elizabeth M.; Moore, Brandon R.; Gilchrist, Kristin H.; Grego, Sonia; Velev, Orlin D.

    2011-01-01

    The recent development of microfluidic “lab on a chip” devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing. PMID:22662040

  10. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
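Two of the naive approaches reviewed (the paired t-test on the complete pairs only, and the two-sample t-test ignoring the pairing) can be contrasted on simulated partially paired data; the effect size, correlation, and sample sizes below are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_pair, n_only = 30, 20

# correlated paired measurements plus extra unpaired observations per arm
x_pair = rng.normal(0.0, 1.0, n_pair)
y_pair = 0.7 * x_pair + rng.normal(0.5, 0.7, n_pair)   # true mean shift of 0.5
x_only = rng.normal(0.0, 1.0, n_only)                   # x measured, y missing
y_only = rng.normal(0.5, 1.0, n_only)                   # y measured, x missing

# paired t-test discards the unpaired observations
t_paired = stats.ttest_rel(x_pair, y_pair)
# two-sample t-test uses everything but ignores the pairing
t_pooled = stats.ttest_ind(np.r_[x_pair, x_only], np.r_[y_pair, y_only])
print(t_paired.pvalue, t_pooled.pvalue)
```

The paired test exploits the within-pair correlation but throws away 40 observations, while the two-sample test keeps all the data but treats correlated pairs as independent; the methods compared in the article try to combine both sources of information.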

  11. Assessment of air sampling methods and size distribution of virus-laden aerosols in outbreaks in swine and poultry farms.

    PubMed

    Alonso, Carmen; Raynor, Peter C; Goyal, Sagar; Olson, Bernard A; Alba, Anna; Davies, Peter R; Torremorell, Montserrat

    2017-05-01

    Swine and poultry viruses, such as porcine reproductive and respiratory syndrome virus (PRRSV), porcine epidemic diarrhea virus (PEDV), and highly pathogenic avian influenza virus (HPAIV), are economically important pathogens that can spread via aerosols. The reliability of methods for quantifying particle-associated viruses as well as the size distribution of aerosolized particles bearing these viruses under field conditions are not well documented. We compared the performance of 2 size-differentiating air samplers in disease outbreaks that occurred in swine and poultry facilities. Both air samplers allowed quantification of particles by size, and measured concentrations of PRRSV, PEDV, and HPAIV stratified by particle size both within and outside swine and poultry facilities. All 3 viruses were detectable in association with aerosolized particles. Proportions of positive sampling events were 69% for PEDV, 61% for HPAIV, and 8% for PRRSV. The highest virus concentrations were found with PEDV, followed by HPAIV and PRRSV. Both air collectors performed equally for the detection of total virus concentration. For all 3 viruses, higher numbers of RNA copies were associated with larger particles; however, a bimodal distribution of particles was observed in the case of PEDV and HPAIV.

  12. Sequencing chess

    NASA Astrophysics Data System (ADS)

    Atashpendar, Arshia; Schilling, Tanja; Voigtmann, Thomas

    2016-10-01

    We analyze the structure of the state space of chess by means of transition path sampling Monte Carlo simulations. Based on the typical number of moves required to transpose a given configuration of chess pieces into another, we conclude that the state space consists of several pockets between which transitions are rare. Skilled players explore an even smaller subset of positions that populate some of these pockets only very sparsely. These results suggest that the usual measures to estimate both the size of the state space and the size of the tree of legal moves are not unique indicators of the complexity of the game, but that considerations regarding the connectedness of states are equally important.

  13. Reliability of dose volume constraint inference from clinical data.

    PubMed

    Lutz, C M; Møller, D S; Hoffmann, L; Knap, M M; Alber, M

    2017-04-21

    Dose volume histogram points (DVHPs) frequently serve as dose constraints in radiotherapy treatment planning. An experiment was designed to investigate the reliability of DVHP inference from clinical data for multiple cohort sizes and complication incidence rates. The experimental background was radiation pneumonitis in non-small cell lung cancer and the DVHP inference method was based on logistic regression. From 102 NSCLC real-life dose distributions and a postulated DVHP model, an 'ideal' cohort was generated where the most predictive model was equal to the postulated model. A bootstrap and a Cohort Replication Monte Carlo (CoRepMC) approach were applied to create 1000 equally sized populations each. The cohorts were then analyzed to establish inference frequency distributions. This was applied to nine scenarios for cohort sizes of 102 (1), 500 (2) to 2000 (3) patients (by sampling with replacement) and three postulated DVHP models. The Bootstrap was repeated for a 'non-ideal' cohort, where the most predictive model did not coincide with the postulated model. The Bootstrap produced chaotic results for all models of cohort size 1 for both the ideal and non-ideal cohorts. For cohort size 2 and 3, the distributions for all populations were more concentrated around the postulated DVHP. For the CoRepMC, the inference frequency increased with cohort size and incidence rate. Correct inference rates >85% were only achieved by cohorts with more than 500 patients. Both Bootstrap and CoRepMC indicate that inference of the correct or approximate DVHP for typical cohort sizes is highly uncertain. CoRepMC results were less spurious than Bootstrap results, demonstrating the large influence that randomness in dose-response has on the statistical analysis.
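The Bootstrap inference-frequency idea can be sketched with a toy version of the procedure: resample the cohort, refit a one-covariate logistic regression for each candidate DVHP, and record how often each candidate attains the highest likelihood. The dose levels, sample size, and effect sizes below are invented, and the volume features are drawn independently, whereas real DVH columns are strongly correlated, which is part of what makes the real inference problem hard:

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """One-covariate logistic regression via Newton-Raphson; returns max log-likelihood."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = np.clip(1 / (1 + np.exp(-X @ b)), 1e-9, 1 - 1e-9)
        W = p * (1 - p)
        b += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    p = np.clip(1 / (1 + np.exp(-X @ b)), 1e-9, 1 - 1e-9)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(3)
n = 102                                     # cohort size as in scenario (1)
doses = [10, 20, 30]                        # hypothetical candidate dose levels (Gy)
V = {d: rng.uniform(0, 1, n) for d in doses}   # "volume receiving >= d" features
true_d = 20
eta = -2.5 + 5.0 * V[true_d]                # postulated dose-response at 20 Gy
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

picks = []
for _ in range(200):                        # bootstrap resamples of the cohort
    idx = rng.integers(0, n, n)
    lls = {d: fit_logistic(V[d][idx], y[idx]) for d in doses}
    picks.append(max(lls, key=lls.get))     # most predictive candidate DVHP
freq = {d: picks.count(d) / len(picks) for d in doses}
print(freq)
```

The resulting `freq` is one row of an inference frequency distribution; with correlated volume features and weaker effects, the mass spreads over neighboring candidates, which is the instability the study quantifies.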

  14. Reliability of dose volume constraint inference from clinical data

    NASA Astrophysics Data System (ADS)

    Lutz, C. M.; Møller, D. S.; Hoffmann, L.; Knap, M. M.; Alber, M.

    2017-04-01

    Dose volume histogram points (DVHPs) frequently serve as dose constraints in radiotherapy treatment planning. An experiment was designed to investigate the reliability of DVHP inference from clinical data for multiple cohort sizes and complication incidence rates. The experimental background was radiation pneumonitis in non-small cell lung cancer and the DVHP inference method was based on logistic regression. From 102 NSCLC real-life dose distributions and a postulated DVHP model, an ‘ideal’ cohort was generated where the most predictive model was equal to the postulated model. A bootstrap and a Cohort Replication Monte Carlo (CoRepMC) approach were applied to create 1000 equally sized populations each. The cohorts were then analyzed to establish inference frequency distributions. This was applied to nine scenarios for cohort sizes of 102 (1), 500 (2) to 2000 (3) patients (by sampling with replacement) and three postulated DVHP models. The Bootstrap was repeated for a ‘non-ideal’ cohort, where the most predictive model did not coincide with the postulated model. The Bootstrap produced chaotic results for all models of cohort size 1 for both the ideal and non-ideal cohorts. For cohort size 2 and 3, the distributions for all populations were more concentrated around the postulated DVHP. For the CoRepMC, the inference frequency increased with cohort size and incidence rate. Correct inference rates  >85 % were only achieved by cohorts with more than 500 patients. Both Bootstrap and CoRepMC indicate that inference of the correct or approximate DVHP for typical cohort sizes is highly uncertain. CoRepMC results were less spurious than Bootstrap results, demonstrating the large influence that randomness in dose-response has on the statistical analysis.

  15. Predicting fractional bed load transport rates: Application of the Wilcock‐Crowe equations to a regulated gravel bed river

    USGS Publications Warehouse

    Gaeuman, David; Andrews, E.D.; Krause, Andreas; Smith, Wes

    2009-01-01

    Bed load samples from four locations in the Trinity River of northern California are analyzed to evaluate the performance of the Wilcock‐Crowe bed load transport equations for predicting fractional bed load transport rates. Bed surface particles become smaller and the fraction of sand on the bed increases with distance downstream from Lewiston Dam. The dimensionless reference shear stress for the mean bed particle size (τ*rm) is largest near the dam, but varies relatively little between the more downstream locations. The relation between τ*rm and the reference shear stresses for other size fractions is constant across all locations. Total bed load transport rates predicted with the Wilcock‐Crowe equations are within a factor of 2 of sampled transport rates for 68% of all samples. The Wilcock‐Crowe equations nonetheless consistently under‐predict the transport of particles larger than 128 mm, frequently by more than an order of magnitude. Accurate prediction of the transport rates of the largest particles is important for models in which the evolution of the surface grain size distribution determines subsequent bed load transport rates. Values of τ*rm estimated from bed load samples are up to 50% larger than those predicted with the Wilcock‐Crowe equations, and sampled bed load transport approximates equal mobility across a wider range of grain sizes than is implied by the equations. Modifications to the Wilcock‐Crowe equation for determining τ*rm and the hiding function used to scale τ*rm to other grain size fractions are proposed to achieve the best fit to observed bed load transport in the Trinity River.
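The quantities being calibrated here can be written down compactly. A minimal sketch of the published Wilcock‐Crowe reference shear stress relations, assuming standard sediment constants; the grain sizes and sand fraction below are illustrative, not Trinity River values.

```python
import math

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81    # water, sediment density; gravity

def tau_star_rm(sand_fraction):
    """Dimensionless reference shear stress for the mean surface grain size."""
    return 0.021 + 0.015 * math.exp(-20.0 * sand_fraction)

def tau_ri(d_i, d_sm, sand_fraction):
    """Reference shear stress (Pa) for grain size d_i, scaled from the mean
    size d_sm by the Wilcock-Crowe hiding function (exponent b < 1, which is
    what pushes the predictions toward equal mobility)."""
    b = 0.67 / (1.0 + math.exp(1.5 - d_i / d_sm))
    t_rm = tau_star_rm(sand_fraction) * (RHO_S - RHO_W) * G * d_sm
    return t_rm * (d_i / d_sm) ** b

d_sm = 0.045                               # mean surface grain size, m
for d_i in (0.002, 0.045, 0.128):          # sand, mean gravel, cobble
    print(f"{d_i * 1000:6.1f} mm -> {tau_ri(d_i, d_sm, 0.15):6.2f} Pa")
```

The modifications proposed in the paper adjust tau_star_rm and the hiding-function scaling; the sketch shows only the unmodified published forms.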

  16. Modified Toxicity Probability Interval Design: A Safer and More Reliable Method Than the 3 + 3 Design for Practical Phase I Trials

    PubMed Central

    Ji, Yuan; Wang, Sue-Jane

    2013-01-01

    The 3 + 3 design is the most common choice among clinicians for phase I dose-escalation oncology trials. In recent reviews, more than 95% of phase I trials have been based on the 3 + 3 design. Given that it is intuitive and its implementation does not require a computer program, clinicians can conduct 3 + 3 dose escalations in practice with virtually no logistic cost, and trial protocols based on the 3 + 3 design pass institutional review board and biostatistics reviews quickly. However, the performance of the 3 + 3 design has rarely been compared with model-based designs in simulation studies with matched sample sizes. In the vast majority of statistical literature, the 3 + 3 design has been shown to be inferior in identifying true maximum-tolerated doses (MTDs), although the sample size required by the 3 + 3 design is often orders-of-magnitude smaller than model-based designs. In this article, through comparative simulation studies with matched sample sizes, we demonstrate that the 3 + 3 design has higher risks of exposing patients to toxic doses above the MTD than the modified toxicity probability interval (mTPI) design, a newly developed adaptive method. In addition, compared with the mTPI design, the 3 + 3 design does not yield higher probabilities in identifying the correct MTD, even when the sample size is matched. Given that the mTPI design is equally transparent, costless to implement with free software, and more flexible in practical situations, we highly encourage its adoption in early dose-escalation studies whenever the 3 + 3 design is also considered. We provide free software to allow direct comparisons of the 3 + 3 design with other model-based designs in simulation studies with matched sample sizes. PMID:23569307
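For readers who want to reproduce such comparisons, the 3 + 3 rule itself is easy to simulate. A minimal sketch, assuming one common formulation of the rule (variants exist) and an illustrative dose-toxicity curve; it is not the paper's simulation code.

```python
import random

def run_3plus3(tox_probs, rng):
    """Return (declared MTD index, patients used); -1 means no safe dose."""
    level, n = 0, 0
    while level < len(tox_probs):
        dlt = sum(rng.random() < tox_probs[level] for _ in range(3))
        n += 3
        if dlt == 0:
            level += 1                       # 0/3 DLTs: escalate
        elif dlt == 1:
            dlt += sum(rng.random() < tox_probs[level] for _ in range(3))
            n += 3
            if dlt <= 1:                     # <=1/6 DLTs: escalate
                level += 1
            else:
                return level - 1, n          # MTD = previous dose
        else:
            return level - 1, n              # >=2/3 DLTs: stop
    return len(tox_probs) - 1, n             # top dose never exceeded

rng = random.Random(7)
true_tox = [0.05, 0.10, 0.25, 0.45]          # illustrative dose-toxicity curve
picks = [run_3plus3(true_tox, rng)[0] for _ in range(2000)]
print("P(select dose index 2):", sum(p == 2 for p in picks) / len(picks))
```

Recording the patients-used count alongside the selected dose is what allows sample-size-matched comparisons against model-based designs such as mTPI.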

  17. Estimation of regional pulmonary deposition and exposure for fumes from SMAW and GMAW mild and stainless steel consumables.

    PubMed

    Hewett, P

    1995-02-01

    The particle size distributions and bulk fume densities for mild steel and stainless steel welding fumes generated using two welding processes (shielded metal arc welding [SMAW] and gas metal arc welding [GMAW]) were used in mathematical models to estimate regional pulmonary deposition (the fraction of each fume expected to deposit in each region of the pulmonary system) and regional pulmonary exposure (the fraction of each fume expected to penetrate to each pulmonary region and would be collected by a particle size-selective sampling device). Total lung deposition for GMAW fumes was estimated at 60% greater than that of SMAW fumes. Considering both the potential for deposition and the fume specific surface areas, it is likely that for equal exposure concentrations GMAW fumes deliver nearly three times the particle surface area to the lungs as SMAW fumes. This leads to the hypothesis that exposure to GMAW fumes constitutes a greater pulmonary hazard than equal exposure to SMAW fumes. The implications of this hypothesis regarding the design of future health studies of welders is discussed.

  18. Ordered alternating binary polymer nanodroplet array by sequential spin dewetting.

    PubMed

    Bhandaru, Nandini; Das, Anuja; Salunke, Namrata; Mukherjee, Rabibrata

    2014-12-10

    We report a facile technique for fabricating an ordered array of nearly equal-sized mesoscale polymer droplets of two constituent polymers (polystyrene, PS and poly(methyl methacrylate), PMMA) arranged in an alternating manner on a topographically patterned substrate. The self-organized array of binary polymers is realized by sequential spin dewetting. First, a dilute solution of PMMA is spin-dewetted on a patterned substrate, resulting in an array of isolated PMMA droplets arranged along the substrate grooves due to self-organization during spin coating itself. The sample is then silanized with octadecyltrichlorosilane (OTS), and subsequently, a dilute solution of PS is spin-coated on to it, which also undergoes spin dewetting. The spin-dewetted PS drops having a size nearly equal to the pre-existing PMMA droplets position themselves between two adjacent PMMA drops under appropriate conditions, forming an alternating binary polymer droplet array. The alternating array formation takes place for a narrow range of solution concentration for both the polymers and depends on the geometry of the substrate. The size of the droplets depends on the extent of confinement, and droplets as small as 100 nm can be obtained by this method, on a suitable template. The findings open up the possibility of creating novel surfaces having ordered multimaterial domains with a potential multifunctional capability.

  19. A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water

    USGS Publications Warehouse

    Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo

    2013-01-01

Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed in the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on pressure of CD4 allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components.
The neutron scattering results indicate that the pores are not equally proportioned in the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.

  20. Reproductive success and heavy metal contamination in Rhode Island common terns

    USGS Publications Warehouse

    Custer, T.W.; Franson, J.C.; Moore, John F.; Myers, J.E.

    1986-01-01

Common tern clutch size, reproductive success, and growth of young recorded from an abandoned barge on the Providence River, an area of heavy metal contamination, were equal to, or greater than, those from less contaminated areas. Concentrations of copper and zinc were higher in livers of nestling terns from the Providence River than from other, less contaminated, areas. However, concentrations of magnesium, manganese, and iron and the frequency of nickel were equal, or lower, at Providence than at other, less contaminated, locations. Among-colony trends in residues of copper, zinc and nickel in prey samples were similar to trends found in nestling livers. Uric acid concentrations in nestling blood were twice as high at the Providence River colony as at another colony and may have resulted from moderate levels of chromium in the diet.

  1. Ultra fine grained Ti prepared by severe plastic deformation

    NASA Astrophysics Data System (ADS)

    Lukáč, F.; Čížek, J.; Knapp, J.; Procházka, I.; Zháňal, P.; Islamgaliev, R. K.

    2016-01-01

Positron annihilation spectroscopy was employed to characterise defects in pure Ti with an ultra fine grained (UFG) structure. UFG Ti samples were prepared by two techniques based on severe plastic deformation (SPD): (i) high pressure torsion (HPT) and (ii) equal channel angular pressing (ECAP). Although HPT is the most efficient technique for grain refinement, the size of HPT-deformed specimens is limited. On the other hand, ECAP is less efficient in grain refinement but makes it possible to produce larger samples more suitable for industrial applications. Characterisation of defects by positron annihilation spectroscopy was accompanied by hardness testing in order to monitor the development of the mechanical properties of UFG Ti.

  2. Age, sex, reproduction, and spatial organization of lynxes colonizing northeastern Minnesota

    USGS Publications Warehouse

    Mech, L.D.

    1980-01-01

From 1972 through 1978, lynxes (Felis lynx) emigrating from Canada were studied in northeastern Minnesota. Fourteen individuals were radio-tracked, 8 were ear-tagged, and 49 carcasses were examined. Sex ratios of the samples were equal during the first years of the study, but females predominated later. At least half of the radio-tagged lynxes were killed by humans; no natural mortality was detected. Home range sizes ranged from 51 to 122 km2 for females and 145 to 243 km2 for males, up to 10 times the sizes of those reported by other workers. Ranges of females tended to overlap. Males and females appeared to be segregated in the population.

  3. Influence of process conditions during impulsed electrostatic droplet formation on size distribution of hydrogel beads.

    PubMed

    Lewińska, Dorota; Rosiński, Stefan; Weryński, Andrzej

    2004-02-01

In the medical applications of microencapsulation of living cells there are strict requirements concerning the high size uniformity and the optimal diameter, the latter dependent on the kind of therapeutic application, of manufactured gel beads. The possibility of manufacturing small size gel bead samples (diameter 300 microm and below) with a low size dispersion (less than 10%), using an impulsed voltage droplet generator, was examined in this work. The main topic was the investigation of the influence of the electric parameters (voltage U, impulse time tau and impulse frequency f) on the quality of the obtained droplets. It was concluded that, owing to the impulse mode and the regulation of the tau and f values, it is possible to work in a controlled manner in the jet flow regime (U > critical voltage UC). It is also possible to obtain uniform bead samples with an average diameter, deff, significantly lower than the nozzle inner diameter dI (bead diameters 0.12-0.25 mm for dI equal to 0.3 mm, size dispersion 5-7%). Alterations of the physical parameters of the process (polymer solution physico-chemical properties, flow rate, distance between nozzle and gelling bath) enable one to manufacture uniform gel beads in a wide range of diameters using a single nozzle.

  4. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

An epistatic genetic architecture can have a significant impact on prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits comprised of epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). Possible values for these factors and the number of combinations of the factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximize the difference between prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method vs. the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when genetic architecture consists solely of additive effects, and heritability is equal to one.
PMID:28720710
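The central RSM step, fitting a second-order polynomial surface to responses observed over factor settings, can be sketched as follows. The response function standing in for the GP simulations, the factor ranges, and the noise level are made-up assumptions; only the fitting machinery is the point.

```python
import random

def design_row(x1, x2):
    """Second-order model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def lstsq(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(len(X)))] for i in range(k)]
    for col in range(k):                        # forward elimination w/ pivot
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):              # back substitution
        b[r] = (A[r][k] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

rng = random.Random(3)
X, y = [], []
for _ in range(100):
    h2 = rng.uniform(0.2, 1.0)                  # heritability factor
    n = rng.uniform(0.0, 1.0)                   # coded training sample size
    # made-up stand-in for the accuracy difference from a GP simulation
    resp = 0.3 * h2 + 0.2 * n - 0.15 * h2 * h2 + 0.1 * h2 * n \
        + rng.gauss(0.0, 0.02)
    X.append(design_row(h2, n))
    y.append(resp)
b = lstsq(X, y)
print([round(v, 2) for v in b])
```

Once the surface is fit, locating its stationary point (or the region where the difference is near zero) is what identifies the factor combinations the abstract discusses.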

  5. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
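The kind of type I error alteration described here is easy to demonstrate by simulation. A minimal sketch with an invented two-stage rule (not from the paper): the second experiment is run only when the first looks promising, and the pooled data are then tested naively at the nominal level.

```python
import random

def naive_two_stage_reject(rng, n1=50, n2=50, crit=1.96):
    """One trial under H0 (mean 0): stage-1 z-test; if merely promising,
    collect a second sample and retest the pooled data, ignoring the
    data-dependent design. Returns True if H0 is rejected."""
    s1 = sum(rng.gauss(0.0, 1.0) for _ in range(n1))
    z1 = s1 / n1 ** 0.5
    if abs(z1) > crit:
        return True                      # stage 1 already "significant"
    if abs(z1) > 1.0:                    # promising: run a second experiment
        s2 = sum(rng.gauss(0.0, 1.0) for _ in range(n2))
        z_pooled = (s1 + s2) / (n1 + n2) ** 0.5
        return abs(z_pooled) > crit      # naive test ignoring the adaptivity
    return False

rng = random.Random(0)
reps = 20000
rate = sum(naive_two_stage_reject(rng) for _ in range(reps)) / reps
print("empirical type I error:", rate)   # exceeds the nominal 0.05
```

The inflation arises because the promising-but-not-significant stage-1 outcomes get a second chance at rejection; controlling the maximum type I error requires accounting for that rule in the critical values.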

  6. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference between samples collected with grab field sampling and analyzed for TSS and the concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  7. The structure and mechanical properties of parts elaborated by direct laser deposition 316L stainless steel powder obtained in various ways

    NASA Astrophysics Data System (ADS)

    Loginova, I. S.; Solonin, A. N.; Prosviryakov, A. S.; Adisa, S. B.; Khalil, A. M.; Bykovskiy, D. P.; Petrovskiy, V. N.

    2017-12-01

In this work the morphology, size, and chemical composition of 316L steel powders produced by two methods were studied: dispersion of the melt by a gas stream, and reduction of metal chlorides followed by plasma atomization of the resulting powder particles. The powder particles produced by the first method have a spherical shape (aspect ratio 1.0-1.2) with an average size of 77 μm and are characterized by the absence of internal porosity. Particles of the powder produced by the second method also have a spherical shape and a defect-free structure; however, their chemical composition may vary from particle to particle. Their average size is 32 μm. Although the powders obtained had different properties, the experimental samples produced by DLD technology demonstrated equally high strength (ultimate strength of 623±5 and 623±18 MPa, respectively) and plasticity (38 and 41%, respectively). It is established that the mechanical properties of the DLD samples increase by 7-10% after surface treatment.

  8. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Using recurrent neural networks for adaptive communication channel equalization.

    PubMed

    Kechriotis, G; Zervas, E; Manolakos, E S

    1994-01-01

    Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise-cancellation in a wide class of applications. An important problem in data communications is that of channel equalization, i.e., the removal of interferences introduced by linear or nonlinear message corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance makes it suitable for high-speed channel equalization. We propose RNN based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have comparable performance with traditional linear filter based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in linear and nonlinear channel equalization cases.
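As a point of reference for the comparison above, a sketch of the traditional linear filter based equalizer that the RNN is benchmarked against: an LMS-adapted FIR filter in training mode. This is not the paper's RNN structure, and the channel taps, noise level, and hyperparameters are illustrative.

```python
import random

def lms_equalize(received, desired, ntaps=11, mu=0.01):
    """Adapt FIR weights w so that the filter output tracks the known
    training symbols (trained adaptation); returns weights and outputs."""
    w = [0.0] * ntaps
    buf = [0.0] * ntaps
    out = []
    for r, d in zip(received, desired):
        buf = [r] + buf[:-1]                    # shift register of inputs
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = d - y                               # training error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        out.append(y)
    return w, out

rng = random.Random(5)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(4000)]   # BPSK training data
channel = [1.0, 0.7, 0.3]                                  # minimum-phase ISI
rx = [sum(c * symbols[i - k] for k, c in enumerate(channel) if i - k >= 0)
      + rng.gauss(0.0, 0.05) for i in range(len(symbols))]
w, out = lms_equalize(rx, symbols)
errs = sum((1.0 if o > 0 else -1.0) != s
           for o, s in zip(out[500:], symbols[500:]))
print("symbol decision errors after convergence:", errs)
```

On a mild minimum-phase channel like this one the linear filter suffices, which matches the abstract's observation; it is on channels with spectral nulls or nonlinear distortion that the RNN equalizer's advantage appears.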

  10. From metamorphosis to maturity in complex life cycles: equal performance of different juvenile life history pathways.

    PubMed

    Schmidt, Benedikt R; Hödl, Walter; Schaub, Michael

    2012-03-01

    Performance in one stage of a complex life cycle may affect performance in the subsequent stage. Animals that start a new stage at a smaller size than conspecifics may either always remain smaller or they may be able to "catch up" through plasticity, usually elevated growth rates. We study how size at and date of metamorphosis affected subsequent performance in the terrestrial juvenile stage and lifetime fitness of spadefoot toads (Pelobates fuscus). We analyzed capture-recapture data of > 3000 individuals sampled during nine years with mark-recapture models to estimate first-year juvenile survival probabilities and age-specific first-time breeding probabilities of toads, followed by model selection to assess whether these probabilities were correlated with size at and date of metamorphosis. Males attained maturity after two years, whereas females reached maturity 2-4 years after metamorphosis. Age at maturity was weakly correlated with metamorphic traits. In both sexes, first-year juvenile survival depended positively on date of metamorphosis and, in males, also negatively on size at metamorphosis. In males, toads that metamorphosed early at a small size had the highest probability to reach maturity. However, because very few toadlets metamorphosed early, the vast majority of male metamorphs had a very similar probability to reach maturity. A matrix projection model constructed for females showed that different juvenile life history pathways resulted in similar lifetime fitness. We found that the effects of date of and size at metamorphosis on different juvenile traits cancelled each other out such that toads that were small or large at metamorphosis had equal performance. Because the costs and benefits of juvenile life history pathways may also depend on population fluctuations, ample phenotypic variation in life history traits may be maintained.

  11. Body Size Correlates with Fertilization Success but not Gonad Size in Grass Goby Territorial Males

    PubMed Central

    Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta

    2012-01-01

    In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003–2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995–1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males. PMID:23056415

  12. Body size correlates with fertilization success but not gonad size in grass goby territorial males.

    PubMed

    Pujolar, Jose Martin; Locatello, Lisa; Zane, Lorenzo; Mazzoldi, Carlotta

    2012-01-01

    In fish species with alternative male mating tactics, sperm competition typically occurs when small males that are unsuccessful in direct contests steal fertilization opportunities from large dominant males. In the grass goby Zosterisessor ophiocephalus, large territorial males defend and court females from nest sites, while small sneaker males obtain matings by sneaking into nests. Parentage assignment of 688 eggs from 8 different nests sampled in the 2003-2004 breeding season revealed a high level of sperm competition. Fertilization success of territorial males was very high but in all nests sneakers also contributed to the progeny. In territorial males, fertilization success correlated positively with male body size. Gonadal investment was explored in a sample of 126 grass gobies collected during the period 1995-1996 in the same area (61 territorial males and 65 sneakers). Correlation between body weight and testis weight was positive and significant for sneaker males, while correlation was virtually equal to zero in territorial males. That body size in territorial males is correlated with fertilization success but not gonad size suggests that males allocate much more energy into growth and relatively little into sperm production once the needed size to become territorial is attained. The increased paternity of larger territorial males might be due to a more effective defense of the nest in comparison with smaller territorial males.

  13. Learning Rate Updating Methods Applied to Adaptive Fuzzy Equalizers for Broadband Power Line Communications

    NASA Astrophysics Data System (ADS)

    Ribeiro, Moisés V.

    2004-12-01

This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules and on feedforward and decision feedback structures, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the harshness of the impulse noise generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than those attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, some interesting BER curves reveal that the proposed techniques are efficient for mitigating the above-mentioned impairments.

  14. Investigation of the immunogenicity of different types of aggregates of a murine monoclonal antibody in mice.

    PubMed

    Freitag, Angelika J; Shomali, Maliheh; Michalakis, Stylianos; Biel, Martin; Siedler, Michael; Kaymakcalan, Zehra; Carpenter, John F; Randolph, Theodore W; Winter, Gerhard; Engert, Julia

    2015-02-01

    The potential contribution of protein aggregates to the unwanted immunogenicity of protein pharmaceuticals is a major concern. In the present study a murine monoclonal antibody was utilized to study the immunogenicity of different types of aggregates in mice. Samples containing defined types of aggregates were prepared by processes such as stirring, agitation, exposure to ultraviolet (UV) light and exposure to elevated temperatures. Aggregates were analyzed by size-exclusion chromatography, light obscuration, turbidimetry, infrared (IR) spectroscopy and UV spectroscopy. Samples were separated into fractions based on aggregate size by asymmetrical flow field-flow fractionation or by centrifugation. Samples containing different types and sizes of aggregates were subsequently administered to C57BL/6 J and BALB/c mice, and serum was analyzed for the presence of anti-IgG1, anti-IgG2a, anti-IgG2b and anti-IgG3 antibodies. In addition, the pharmacokinetic profile of the murine antibody was investigated. In this study, samples containing high numbers of different types of aggregates were administered in order to challenge the in vivo system. The magnitude of immune response depends on the nature of the aggregates. The most immunogenic aggregates were of relatively large and insoluble nature, with perturbed, non-native structures. This study shows that not all protein drug aggregates are equally immunogenic.

  15. Adaptive frequency-domain equalization in digital coherent optical receivers.

    PubMed

    Faruk, Md Saifuddin; Kikuchi, Kazuro

    2011-06-20

    We propose a novel frequency-domain adaptive equalizer in digital coherent optical receivers, which can reduce computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries apart from equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbits/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.

  16. The structure of Turkish trait-descriptive adjectives.

    PubMed

    Somer, O; Goldberg, L R

    1999-03-01

    This description of the Turkish lexical project reports some initial findings on the structure of Turkish personality-related variables. In addition, it provides evidence on the effects of target evaluative homogeneity vs. heterogeneity (e.g., samples of well-liked target individuals vs. samples of both liked and disliked targets) on the resulting factor structures, and thus it provides a first test of the conclusions reached by D. Peabody and L. R. Goldberg (1989) using English trait terms. In 2 separate studies, and in 2 types of data sets, clear versions of the Big Five factor structure were found. And both studies replicated and extended the findings of Peabody and Goldberg; virtually orthogonal factors of relatively equal size were found in the homogeneous samples, and a more highly correlated set of factors with relatively large Agreeableness and Conscientiousness dimensions was found in the heterogeneous samples.

  17. Systems and methods for analyzing liquids under vacuum

    DOEpatents

    Yu, Xiao-Ying; Yang, Li; Cowin, James P.; Iedema, Martin J.; Zhu, Zihua

    2013-10-15

    Systems and methods for supporting a liquid against a vacuum pressure in a chamber can enable analysis of the liquid surface using vacuum-based chemical analysis instruments. No electrical or fluid connections are required to pass through the chamber walls. The systems can include a reservoir, a pump, and a liquid flow path. The reservoir contains a liquid-phase sample. The pump drives flow of the sample from the reservoir, through the liquid flow path, and back to the reservoir. The flow of the sample is not substantially driven by a differential between pressures inside and outside of the liquid flow path. An aperture in the liquid flow path exposes a stable portion of the liquid-phase sample to the vacuum pressure within the chamber. The radius, or size, of the aperture is less than or equal to a critical value required to support a meniscus of the liquid-phase sample by surface tension.

  18. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs.

    PubMed

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs.
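A much-reduced sketch of this kind of Monte Carlo comparison can be written in a few lines. The simulation below assumes normally distributed pretest and posttest scores with equal variances, uses a z-approximation in place of the exact t critical value, and picks an illustrative effect size, correlation, and sample size; it is not a reproduction of the study's 5 × 5 × 4 × 3 design.

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_rates(effect, rho=0.7, n=100, reps=2000):
    """Empirical power of the gain (simple difference) score versus the
    covariance-adjusted score in a two-group pretest/posttest design."""
    gain_rej = ancova_rej = 0
    for _ in range(reps):
        group = np.repeat([0, 1], n)             # 0 = control, 1 = treatment
        pre = rng.normal(size=2 * n)
        post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n) + effect * group
        # Gain score: two-sample z-test on post - pre.
        d = post - pre
        diff = d[group == 1].mean() - d[group == 0].mean()
        se = np.sqrt(d[group == 1].var(ddof=1) / n + d[group == 0].var(ddof=1) / n)
        gain_rej += abs(diff / se) > 1.96
        # Covariance-adjusted score: residualize post on pre, then compare groups.
        slope = np.cov(pre, post)[0, 1] / pre.var(ddof=1)
        r = post - slope * pre
        diff = r[group == 1].mean() - r[group == 0].mean()
        se = np.sqrt(r[group == 1].var(ddof=1) / n + r[group == 0].var(ddof=1) / n)
        ancova_rej += abs(diff / se) > 1.96
    return gain_rej / reps, ancova_rej / reps

gain_power, ancova_power = rejection_rates(effect=0.3)
```

With equal variances the covariance-adjusted residual variance is 1 - ρ², versus 2(1 - ρ) for the gain score, so the adjusted score is at least as powerful here, consistent with Huck and McLean; the gap narrows as ρ approaches 1.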

  19. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    PubMed Central

    Petscher, Yaacov; Schatschneider, Christopher

    2015-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs. PMID:26379310

  20. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  1. Fair Shares, Matey, or Walk the Plank

    ERIC Educational Resources Information Center

    Wilson, P. Holt; Myers, Marrielle; Edgington, Cyndi; Confrey, Jere

    2012-01-01

    Whether sharing a collection of toys among friends or a pie for dessert, children as young as kindergarten age are keen on making sure that everyone gets their "fair share." In the classroom, fair-sharing activities call for creating equal-size groups from a collection of objects or creating equal-size parts of a whole and are generally used by…

  2. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. Some methodologists have questioned the validity of parametric tests in this setting and suggested nonparametric tests; others have found nonparametric tests too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled-resampling method in the nonparametric bootstrap test that may overcome the problems associated with small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
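The idea of pooled resampling can be sketched as follows: under the null hypothesis the two groups share one distribution, so both bootstrap samples are drawn from the pooled data and the observed statistic is referred to the resulting null distribution. This is a generic illustration of the approach, not necessarily the authors' exact algorithm; the sample sizes and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def t_stat(x, y):
    # Welch-type t statistic for a two-sample mean comparison.
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def pooled_bootstrap_test(x, y, n_boot=5000):
    """Two-sample bootstrap test with pooled resampling: under H0 both groups
    come from one distribution, so bootstrap samples are drawn from the pool."""
    observed = abs(t_stat(x, y))
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        exceed += abs(t_stat(bx, by)) >= observed
    return exceed / n_boot

# Small-sample illustration: a clear mean shift versus no shift.
x = rng.normal(0.0, 1.0, size=8)
y = rng.normal(2.5, 1.0, size=8)
p_shifted = pooled_bootstrap_test(x, y)

x0 = rng.normal(0.0, 1.0, size=8)
y0 = rng.normal(0.0, 1.0, size=8)
p_null = pooled_bootstrap_test(x0, y0)
```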

  3. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  4. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  5. [Ophthalmopathy caused by precision work of sorters of precious stones].

    PubMed

    Feĭgin, A A; Korniushina, T A; Rozenblium, Iu Z

    1992-01-01

    A total of 440 female workers aged 17 to 50, with work records ranging from 1 to 29 years, engaged in grading diamonds by color, shape, size, and quality (a total of 24 to 33 positions), were examined. A random sample of 110 subjects was singled out; this sample was divided into two equal groups, with or without asthenopic complaints. Refraction, absolute accommodation volume, and relative accommodation reserves were studied. Comparison of these two groups of workers showed that subjects with precision ophthalmopathy exhibit a trend toward a higher incidence of myopia, a reduction of the absolute accommodation volume by 1.6 diopters, and a reduction of the relative accommodation reserves by 1.3 diopters.

  6. Reference allocations and use of a disparity measure to inform the design of allocation funding formulas in public health programs.

    PubMed

    Buehler, James W; Bernet, Patrick M; Ogden, Lydia L

    2012-01-01

    Funding formulas are commonly used by federal agencies to allocate program funds to states. As one approach to evaluating differences in allocations resulting from alternative formula calculations, we propose the use of a measure derived from the Gini index to summarize differences in allocations relative to 2 referent allocations: one based on equal per-capita funding across states and another based on equal funding per person living in poverty, which we define as the "proportionality of allocation" (PA). These referents reflect underlying values that often shape formula-based allocations for public health programs. The size of state populations serves as a general proxy for the amount of funding needed to support programs across states. While the size of state populations living in poverty is correlated with overall population size, allocations based on states' shares of the national population living in poverty reflect variations in funding need shaped by the association between poverty and multiple adverse health outcomes. The PA measure is a summary of the degree of dispersion in state-specific allocations relative to the referent allocations and provides a quick assessment of the impact of selecting alternative funding formula designs. We illustrate the PA values by adjusting a sample allocation, using various measures of the salary costs and in-state wealth, which might modulate states' needs for federal funding.
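One plausible formalization of such a Gini-derived summary (an illustrative sketch, not necessarily the authors' exact PA formula) sorts states by how far their actual funding share departs from the referent share and computes a Gini coefficient from the resulting Lorenz-type curve. The populations and allocations below are hypothetical.

```python
import numpy as np

def proportionality(allocation, referent):
    """Gini-style dispersion of an allocation relative to a referent allocation
    (e.g. equal per-capita funding). 0 = exactly proportional to the referent;
    values approaching 1 = highly disproportionate."""
    share = np.asarray(allocation, dtype=float)
    share = share / share.sum()
    ref = np.asarray(referent, dtype=float)
    ref = ref / ref.sum()
    # Sort states by actual share relative to referent share, then accumulate
    # a Lorenz-type curve of actual share against referent share.
    order = np.argsort(share / ref)
    s, r = share[order], ref[order]
    cum_s = np.concatenate([[0.0], np.cumsum(s)])
    cum_r = np.concatenate([[0.0], np.cumsum(r)])
    # Gini coefficient = 1 - 2 * area under the Lorenz curve (trapezoid rule).
    area = np.sum((cum_s[1:] + cum_s[:-1]) / 2 * np.diff(cum_r))
    return 1 - 2 * area

pop = np.array([10.0, 20.0, 30.0, 40.0])     # hypothetical state populations
proportional = pop * 1.5                      # equal per-capita funding
flat = np.array([25.0, 25.0, 25.0, 25.0])     # equal funding per state

pa_per_capita = proportionality(proportional, pop)
pa_flat = proportionality(flat, pop)
```

Here a perfectly per-capita-proportional allocation scores 0, while equal funding per state regardless of population scores 0.25 for these populations.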

  7. Constant gradient PFG sequence and automated cumulant analysis for quantifying dispersion in flow through porous media.

    PubMed

    Scheven, U M

    2013-12-01

    This paper describes a new variant of established stimulated echo pulse sequences, and an analytical method for determining diffusion or dispersion coefficients for Gaussian or non-Gaussian displacement distributions. The unipolar displacement encoding PFGSTE sequence uses trapezoidal gradient pulses of equal amplitude g and equal ramp rates throughout while sampling positive and negative halves of q-space. Usefully, the equal gradient amplitudes and gradient ramp rates help to reduce the impact of experimental artefacts caused by residual amplifier transients, eddy currents, or ferromagnetic hysteresis in components of the NMR magnet. The pulse sequence was validated with measurements of diffusion in water and of dispersion in flow through a packing of spheres. The analytical method introduced here permits the robust determination of the variance of non-Gaussian, dispersive displacement distributions. The noise sensitivity of the analytical method is shown to be negligible, using a demonstration experiment with a non-Gaussian longitudinal displacement distribution, measured on flow through a packing of mono-sized spheres. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background: Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small size, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions with a manageable size. Results: The thematic core collection obtained meets the minimum requirements for a core sample: maintenance of at least 80% of the allelic richness of the thematic collection with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions: Building a thematic core collection was here defined by prior selection of accessions that are diverse for the trait of interest, and then by pairwise genetic distances estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections will increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
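The greedy, M-strategy-like step of building a core that retains allelic richness can be sketched with toy data. The accession names, loci, and the 80% retention target below are hypothetical, chosen only to mirror the thresholds mentioned above.

```python
# Hypothetical genotypes: accession -> allele observed at each of 4 marker loci.
genotypes = {
    "acc1": ["A1", "B1", "C1", "D1"],
    "acc2": ["A1", "B2", "C1", "D2"],
    "acc3": ["A2", "B1", "C2", "D1"],
    "acc4": ["A2", "B2", "C1", "D3"],
    "acc5": ["A1", "B1", "C1", "D1"],  # duplicates acc1's alleles
}

def alleles(accs):
    """Set of (locus, allele) pairs carried by a group of accessions."""
    return {(locus, a) for acc in accs for locus, a in enumerate(genotypes[acc])}

def greedy_core(target_fraction=0.8):
    """Greedily add the accession contributing the most new alleles until the
    core retains the target fraction of all alleles (an M-strategy-like heuristic)."""
    total = alleles(genotypes)
    core = []
    while len(alleles(core)) < target_fraction * len(total):
        best = max(
            (a for a in genotypes if a not in core),
            key=lambda a: len(alleles(core + [a])),
        )
        core.append(best)
    return core

core = greedy_core()
```

In the actual method, candidate accessions would first be filtered for diversity in the trait of interest and selection would also weigh pairwise genetic distances, not allele counts alone.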

  9. Protection of obstetric dimensions in a small-bodied human sample.

    PubMed

    Kurki, Helen K

    2007-08-01

    In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. (c) 2007 Wiley-Liss, Inc.
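Mosimann shape variables, used above to separate canal shape from overall size, divide each linear measurement by the individual's geometric mean across measurements. A small sketch with hypothetical pelvic measurements:

```python
import numpy as np

def mosimann_shape(measurements):
    """Mosimann shape variables: divide each linear measurement by the
    individual's geometric mean, separating shape from overall size."""
    m = np.asarray(measurements, dtype=float)
    gm = np.exp(np.log(m).mean(axis=1, keepdims=True))  # geometric mean per row
    return m / gm

# Hypothetical canal measurements (e.g. inlet, midplane, outlet A-P lengths, mm)
# for two individuals; the second is an isometrically scaled-up copy of the first.
small = [110.0, 115.0, 100.0]
large = [x * 1.2 for x in small]
shape = mosimann_shape([small, large])
```

Because an isometrically scaled individual has identical shape variables, the variables isolate proportions from absolute size; the product of each individual's shape variables is always 1.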

  10. 33 CFR Appendix A to Part 157 - Damage Assumptions, Hypothetical Outflows, and Cargo Tank Size and Arrangements

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in section 2 of this Appendix; W_i for a segregated ballast tank may be taken equal to zero; C_i...; C_i for a segregated ballast tank may be taken equal to zero; when b_i is equal to or greater than t_c, K_i is equal to zero; when h_i is equal to or greater than v_s, Z_i is equal...

  11. 33 CFR Appendix A to Part 157 - Damage Assumptions, Hypothetical Outflows, and Cargo Tank Size and Arrangements

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... section 2 of this Appendix; W_i for a segregated ballast tank may be taken equal to zero; C_i = Volume of a... segregated ballast tank may be taken equal to zero; when b_i is equal to or greater than t_c, K_i is equal to zero; when h_i is equal to or greater than v_s, Z_i is equal to zero; b_i...

  12. 33 CFR Appendix A to Part 157 - Damage Assumptions, Hypothetical Outflows, and Cargo Tank Size and Arrangements

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... section 2 of this Appendix; W_i for a segregated ballast tank may be taken equal to zero; C_i = Volume of a... segregated ballast tank may be taken equal to zero; when b_i is equal to or greater than t_c, K_i is equal to zero; when h_i is equal to or greater than v_s, Z_i is equal to zero; b_i...

  13. 33 CFR Appendix A to Part 157 - Damage Assumptions, Hypothetical Outflows, and Cargo Tank Size and Arrangements

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... section 2 of this Appendix; W_i for a segregated ballast tank may be taken equal to zero; C_i = Volume of a... segregated ballast tank may be taken equal to zero; when b_i is equal to or greater than t_c, K_i is equal to zero; when h_i is equal to or greater than v_s, Z_i is equal to zero; b_i...

  14. Micromega/IR: Design and status of a near-infrared spectral microscope for in situ analysis of Mars samples

    NASA Astrophysics Data System (ADS)

    Leroi, Vaitua; Bibring, Jean-Pierre; Berthe, Michel

    2009-07-01

    MicrOmega is an ultra-miniaturized spectral microscope for in situ analysis of samples. It is composed of two microscopes: MicrOmega/VIS, with a spatial sampling of 4 μm or less, working in four colors in the visible range, and MicrOmega/IR (described in this paper), a NIR hyperspectral microscope working in the spectral range 0.9-4 μm with a spatial sampling of 20 μm per pixel. MicrOmega/IR illuminates and images samples a few mm in size and acquires the NIR spectrum of each resolved pixel in up to 320 contiguous spectral channels. The goal of this instrument is to analyze in situ the composition of collected samples at almost their grain-size scale, in a non-destructive way. With the chosen spectral range and resolution, a wide variety of constituents can be identified: minerals such as pyroxene and olivine, ferric oxides, hydrated phyllosilicates, sulfates and carbonates, as well as ices and organics. The composition of the various phases within a given sample is a critical record of its formation and evolution. Coupled with the mapping information, it provides unique clues for describing the history of the parent body (planet, satellite, or small body). In particular, the capability to identify hydrated grains and to characterize their adjacent phases has huge potential in the search for possible bio-relics.

  15. A comparative analysis of support vector machines and extreme learning machines.

    PubMed

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other state-of-the-art machine learning approaches have become significant and have attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that differ from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale sample problems. The results obtained can provide insight into the essential relationship between the two methods, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
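The low training cost of an ELM comes from leaving the randomly initialized hidden layer untrained and solving only the output weights by least squares. A minimal regression sketch (hypothetical data and hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(3)

class ELM:
    """Minimal extreme learning machine for regression: random hidden layer,
    output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random features
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only trained part
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Fit a noisy sine: cheap to train because only the output layer is solved.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
model = ELM().fit(X, y)
mse = np.mean((model.predict(X) - y) ** 2)
```

A single least-squares solve replaces the iterative optimization of SVM training, which is the source of the speed advantage the paper reports for large samples.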

  16. The US Geological Survey, digital spectral reflectance library: version 1: 0.2 to 3.0 microns

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg A.; King, Trude V. V.; Gallagher, Andrea J.; Calvin, Wendy M.

    1993-01-01

    We have developed a digital reflectance spectral library, with management and spectral analysis software. The library includes 500 spectra of 447 samples (some samples include a series of grain sizes) measured from approximately 0.2 to 3.0 microns. The spectral resolution (full width at half maximum) of the reflectance data is less than or equal to 4 nm in the visible (0.2-0.8 microns) and less than or equal to 10 nm in the NIR (0.8-2.35 microns). All spectra were corrected to absolute reflectance using an NBS Halon standard. Library management software lets users search on parameters (e.g. chemical formulae, chemical analyses, purity of samples, mineral groups, etc.) as well as spectral features. Minerals from sulfide, oxide, hydroxide, halide, carbonate, nitrate, borate, phosphate, and silicate groups are represented. X-ray and chemical analyses are tabulated for many of the entries, and all samples have been evaluated for spectral purity. The library also contains end and intermediate members for the olivine, garnet, scapolite, montmorillonite, muscovite, jarosite, and alunite solid-solution series. We have included representative spectra of H2O ice, kerogen, ammonium-bearing minerals, rare-earth oxides, desert varnish coatings, a kaolinite crystallinity series, a kaolinite-smectite series, a zeolite series, and an extensive evaporite series. Because of the importance of vegetation to climate-change studies, we have included 17 spectra of tree leaves, bushes, and grasses.

  17. Variability in body size and shape of UK offshore workers: A cluster analysis approach.

    PubMed

    Stewart, Arthur; Ledingham, Robert; Williams, Hector

    2017-01-01

    Male UK offshore workers have enlarged dimensions compared with UK norms, and knowledge of specific sizes and shapes typifying their physiques will assist a range of functions related to health and ergonomics. A representative sample of the UK offshore workforce (n = 588) underwent 3D photonic scanning, from which 19 extracted dimensional measures were used in k-means cluster analysis to characterise physique groups. Of the 11 resulting clusters, four somatotype groups were expressed: one cluster was muscular and lean, four had greater muscularity than adiposity, three had equal adiposity and muscularity and three had greater adiposity than muscularity. Some clusters appeared constitutionally similar to others, differing only in absolute size. These cluster centroids represent an evidence-base for future designs in apparel and other applications where body size and proportions affect functional performance. They also constitute phenotypic evidence providing insight into the 'offshore culture' which may underpin the enlarged dimensions of offshore workers. Copyright © 2016 Elsevier Ltd. All rights reserved.
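The clustering step can be sketched with a plain Lloyd's-algorithm k-means on standardized dimensional measures. The two-dimensional data below (hypothetical muscularity and adiposity z-scores for two physique groups) stand in for the 19 extracted measures.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, n_iter=100):
    """Plain Lloyd's algorithm: assign points to the nearest centroid, then
    move each centroid to the mean of its assigned points."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Hypothetical standardized body dimensions: a leaner-muscular group and a
# higher-adiposity group, 40 workers each; columns = [muscularity, adiposity].
lean = rng.normal([1.0, -1.0], 0.3, size=(40, 2))
heavy = rng.normal([-0.5, 1.2], 0.3, size=(40, 2))
X = np.vstack([lean, heavy])
labels, centroids = kmeans(X, k=2)
```

In practice k would be chosen by a criterion such as within-cluster variance across candidate values, which is how a study arrives at a solution like 11 clusters.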

  18. Using an experimental manipulation to determine the effectiveness of a stock enhancement program

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    We used an experimental manipulation to determine the impact of stocking 178 mm channel catfish Ictalurus punctatus in six impoundments. The study design consisted of equal numbers (two) of control, ceased-stock, and stocked treatments that were sampled one year before and two years after stocking. Relative abundance, growth, size structure, and average weight significantly changed over time based on samples collected with hoop nets. Catch rates decreased at both ceased-stock lakes and increased for one stocked lake, while growth rates changed for at least one ceased-stock and stocked lake. The average weight of channel catfish in the ceased-stock treatment increased by 6% and 25%, whereas weight decreased by 28% and 78% in both stocked lakes. The variability in observed responses between lakes in both ceased-stock and stocked treatments indicates that a one-size-fits-all stocking agenda is impractical, suggesting lake specific and density-dependent mechanisms affect channel catfish population dynamics.

  19. Children's accuracy of portion size estimation using digital food images: effects of interface design and size of image on computer screen.

    PubMed

    Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy

    2011-03-01

    Objective: To test the effect of image size and the presence of size cues on the accuracy of portion size estimation by children. Design: Children were randomly assigned to seeing images with or without food size cues (utensils and a checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Setting: Laboratory-based, with computer and food models. Subjects: Volunteer multi-ethnic sample of 120 children, equally distributed by gender and age (8 to 13 years), in 2008-2009. Results: The average percentage of correctly classified foods was 60·3 %. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time for portion size estimation compared with scrolling through large pictures, and larger pictures produced more overestimation of size. Conclusions: Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.

  20. The use of generalized linear models and generalized estimating equations in bioarchaeological studies.

    PubMed

    Nikita, Efthymia

    2014-03-01

    The current article explores whether generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and of ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results, particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed, provided that the samples involved in the tests are of approximately equal size. If this prerequisite holds and the samples are of equal variances, analysis of covariance may be adopted. GLM are not subject to these constraints and give results that converge to those obtained from all nonparametric tests. Therefore, they can be used instead of traditional tests, as they give the same information but with the advantage of allowing the study of the simultaneous impact of multiple predictors and their interactions, as well as the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and, in general, whenever paired samples are tested, because GEE are appropriate for correlated data. Copyright © 2013 Wiley Periodicals, Inc.

  1. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  2. Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility

    PubMed Central

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-01-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999

  3. Young women's dynamic family size preferences in the context of transitioning fertility.

    PubMed

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-10-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways.

  4. The Impact of Desired Family Size Upon Family Planning Practices in Rural East Pakistan

    ERIC Educational Resources Information Center

    Mosena, Patricia Wimberley

    1971-01-01

    Results indicated that women whose desired family size is equal to or less than their actual family size have significantly greater frequencies practicing family planning than women whose desired size exceeds their actual size. (Author)

  5. [Distribution and abundance of the lionfish Pterois volitans (Scorpaeniformes: Scorpaenidae) and associated native species in Parque Marino Cayos de San Felipe, Cuba].

    PubMed

    de la Guardia, Elena; Cobián Rojas, Dorka; Espinosa, Leonardo; Hernández, Zaimiuri; García, Lázaro; Arias González, Jesús Ernesto

    2017-03-01

    The first lionfish sighting at the National Park "Cayos de San Felipe" was in 2009 and could be a threat to its marine ecosystem diversity and capacity to generate services. To analyze the incidence of the lionfish invasion in the area, annual sampling was conducted between 2013 and 2015. Lionfish abundance and size were investigated in mangroves through visual census on ten transects of 30x2 m/station, and on coral reefs (15 and 25 m deep) with stereo video on six transects of 50x2 m/station. Additionally, the incidence of potential native competitors and predators on coral reefs was also estimated. Over the three years, the average density of lionfish varied between 0.0-1.3 indiv./100 m2 per sampling station and was not significantly different among habitats (mangroves with 0.6 indiv./100 m2, reefs at 15 m with 0.4 indiv./100 m2, and reefs at 25 m with 0.3 indiv./100 m2). Lionfish density was equal to or lower than competitor density, and equal to or higher than predator density, at both depths. While lionfish density in mangroves and on reefs at 25 m remained temporally stable, it decreased on reefs at 15 m. A temporary increase in competitor density was observed, and predator density did not change during the monitoring period. Lionfish size varied between 5 and 39 cm; the average fish size from mangroves (12.6 cm) was consistently lower than from reefs (25.2 cm) and showed no variation among years. Lionfish size on reefs was greater than that of competitors and smaller than that of predators. Results showed that in the park: 1) mangroves represent lionfish nursery areas; 2) the incidence of reef lionfish was not as high as in other areas of Cuba and the Caribbean; and 3) lionfish abundance on reefs tended to decrease over the years, without the intervention of extractive activities or a high abundance of large native groupers. In this sense, recommendations are made to continue monitoring and to investigate lionfish effects and the factors regulating its incidence in the park.

  6. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
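
The article's models generalize the classical two-subclass change-in-ratio estimator to three or more subclasses with unequal sampling probabilities. As background, that classical special case (the textbook Paulik-Robson form, an illustrative reimplementation rather than code from the paper) can be sketched in a few lines:

```python
def cir_estimate(p1, p2, removed_x, removed_total):
    """Classical two-subclass change-in-ratio estimate of the pre-removal
    population size. p1 and p2 are the estimated proportions of subclass x
    before and after a known removal of removed_x x-individuals out of
    removed_total total removals."""
    if p1 == p2:
        raise ValueError("subclass proportions must change between samples")
    return (removed_x - removed_total * p2) / (p1 - p2)
```

For example, a population of 1000 with 400 of subclass x, after a known removal of 200 x and 50 others, has a post-removal proportion of 200/750; plugging in the exact proportions recovers the true size of 1000.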

  7. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  8. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, which couples the EnKF with information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.

  9. Methodological issues with adaptation of clinical trial design.

    PubMed

    Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T

    2006-01-01

    Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint and dropping a treatment arm. For sample size re-estimation, we make a contrast between an adaptive test weighting the two-stage test statistics with the statistical information given by the original design and the original sample mean test with a properly corrected critical value. We point out the difficulty in planning a confirmatory trial based on the crude information generated by exploratory trials. In regards to selecting a primary endpoint, we argue that the selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one from the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is that of how to select an adaptation rule in the trial planning stage. Pre-specification of the adaptation rule is important for the practicality consideration. Changing the originally intended hypothesis for testing with the internal data generates great concerns to clinical trial researchers.

  10. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve shapes, three statistical tests were proposed. For large sample sizes with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed, which are functions of the outcome estimates and standard errors at each of the study time points for the two strata. For small sample sizes with independent normal assumptions, an F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-tests; however, with high interaction, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
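
The abstract does not give the exact test constructions, but one common large-sample form consistent with its description (independent, approximately normal estimates per stratum per time point) can be sketched as follows; the specific formulas here are illustrative assumptions, not the paper's definitions:

```python
import math

def pointwise_z(est1, se1, est2, se2):
    """Per-time-point Z statistics for the difference between two strata,
    assuming independent, approximately normal estimates at each point."""
    return [(a - b) / math.sqrt(sa ** 2 + sb ** 2)
            for a, sa, b, sb in zip(est1, se1, est2, se2)]

def overall_z(est1, se1, est2, se2):
    """One common overall Z: the scaled sum of pointwise Z values, which is
    most sensitive to a roughly constant shift between near-parallel curves."""
    z = pointwise_z(est1, se1, est2, se2)
    return sum(z) / math.sqrt(len(z))

def overall_chi2(est1, se1, est2, se2):
    """Sum of squared pointwise Z values; under the null this is
    approximately chi-square with T degrees of freedom (T time points),
    and it retains power when the curves cross (high interaction)."""
    return sum(v * v for v in pointwise_z(est1, se1, est2, se2))
```

With three yearly coverage estimates per stratum, e.g. 70/72/74% versus 65/66/68% (each with a 2-point standard error), both statistics indicate a significant separation, matching the abstract's observation that the Z form is strongest for parallel curves.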

  11. Electromagnetic properties of photodefinable barium ferrite polymer composites

    NASA Astrophysics Data System (ADS)

    Sholiyi, Olusegun; Lee, Jaejin; Williams, John D.

    2014-07-01

    This article reports the magnetic and microwave properties of barium ferrite powder suspended in a polymer matrix. The barium hexaferrite powder sizes are 3-6 μm for the coarse powder and 0.8-1.0 μm for the fine powder. Samples with 1:1 and 3:1 (by mass) ratios of ferrite to SU8 were characterized and analyzed to predict the necessary combinations of these powders with SU8 2000 negative photoresist. The magnetization properties of these materials were also determined and analyzed using a vibrating sample magnetometer (VSM). The thru, reflect, line (TRL) calibration technique was employed to determine the complex relative permittivity and permeability of the powders and composites with SU8 between 26.5 and 40 GHz.

  12. Environmental factors controlling the distribution of rhodoliths: An integrated study based on seafloor sampling, ROV and side scan sonar data, offshore the W-Pontine Archipelago

    NASA Astrophysics Data System (ADS)

    Sañé, E.; Chiocci, F. L.; Basso, D.; Martorelli, E.

    2016-10-01

    The effects of different environmental factors controlling the distribution of different morphologies, sizes and growth forms of rhodoliths in the western Pontine Archipelago have been studied. The analysis of 231 grab samples has been integrated with 68 remotely operated vehicle (ROV) videos (22 h) and a high resolution (<1 m) side scan sonar mosaic of the seafloor surrounding the Archipelago, covering an area of approximately 460 km2. Living rhodoliths were collected in approximately 10% of the grab samples and observed in approximately 30% of the ROV dives. The combination of sediment sampling, video surveys and acoustic facies mapping suggested that the presence of rhodoliths can be associated with the dishomogeneous high backscatter sonar facies and high backscatter facies. Both pralines and unattached branches were found to be the most abundant morphological groups (50% and 41% of samples, respectively), whereas boxwork rhodoliths were less common, accounting for less than 10% of the total number of samples. Pralines and boxwork rhodoliths were almost equally distributed among large (28%), medium (36%) and small sizes (36%). Pralines generally presented a fruticose growth form (49% of pralines), even if pralines with encrusting-warty (36% of pralines) or lumpy (15% of pralines) growth forms were also present. Morphologies, sizes and growth forms vary mainly along the depth gradient. Large rhodoliths with a boxwork morphology are abundant at depth, whereas unattached branches and, in general, rhodoliths with a high protuberance degree are abundant in shallow waters. The exposure to storm waves and bottom currents related to geostrophic circulation could explain the absence of rhodoliths off the eastern side of the three islands forming the Archipelago.

  13. Nanomesh phononic structures for low thermal conductivity and thermoelectric energy conversion materials

    DOEpatents

    Yu, Jen-Kan; Mitrovic, Slobodan; Heath, James R.

    2016-08-16

    A nanomesh phononic structure includes: a sheet including a first material, the sheet having a plurality of phononic-sized features spaced apart at a phononic pitch, the phononic pitch being smaller than or equal to twice a maximum phonon mean free path of the first material and the phononic size being smaller than or equal to the maximum phonon mean free path of the first material.

  14. Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?

    NASA Astrophysics Data System (ADS)

    Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.

    2017-12-01

    Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has even been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible for both setting initial outer TC size and determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius at which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin during future climates, as revealed by 1) a statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between the distributions of outer TC size from current and future climate simulations as shown using two-sample Kolmogorov-Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not uniformly increase within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help identify the source of the spatial variability in outer TC size increases within each basin during future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
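
The bootstrap comparison of median outer TC size described above can be sketched with the standard library. This is an illustrative reimplementation under assumed synthetic data, not the study's code; the percentile cutoffs and size values below are hypothetical:

```python
import random
import statistics

def bootstrap_median_shift(current, future, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% interval for the relative shift in median
    outer TC size between two climates. Each of n_boot iterations resamples
    both size samples with replacement, mirroring the abstract's
    1,000-sample bootstrap with replacement."""
    rng = random.Random(seed)
    shifts = []
    for _ in range(n_boot):
        cur = [rng.choice(current) for _ in current]
        fut = [rng.choice(future) for _ in future]
        shifts.append(statistics.median(fut) / statistics.median(cur) - 1.0)
    shifts.sort()
    # Shift is "significant" at the 5% level if this interval excludes 0.
    return shifts[int(0.025 * n_boot)], shifts[int(0.975 * n_boot)]
```

On synthetic radii where every future-climate size is exactly 7% larger, the interval sits comfortably above zero, analogous to the 5-10% median increases reported.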

  15. Hypoglossal canal size and hominid speech

    PubMed Central

    DeGusta, David; Gilbert, W. Henry; Turner, Scott P.

    1999-01-01

    The mammalian hypoglossal canal transmits the nerve that supplies the motor innervation to the tongue. Hypoglossal canal size has previously been used to date the origin of human-like speech capabilities to at least 400,000 years ago and to assign modern human vocal abilities to Neandertals. These conclusions are based on the hypothesis that the size of the hypoglossal canal is indicative of speech capabilities. This hypothesis is falsified here by the finding of numerous nonhuman primate taxa that have hypoglossal canals in the modern human size range, both absolutely and relative to oral cavity volume. Specimens of Australopithecus afarensis, Australopithecus africanus, and Australopithecus boisei also have hypoglossal canals that, both absolutely and relative to oral cavity volume, are equal in size to those of modern humans. The basis for the hypothesis that hypoglossal canal size is indicative of speech was the assumption that hypoglossal canal size is correlated with hypoglossal nerve size, which in turn is related to tongue function. This assumption is probably incorrect, as we found no apparent correlation between the size of the hypoglossal nerve, or the number of axons it contains, and the size of the hypoglossal canal in a sample of cadavers. Our data demonstrate that the size of the hypoglossal canal does not reflect vocal capabilities or language usage. Thus the date of origin for human language and the speech capabilities of Neandertals remain open questions. PMID:9990105

  16. Investigation of Microstructure and Mechanical Properties of ECAP-Processed AM Series Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Gopi, K. R.; Nayaka, H. Shivananda; Sahu, Sandeep

    2016-09-01

    Magnesium alloy Mg-Al-Mn (AM70) was processed by equal channel angular pressing (ECAP) at 275 °C for up to 4 passes in order to produce an ultrafine-grained microstructure and improve its mechanical properties. ECAP-processed samples were characterized for microstructural analysis using optical microscopy, scanning electron microscopy, and transmission electron microscopy. Microstructural analysis showed that, with an increase in the number of ECAP passes, grains were refined and the average grain size was reduced from 45 to 1 µm. Electron backscatter diffraction analysis showed the transition from low angle grain boundaries to high angle grain boundaries in the ECAP 4-pass sample as compared to the as-cast sample. The strength and hardness values showed an increasing trend for the initial 2 passes of ECAP processing and then started decreasing with further increase in the number of ECAP passes, even though the grain size continued to decrease in all the successive ECAP passes. However, the strength and hardness values still remained quite high when compared to the initial condition. This behavior was found to be correlated with texture modification in the material as a result of ECAP processing.

  17. Sample size determination in combinatorial chemistry.

    PubMed Central

    Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K

    1995-01-01

    Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi2 distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586
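
The abstract's key claim, that the overall relative error of a fair split/recombine step behaves like a Pearson statistic with a chi-square distribution, can be checked with a short simulation. This is an illustrative check only; the paper's bead-number formulas and tolerance limits L1, L2 are not reproduced here:

```python
import random

def pearson_statistic(counts):
    """Pearson statistic of observed pool counts against an equimolar split."""
    expected = sum(counts) / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

def simulate_split(n_beads, n_pools, rng):
    """One random split step: each bead independently lands in one pool."""
    counts = [0] * n_pools
    for _ in range(n_beads):
        counts[rng.randrange(n_pools)] += 1
    return counts

# Under a fair split, the statistic is approximately chi-square with
# n_pools - 1 degrees of freedom; its exact multinomial mean is n_pools - 1.
rng = random.Random(1)
stats = [pearson_statistic(simulate_split(1000, 10, rng)) for _ in range(500)]
mean_stat = sum(stats) / len(stats)
```

With 1000 beads split into 10 pools, the simulated mean of the statistic stays close to 9, the chi-square mean for 9 degrees of freedom, which is what makes a chi-square quantile usable for choosing the required bead count.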

  18. Effects of Initial Powder Size on the Mechanical Properties and Microstructure of As-Extruded GRCop-84

    NASA Technical Reports Server (NTRS)

    Okoro, Chika L.

    2004-01-01

    GRCop-84 was developed to meet the mechanical and thermal property requirements for advanced regeneratively cooled rocket engine main combustion chamber liners. It is a ternary Cu-Cr-Nb alloy having approximately 8 at% Cr and 4 at% Nb. The chromium and niobium constituents combine to form 14 vol% Cr2Nb, the strengthening phase. The alloy is made by producing GRCop-84 powder through gas atomization and consolidating the powder using extrusion, hot isostatic pressing (HIP) or vacuum plasma spraying (VPS). GRCop-84 has been selected by Rocketdyne, Pratt & Whitney and Aerojet for use in their next generation of rocket engines. GRCop-84 demonstrates favorable mechanical and thermal properties at elevated temperatures. Compared to NARloy-Z, the currently used material in the Space Shuttle, GRCop-84 has approximately twice the yield strength, 10-1000 times the creep life, and 1.5-2.5 times the low cycle fatigue life. The thermal expansion of GRCop-84 is 7.5-15% less than NARloy-Z, which minimizes thermally induced stresses. The thermal conductivity of the two alloys is comparable at low temperature, but NARloy-Z has a 20-50 W/mK thermal conductivity advantage at typical rocket engine hot wall temperatures. GRCop-84 is also much more microstructurally stable than NARloy-Z, which translates into better long-term stability of mechanical properties. Previous research into metal alloys fabricated by means of powder metallurgy (PM) has demonstrated that initial powder size can affect the microstructural development and mechanical properties of such materials. Grain size, strength, ductility, size of second phases, etc., have all been shown to vary with starting powder size in PM alloys. This work focuses on characterizing the effect of varying starting powder size on the microstructural evolution and mechanical properties of as-extruded GRCop-84.
    Tensile tests and constant load creep tests were performed on extrusions of four powder meshes: +140 mesh (greater than 105 micron powder size), -140 mesh (less than or equal to 105 microns), -140/+270 mesh (53-105 microns), and -270 mesh (less than or equal to 53 microns). Samples were tested in tension at room temperature and at 500 C (932 F). Creep tests were performed under vacuum at 500 C using a stress of 111 MPa (16.1 ksi). The fracture surfaces of selected samples from both tests were studied using a scanning electron microscope (SEM). The as-extruded materials were also studied, using both optical microscopy and SEM analysis, to characterize changes within the microstructure.

  19. Mechanical Properties and Microstructure of AZ31B Magnesium Alloy Processed by I-ECAP

    NASA Astrophysics Data System (ADS)

    Gzyl, Michal; Rosochowski, Andrzej; Pesci, Raphael; Olejnik, Lech; Yakushina, Evgenia; Wood, Paul

    2014-03-01

    Incremental equal channel angular pressing (I-ECAP) is a severe plastic deformation process used to refine the grain size of metals, which allows very long billets to be processed. As described in the current article, an AZ31B magnesium alloy was processed for the first time by three different routes of I-ECAP, namely, A, BC, and C, at 523 K (250 °C). The structure of the material was homogenized and refined to an average grain size of ~5 µm, irrespective of the route used. The mechanical properties of the I-ECAPed samples in tension and compression were investigated. A strong influence of the processing route on the yield and fracture behavior of the material was established. It was found that texture controls the mechanical properties of AZ31B magnesium alloy subjected to I-ECAP. SEM and OM techniques were used to obtain microstructural images of the I-ECAPed samples subjected to tension and compression. Increased ductility after I-ECAP was attributed to twinning suppression and the facilitation of slip on the basal plane. Shear bands were revealed in the samples processed by I-ECAP and subjected to tension. Tension-compression yield stress asymmetry in the samples tested along the extrusion direction was suppressed in the material processed by routes BC and C. This effect was attributed to textural development and microstructural homogenization. Twinning activities in fine- and coarse-grained samples have also been studied.

  20. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
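
    The two-stage approach described in this record reduces each cluster to its mean and analyzes those means in an ordinary linear model. Below is a minimal null-hypothesis simulation sketch of that idea; the cluster sizes, ICC, and the unweighted cluster-means t-test are all invented for illustration and are not the authors' exact (weighted, Kenward-Roger) variants:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cluster_means(cluster_sizes, icc=0.05, effect=0.0):
    """Simulate one arm of a cluster-randomized trial with Gaussian outcomes
    (total variance 1) and return the cluster means: the second-stage outcomes."""
    sigma_b = np.sqrt(icc)        # between-cluster SD
    sigma_w = np.sqrt(1 - icc)    # within-cluster SD
    means = []
    for m in cluster_sizes:
        cluster_effect = rng.normal(0.0, sigma_b)
        y = effect + cluster_effect + rng.normal(0.0, sigma_w, size=m)
        means.append(y.mean())
    return np.array(means)

def pooled_t(a, b):
    """Ordinary pooled two-sample t statistic on the cluster means."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Unbalanced design: six clusters per arm with unequal sizes.
sizes = [5, 10, 20, 40, 10, 15]
T_CRIT = 2.228   # two-sided 5% critical value of the t distribution, 10 df
n_sims, rejections = 2000, 0
for _ in range(n_sims):
    arm_a = simulate_cluster_means(sizes)   # null: no treatment effect
    arm_b = simulate_cluster_means(sizes)
    rejections += abs(pooled_t(arm_a, arm_b)) > T_CRIT

print(f"empirical Type I error at nominal 0.05: {rejections / n_sims:.3f}")
```

    Because both arms share the same size vector here, the cluster means are identically distributed across groups and the empirical rate sits near 0.05; the record's point is that when the arms themselves are unbalanced, no exact size α test exists and the choice of weights (inverse estimated variance of the cluster means) starts to matter.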

  1. Ground-Water Quality and Potential Effects of Individual Sewage Disposal System Effluent on Ground-Water Quality in Park County, Colorado, 2001-2004

    USGS Publications Warehouse

    Miller, Lisa D.; Ortiz, Roderick F.

    2007-01-01

    In 2000, the U.S. Geological Survey, in cooperation with Park County, Colorado, began a study to evaluate ground-water quality in the various aquifers in Park County that supply water to domestic wells. The focus of this study was to identify and describe the principal natural and human factors that affect ground-water quality. In addition, the potential effects of individual sewage disposal system (ISDS) effluent on ground-water quality were evaluated. Ground-water samples were collected from domestic water-supply wells from July 2001 through October 2004 in the alluvial, crystalline-rock, sedimentary-rock, and volcanic-rock aquifers to assess general ground-water quality and effects of ISDS's on ground-water quality throughout Park County. Samples were analyzed for physical properties, major ions, nutrients, bacteria, and boron; and selected samples also were analyzed for dissolved organic carbon, human-related (wastewater) compounds, trace elements, radionuclides, and age-dating constituents (tritium and chlorofluorocarbons). Drinking-water quality is adequate for domestic use throughout Park County with a few exceptions. Only about 3 percent of wells had concentrations of fluoride, nitrate, and (or) uranium that exceeded U.S. Environmental Protection Agency national, primary drinking-water standards. These primary drinking-water standards were exceeded only in wells completed in the crystalline-rock aquifers in eastern Park County. Escherichia coli bacteria were detected in one well near Guffey, and total coliform bacteria were detected in about 11 percent of wells sampled throughout the county. The highest total coliform concentrations were measured southeast of the city of Jefferson and west of Tarryall Reservoir. Secondary drinking-water standards were exceeded more frequently. About 19 percent of wells had concentrations of one or more constituents (pH, chloride, fluoride, sulfate, and dissolved solids) that exceeded secondary drinking-water standards. 
Currently (2004), there is no federally enforced drinking-water standard for radon in public water-supply systems, but proposed regulations suggest a maximum contaminant level of 300 picocuries per liter (pCi/L) and an alternative maximum contaminant level of 4,000 pCi/L contingent on other mitigating remedial activities to reduce radon levels in indoor air. Radon concentrations in about 91 percent of ground-water samples were greater than or equal to 300 pCi/L, and about 25 percent had radon concentrations greater than or equal to 4,000 pCi/L. Generally, the highest radon concentrations were measured in samples collected from wells completed in the crystalline-rock aquifers. Analyses of ground-water-quality data indicate that recharge from ISDS effluent has affected some local ground-water systems in Park County. Because roughly 90 percent of domestic water used is assumed to be recharged by ISDS's, detections of human-related (wastewater) compounds in ground water in Park County are not surprising; however, concentrations of constituents associated with ISDS effluent generally are low (concentrations near the laboratory reporting levels). Thirty-eight different organic wastewater compounds were detected in 46 percent of ground-water samples, and the number of compounds detected per sample ranged from 1 to 17 compounds. Samples collected from wells with detections of wastewater compounds also had significantly higher (p-value < 0.05) chloride and boron concentrations than samples from wells with no detections of wastewater compounds. ISDS density (average subdivision lot size used to estimate ISDS density) was related to ground-water quality in Park County. Chloride and boron concentrations were significantly higher in ground-water samples collected from wells located in areas that had average subdivision lot sizes of less than 1 acre than in areas that had average subdivision lot sizes greater than or equal to 1 acre. For wells completed in the crystalline-

  2. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism.

    PubMed

    Reno, Philip L; Lovejoy, C Owen

    2015-01-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or "Lucy" and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or "Kadanuumuu") can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen's contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.

  3. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism

    PubMed Central

    Lovejoy, C. Owen

    2015-01-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or “Lucy” and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or “Kadanuumuu”) can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen’s contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic. PMID:25945314

  4. Relaxation of Selection With Equalization of Parental Contributions in Conservation Programs: An Experimental Test With Drosophila melanogaster

    PubMed Central

    Rodríguez-Ramilo, S. T.; Morán, P.; Caballero, A.

    2006-01-01

    Equalization of parental contributions is one of the most simple and widely recognized methods to maintain genetic diversity in conservation programs, as it halves the rate of increase in inbreeding and genetic drift. It has, however, the negative side effect of implying a reduced intensity of natural selection so that deleterious genes are less efficiently removed from the population with possible negative consequences on the reproductive capacity of the individuals. Theoretical results suggest that the lower fitness resulting from equalization of family sizes relative to that for free contribution schemes is expected to be substantial only for relatively large population sizes and after many generations. We present a long-term experiment with Drosophila melanogaster, comparing the fitness performance of lines maintained with equalization of contributions (EC) and others maintained with no management (NM), allowing for free matings and contributions from parents. Two (five) replicates of size N = 100 (20) individuals of each type of line were maintained for 38 generations. As expected, EC lines retained higher gene diversity and allelic richness for four microsatellite markers and a higher heritability for sternopleural bristle number. Measures of life-history traits, such as egg-to-adult viability, mating success, and global fitness declined with generations, but no significant differences were observed between EC and NM lines. Our results, therefore, provide no evidence to suggest that equalization of family sizes entails a disadvantage on the reproductive capacity of conserved populations in comparison with no management procedures, even after long periods of captivity. PMID:16299385

  5. THE NEGRO AND EQUAL EMPLOYMENT OPPORTUNITIES, A REVIEW OF MANAGEMENT EXPERIENCES IN TWENTY COMPANIES.

    ERIC Educational Resources Information Center

    FERMAN, LOUIS A.

    TO STUDY THE APPLICATION OF EQUAL EMPLOYMENT PRACTICES IN COMPANY SETTINGS AND TO ASSESS THE IMPACT OF THESE PRACTICES ON MINORITY GROUP EMPLOYMENT, 20 COMPANIES WITH VARYING EMPLOYMENT STRUCTURE, INDUSTRY, SIZE, NUMBER OF BRANCH UNITS, GEOGRAPHICAL SPREAD, AND PRODUCT OR SERVICE WERE STUDIED. ALL WERE TRYING TO PROMOTE EQUAL OPPORTUNITIES IN…

  6. Improvement of Strength and Energy Absorption Properties of Porous Aluminum Alloy with Aligned Unidirectional Pores Using Equal-Channel Angular Extrusion

    NASA Astrophysics Data System (ADS)

    Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke

    2018-04-01

    Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and loading back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE. Both the energy absorption E V and energy absorption efficiency η V after four passes of ECAE were approximately 1.2 times higher than that of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and existence of unidirectional pores parallel to the compressive direction in the structure.

  7. Improvement of Strength and Energy Absorption Properties of Porous Aluminum Alloy with Aligned Unidirectional Pores Using Equal-Channel Angular Extrusion

    NASA Astrophysics Data System (ADS)

    Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke

    2018-06-01

    Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and loading back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE. Both the energy absorption E V and energy absorption efficiency η V after four passes of ECAE were approximately 1.2 times higher than that of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and existence of unidirectional pores parallel to the compressive direction in the structure.

  8. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    PubMed

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
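
    The hidden bias described here is easy to reproduce in a toy simulation: if one behavioral class is more catchable, a trap sample drawn "at random" still overestimates that class. The sketch below uses invented numbers (three growth classes, a 2:1 catchability ratio loosely echoing the fish result); it is an illustration of the mechanism, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: equal numbers of slow, intermediate, fast growers.
growth = np.repeat([1.0, 2.0, 3.0], 1000)         # growth-rate classes
# Fast growers are assumed twice as catchable as slow growers.
catchability = np.repeat([1.0, 1.5, 2.0], 1000)

# Draw a trap sample: inclusion probability proportional to catchability.
probs = catchability / catchability.sum()
sample_idx = rng.choice(growth.size, size=300, replace=False, p=probs)
sample_mean = growth[sample_idx].mean()

print(f"population mean growth: {growth.mean():.2f}")   # 2.00
print(f"sampled mean growth:    {sample_mean:.2f}")     # biased upward
```

    The expected sampled mean under these weights is about 2.22 versus a true mean of 2.00, a bias that no amount of replication removes because it is systematic, not random.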

  9. Fano Resonance of Eu2+ and Eu3+ in (Eu,Gd)Te MBE Layers

    NASA Astrophysics Data System (ADS)

    Orlowski, B. A.; Kowalski, B. J.; Dziawa, P.; Pietrzyk, M.; Mickievicius, S.; Osinniy, V.; Taliashvili, B.; Kowalik, I. A.; Story, T.; Johnson, R. L.

    2006-11-01

    Resonant photoemission spectroscopy with synchrotron radiation was used to study the valence band electronic structure of the clean surface of (Eu,Gd)Te layers. Fano-type resonant photoemission spectra corresponding to the Eu 4d-4f transition were measured to determine the contribution of 4f electrons of Eu2+ and Eu3+ ions to the valence band. The resonant and antiresonant photon energies were found to be 141 eV and 132 eV, respectively, for Eu2+ ions, and 146 eV and 132 eV, respectively, for Eu3+ ions. The contribution of Eu2+ 4f electrons was found at the valence band edge, while for Eu3+ it was located in the region between 3.5 eV and 8.5 eV below the valence band edge.

  10. A Modified Kolmogorov-Smirnov, Anderson-Darling, and Cramer-Von Mises Test for the Cauchy Distribution with Unknown Location and Scale Parameters.

    DTIC Science & Technology

    1985-12-01

    statistics, each of the α levels fall. The mirror image of this is to work with the percentiles, or the 1 − α levels. These then become the minimum... To be valid, the power would have to be close to the α-levels, and that is the case. The powers are not exactly equal to the α-levels, but that is a... Information available increases with sample size. When α-levels are analyzed, for α = .01, the only reasonable power is... against the

  11. Experiential Teaching Increases Medication Calculation Accuracy Among Baccalaureate Nursing Students.

    PubMed

    Hurley, Teresa V

    Safe medication administration is an international goal. Calculation errors cause patient harm despite education. The research purpose was to evaluate the effectiveness of an experiential teaching strategy to reduce errors in a sample of 78 baccalaureate nursing students at a Northeastern college. A pretest-posttest design with random assignment into equal-sized groups was used. The experiential strategy was more effective than the traditional method (t = -0.312, df = 37, p = .004, 95% CI) with a reduction in calculation errors. Evaluations of error type and teaching strategies are indicated to facilitate course and program changes.

  12. Estimate of the size and demographic structure of the owned dog and cat population living in Veneto region (north-eastern Italy).

    PubMed

    Capello, Katia; Bortolotti, Laura; Lanari, Manuela; Baioni, Elisa; Mutinelli, Franco; Vascellari, Marta

    2015-01-01

    The knowledge of the size and demographic structure of animal populations is a necessary prerequisite for any population-based epidemiological study, especially to ascertain and interpret prevalence data, to implement surveillance plans in controlling zoonotic diseases and, moreover, to provide accurate estimates of tumours incidence data obtained by population-based registries. The main purpose of this study was to provide an accurate estimate of the size and structure of the canine population in Veneto region (north-eastern Italy), using the Lincoln-Petersen version of the capture-recapture methodology. The Regional Canine Demographic Registry (BAC) and a sample survey of households of Veneto Region were the capture and recapture sources, respectively. The secondary purpose was to estimate the size and structure of the feline population in the same region, using the same survey applied for dog population. A sample of 2465 randomly selected households was drawn and submitted to a questionnaire using the CATI technique, in order to obtain information about the ownership of dogs and cats. If the dog was declared to be identified, owner's information was used to recapture the dog in the BAC. The study was conducted in Veneto Region during 2011, when the dog population recorded in the BAC was 605,537. Overall, 616 households declared to possess at least one dog (25%), with a total of 805 dogs and an average per household of 1.3. The capture-recapture analysis showed that 574 dogs (71.3%, 95% CI: 68.04-74.40%) had been recaptured in both sources, providing a dog population estimate of 849,229 (95% CI: 814,747-889,394), 40% higher than that registered in the BAC. Concerning cats, 455 of 2465 (18%, 95% CI: 17-20%) households declared to possess at least one cat at the time of the telephone interview, with a total of 816 cats. 
The mean number of cats per household was equal to 1.8, providing an estimate of the cat population in Veneto region equal to 663,433 (95% CI: 626,585-737,159). The estimate of the size and structure of owned canine and feline populations in Veneto region provide useful data to perform epidemiological studies and monitoring plans in this area. Copyright © 2014 Elsevier B.V. All rights reserved.
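
    The record's dog-population figure follows from the classic Lincoln-Petersen estimator, N̂ = n₁n₂/m, applied to the two capture sources. A minimal sketch (function name is invented) using the counts reported above:

```python
def lincoln_petersen(n1, n2, m):
    """Classic Lincoln-Petersen abundance estimate N = n1 * n2 / m.

    n1: animals in the first source (here, dogs registered in the BAC)
    n2: animals in the second source (dogs reported in the household survey)
    m:  animals found in both sources (recaptures)
    """
    if m == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m

# Counts from the Veneto study: 605,537 registered dogs, 805 surveyed dogs,
# 574 of which were recaptured in the registry.
n_hat = lincoln_petersen(605_537, 805, 574)
print(round(n_hat))  # 849229, matching the published estimate
```

    Reproducing the published 849,229 confirms that the study used the simple two-source formula; the confidence interval quoted in the record requires an additional variance estimate not shown here.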

  13. Effect of microstructure on the thermoelectric performance of La{sub 1−x}Sr{sub x}CoO{sub 3}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viskadourakis, Z.; Department of Mechanical and Manufacturing Engineering, University of Cyprus, 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia; Athanasopoulos, G.I.

    We present a case where the microstructure has a profound effect on the thermoelectric properties of oxide compounds. Specifically, we have investigated the effect of different sintering treatments on La{sub 1−x}Sr{sub x}CoO{sub 3} samples synthesized using the Pechini method. We found that the samples which are dense and consist of inhomogeneously-mixed grains of different size exhibit both a higher Seebeck coefficient and a higher thermoelectric figure of merit than the samples which are porous and consist of grains of almost identical size. The enhancement of the Seebeck coefficient in the dense samples is attributed to the so-called “energy-filtering” mechanism that is related to the energy barrier of the grain boundary. On the other hand, the thermal conductivity of the porous compounds is significantly reduced in comparison with the dense compounds. It is suggested that fine manipulation of the grain size ratio combined with fine-tuning of the porosity could considerably enhance the thermoelectric performance of oxides. - Graphical abstract: The enhancement of the dimensionless thermoelectric figure of merit ZT is presented for two equally Sr-doped LaCoO3 compounds possessing different microstructure, indicating the effect of the latter on the thermoelectric performance of the La{sub 1−x}Sr{sub x}CoO{sub 3} solid solution. - Highlights: • Electrical and thermal transport properties are affected by the microstructure in La{sub 1−x}Sr{sub x}CoO{sub 3} polycrystalline materials. • Coarse/fine grain size distribution enhances the Seebeck coefficient. • Porosity reduces the thermal conductivity in La{sub 1−x}Sr{sub x}CoO{sub 3} polycrystalline samples. • The combination of a large/small grain ratio distribution with high porosity may result in enhancement of the thermoelectric performance of the material.

  14. An Evaluation of Sharp Cut Cyclones for Sampling Diesel Particulate Matter Aerosol in the Presence of Respirable Dust

    PubMed Central

    Cauda, Emanuele; Sheehan, Maura; Gussman, Robert; Kenny, Lee; Volkwein, Jon

    2015-01-01

    Two prototype cyclones were the subjects of a comparative research campaign with a diesel particulate matter sampler (DPMS) that consists of a respirable cyclone combined with a downstream impactor. The DPMS is currently used in mining environments to separate dust from the diesel particulate matter and to avoid interferences in the analysis of integrated samples and direct-reading monitoring in occupational environments. The sampling characteristics of all three devices were compared using ammonium fluorescein, diesel, and coal dust aerosols. With solid spherical test aerosols at low particle loadings, the aerodynamic size-selection characteristics of all three devices were found to be similar, with 50% penetration efficiencies (d50) close to the design value of 0.8 µm, as required by the US Mine Safety and Health Administration for monitoring occupational exposure to diesel particulate matter in US mining operations. The prototype cyclones were shown to have ‘sharp cut’ size-selection characteristics that equaled or exceeded the sharpness of the DPMS. The penetration of diesel aerosols was optimal for all three samplers, while the results of the tests with coal dust induced the exclusion of one of the prototypes from subsequent testing. The sampling characteristics of the remaining prototype sharp cut cyclone (SCC) and the DPMS were tested with different loading of coal dust. While the characteristics of the SCC remained constant, the deposited respirable coal dust particles altered the size-selection performance of the currently used sampler. This study demonstrates that the SCC performed better overall than the DPMS. PMID:25060240

  15. Size-based trends and management implications of microhabitat utilization by Brown Treesnakes, with an emphasis on juvenile snakes

    USGS Publications Warehouse

    Rodda, Gordon H.; Reed, Robert N.

    2007-01-01

    The brown treesnake (Boiga irregularis, or BTS), a costly invasive species, has been the subject of intensive research on Guam over the past two decades. The behavior and habitat use of hatchling and juvenile snakes, however, remain largely unknown. We used a long-term dataset of BTS captures (N = 2,415) and a dataset resulting from intensive sampling within and immediately around a 5-ha fenced population (N = 2,541) to examine habitat use of BTS. Small snakes were almost exclusively arboreal and appeared to prefer tangantangan (Leucaena leucocephala) habitats. In contrast, large snakes used arboreal and terrestrial habitats in roughly equal proportion, and were less frequently found in tangantangan. Among snakes found in trees, there were no clear size-based preferences for certain heights above ground, nor for size-based choice of perch diameters. We discuss these results as they relate to management and interdiction implications for brown treesnakes on Guam and in potential incipient populations on other islands.

  16. Piezoresistive AFM cantilevers surpassing standard optical beam deflection in low noise topography imaging

    PubMed Central

    Dukic, Maja; Adams, Jonathan D.; Fantner, Georg E.

    2015-01-01

    Optical beam deflection (OBD) is the most prevalent method for measuring cantilever deflections in atomic force microscopy (AFM), mainly due to its excellent noise performance. In contrast, piezoresistive strain-sensing techniques provide benefits over OBD in readout size and the ability to image in light-sensitive or opaque environments, but traditionally have worse noise performance. Miniaturisation of cantilevers, however, brings much greater benefit to the noise performance of piezoresistive sensing than to OBD. In this paper, we show both theoretically and experimentally that by using small-sized piezoresistive cantilevers, AFM imaging noise equal to or lower than the OBD readout noise is feasible at standard scanning speeds and power dissipation. We demonstrate that with both readouts we achieve a system noise of ≈0.3 Å at 20 kHz measurement bandwidth. Finally, we show that small-sized piezoresistive cantilevers are well suited for piezoresistive nanoscale imaging of biological and solid state samples in air. PMID:26574164

  17. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
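
    The Hill numbers referred to above have a simple closed form, (Σᵢ pᵢ^q)^{1/(1−q)}, with exp(Shannon entropy) as the q → 1 limit. A minimal numpy sketch (function name and abundance vector are invented; this omits the record's singleton correction and rarefaction/extrapolation machinery):

```python
import numpy as np

def hill_number(abundances, q):
    """Hill number (effective number of taxa) of order q:
    (sum_i p_i^q)^(1/(1-q)), taking the limit exp(Shannon entropy) at q = 1."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()                  # relative abundances of observed taxa
    if np.isclose(q, 1.0):
        return float(np.exp(-np.sum(p * np.log(p))))   # exp(Shannon entropy)
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

counts = [50, 30, 10, 5, 5]   # hypothetical taxa abundance vector
for q in (0, 1, 2):
    # q = 0: richness; q = 1: exp(Shannon); q = 2: inverse Simpson index
    print(q, round(hill_number(counts, q), 2))
```

    As q grows the measure discounts rare taxa, which is why the profile over q ≥ 0 conveys the whole abundance distribution; for a perfectly even community of S taxa every order returns S.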

  18. M551 metals melting experiment. [space manufacturing of aluminum alloys, tantalum alloys, stainless steels

    NASA Technical Reports Server (NTRS)

    Li, C. H.; Busch, G.; Creter, C.

    1976-01-01

    The Metals Melting Skylab Experiment consisted of selectively melting, in sequence, three rotating discs made of aluminum alloy, stainless steel, and tantalum alloy. For comparison, three other discs of the same three materials were similarly melted or welded on the ground. The power source of the melting was an electron beam unit. Results are presented which support the concept that the major difference between ground base and Skylab samples (i.e., large elongated grains in ground base samples versus nearly equiaxed and equal sized grains in Skylab samples) can be explained on the basis of constitutional supercooling, and not on the basis of surface phenomena. Microstructural observations on the weld samples and present explanations for some of these observations are examined. In particular, ripples and their implications to weld solidification were studied. Evidence of pronounced copper segregation in the Skylab A1 weld samples, and the tantalum samples studied, indicates a weld microhardness (and hence strength) that is uniformly higher than the ground base results, which is in agreement with previous predictions. Photographs are shown of the microstructure of the various alloys.

  19. Plasma kinetics of chylomicron-like emulsion and lipid transfers to high-density lipoprotein (HDL) in lacto-ovo vegetarian and in omnivorous subjects.

    PubMed

    Vinagre, Juliana C; Vinagre, Carmen C G; Pozzi, Fernanda S; Zácari, Cristiane Z; Maranhão, Raul C

    2014-04-01

    Previously, it was shown that a vegan diet improves the metabolism of triglyceride-rich lipoproteins by increasing the plasma clearance of atherogenic remnants. The aim of the current study was to investigate this metabolism in lacto-ovo vegetarians, whose diet is less strict, allowing the ingestion of eggs and milk. Transfer of lipids to HDL, an important step in HDL metabolism, was tested in vitro. Eighteen lacto-ovo vegetarians and 29 omnivorous subjects, all eutrophic and normolipidemic, were intravenously injected with triglyceride-rich emulsions labeled with ¹⁴C-cholesterol oleate and ³H-triolein. Fractional clearance rates (FCR, in min⁻¹) were calculated from samples collected during 60 min. Lipid transfer to HDL was assayed by incubating plasma samples with a donor nanoemulsion labeled with radioactive lipids. LDL cholesterol was lower in vegetarians than in omnivores (2.1 ± 0.8 and 2.7 ± 0.7 mmol/L, respectively, p < 0.05), but HDL cholesterol and triglycerides were equal. Cholesteryl ester FCR was greater in vegetarians than in omnivores (0.016 ± 0.012, 0.003 ± 0.003, p < 0.01), whereas triglyceride FCR was equal. Cholesteryl ester transfer to HDL was lower in vegetarians than in omnivores (2.7 ± 0.6, 3.5 ± 1.5 %, p < 0.05), but free cholesterol, triglyceride and phospholipid transfers and HDL size were equal. Similarly to the vegan diet, the lacto-ovo vegetarian diet increases remnant removal, as indicated by cholesteryl oleate FCR, which may favor atherosclerosis prevention, and has the ability to change lipid transfer to HDL.

  20. The Multigroup Ethnic Identity Measure-Revised: Measurement invariance across racial and ethnic groups

    PubMed Central

    Brown, Susan D.; Unger Hu, Kirsten A.; Mevi, Ashley A.; Hedderson, Monique M.; Shan, Jun; Quesenberry, Charles P.; Ferrara, Assiamira

    2014-01-01

    The Multigroup Ethnic Identity Measure-Revised (MEIM-R), a brief instrument assessing affiliation with one’s ethnic group, is a promising advance in the ethnic identity literature. However, equivalency of its measurement properties across specific racial and ethnic groups should be confirmed before using it in diverse samples. We examined a) the psychometric properties of the MEIM-R including factor structure, measurement invariance, and internal consistency reliability, and b) levels of and differences in ethnic identity across multiple racial and ethnic groups and subgroups. Asian (n = 630), Black/African American (n = 58), Hispanic (n = 240), multiethnic (n = 160), and White (n = 375) women completed the MEIM-R as part of the “Gestational diabetes’ Effect on Moms” diabetes prevention trial in the Kaiser Permanente Northern California health care setting (N = 1,463; M age 32.5 years, SD = 4.9). Multiple-groups confirmatory factor analyses provided provisional evidence of measurement invariance, i.e., an equal, correlated two-factor structure, equal factor loadings, and equal item intercepts across racial and ethnic groups. Latent factor means for the two MEIM-R subscales, exploration and commitment, differed across groups; effect sizes ranging from small to large generally supported the notion of ethnic identity as more salient among people of color. Pending replication, good psychometric properties in this large and diverse sample of women support the future use of the MEIM-R. Preliminary evidence of measurement invariance suggests that the MEIM-R could be used to measure and compare ethnic identity across multiple racial and ethnic groups. PMID:24188656

  1. Thermal stability of Cu-Cr-Zr alloy processed by equal-channel angular pressing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abib, Khadidja

    Thermal stability of a Cu-Cr-Zr alloy processed by equal-channel angular pressing up to 16 passes was investigated using isochronal annealing ranging from 250 to 850 °C for 1 h. The microstructure, crystallographic texture and microhardness of samples were characterized through electron backscatter diffraction and Vickers microhardness measurements. The recrystallized grain size was stable between 250 °C and 500 °C, then increased quickly. The achieved mean grain size, after 1, 4 and 16 ECAP passes, was around 5.5 μm. A discontinuous mode of recrystallization was found to occur and a particle-stimulated nucleation (PSN) mechanism was evidenced. The high angle grain boundary fraction increased notably after annealing above 550 °C. The crystallographic texture after isochronal annealing was similar to that of ECAP simple shear; no change of the texture during annealing was observed, only slight intensity variations. Microhardness of all Cu–Cr–Zr samples showed hardening with two peaks at 400 and 500 °C, associated with precipitation of Cr clusters and the Cu₅Zr phase respectively, followed by subsequent softening upon increasing annealing temperature due to recrystallization. - Highlights: •The Cu-1Cr-0.1Zr alloy shows very good thermal stability up to 550 °C after ECAP. •A discontinuous recrystallization was found to occur and a PSN mechanism was evidenced. •The annealing texture was found weak and some new components appear. •Hardening is attributed to Cr clustering followed by Cu₅₁Zr₁₄ precipitation. •Softening is a result of recrystallization and grain growth progressing.

  2. Calculating electronic tunnel currents in networks of disordered irregularly shaped nanoparticles by mapping networks to arrays of parallel nonlinear resistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghili Yajadda, Mir Massoud

    2014-10-21

    We have shown both theoretically and experimentally that tunnel currents in networks of disordered irregularly shaped nanoparticles (NPs) can be calculated by treating the networks as arrays of parallel nonlinear resistors. Each resistor is described by a one-dimensional or two-dimensional array of equal-size nanoparticles in which the tunnel junction gaps between nanoparticles are assumed to be equal. The number of tunnel junctions between two contact electrodes and the tunnel junction gaps between nanoparticles are found to be functions of Coulomb blockade energies. In addition, the tunnel barriers between nanoparticles were considered to be tilted at high voltages. Furthermore, the role of the thermal expansion coefficient of the tunnel junction gaps on the tunnel current is taken into account. The model calculations fit very well to the experimental data of a network of disordered gold nanoparticles, a forest of multi-wall carbon nanotubes, and a network of few-layer graphene nanoplates over a wide temperature range (5-300 K) at low and high DC bias voltages (0.001 mV–50 V). Our investigations indicate that, although electron cotunneling in networks of disordered irregularly shaped NPs may occur, non-Arrhenius behavior at low temperatures cannot be described by the cotunneling model due to the size distribution in the networks and the irregular shape of the nanoparticles. Non-Arrhenius behavior of the samples in the zero bias voltage limit was attributed to disorder in the samples. Unlike the electron cotunneling model, we found that the crossover from Arrhenius to non-Arrhenius behavior occurs at two temperatures, one at a high temperature and the other at a low temperature.

  3. Preparation and structural characterization of vulcanized natural rubber nanocomposites containing nickel-zinc ferrite nanopowders.

    PubMed

    Bellucci, F S; Salmazo, L O; Budemberg, E R; da Silva, M R; Rodríguez-Pérez, M A; Nobre, M A L; Job, A E

    2012-03-01

    Single-phase polycrystalline mixed nickel-zinc ferrites belonging to Ni0.5Zn0.5Fe2O4 were prepared on a nanometric scale (mean crystallite size equal to 14.7 nm) by a chemical synthesis known as the modified polyol method. The ferrite nanopowder was then incorporated into a natural rubber matrix, producing nanocomposites. The samples were investigated by means of infrared spectroscopy, X-ray diffraction, scanning electron microscopy and magnetic measurements. The obtained results suggest that the base concentration of nickel-zinc ferrite nanoparticles inside the polymer matrix volume greatly influences the magnetic properties of the nanocomposites. A small quantity of nanoparticles, less than 10 phr, in the nanocomposite is sufficient to produce a small alteration in the semi-crystallinity of the nanocomposites observed by X-ray diffraction analysis, and it produces a flexible magnetic composite material with a saturation magnetization, a coercivity field and an initial magnetic permeability equal to 3.08 emu/g, 99.22 Oe and 9.42 × 10⁻⁵, respectively.

  4. 25 CFR 39.107 - Are schools allotted supplemental funds for special student and/or school costs?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM Indian School Equalization Formula Base and... size §§ 39.140 through 39.156 Geographic isolation of the school § 39.160 Gifted and Talented Programs ...

  5. 25 CFR 39.107 - Are schools allotted supplemental funds for special student and/or school costs?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM Indian School Equalization Formula Base and... size §§ 39.140 through 39.156 Geographic isolation of the school § 39.160 Gifted and Talented Programs ...

  6. The structured ancestral selection graph and the many-demes limit.

    PubMed

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, under the additional assumptions of island-type migration and demes of equal size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  7. Local sample thickness determination via scanning transmission electron microscopy defocus series.

    PubMed

    Beyer, A; Straubinger, R; Belz, J; Volz, K

    2016-05-01

    The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have increased significantly in the past decade due to the introduction of aberration correction. In parallel with the consequent increase in convergence angle, the depth of focus has decreased severely and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high resolution high angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness, above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
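    The thickness readout described above reduces to measuring the full width at half maximum of the intensity standard deviation as a function of defocus. A minimal sketch (our illustration, not the authors' code; it assumes a single-peaked curve whose maximum lies inside the scanned defocus range):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked curve y(x),
    measured above its background (taken as min(y)); the half-max
    crossings are found by linear interpolation between sample
    points. Assumes the peak is interior to the scan range."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    bg = y.min()
    half = bg + 0.5 * (y.max() - bg)
    above = y >= half
    i = int(np.argmax(above))                      # first index at/above half max
    j = len(y) - 1 - int(np.argmax(above[::-1]))   # last index at/above half max
    x_left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    x_right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return x_right - x_left
```

    Applied to the standard deviation of the HAADF image intensity at each defocus step, the returned width estimates the local foil thickness, once the thickness exceeds the minimum set by the aperture size and chromatic aberration.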

  8. The Size Evolution of Passive Galaxies: Observations From the Wide-Field Camera 3 Early Release Science Program

    NASA Technical Reports Server (NTRS)

    Ryan, R. E., Jr.; Mccarthy, P.J.; Cohen, S. H.; Yan, H.; Hathi, N. P.; Koekemoer, A. M.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A.; O’Connell, R. W.; et al.

    2012-01-01

    We present the size evolution of passively evolving galaxies at z ≈ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in ≈40 arcmin² to H < 25 mag. By fitting the 10-band Hubble Space Telescope photometry over 0.22 μm ≲ λ_obs ≲ 1.6 μm with stellar population synthesis models, we simultaneously determine photometric redshift, stellar mass, and a bevy of other population parameters. Based on the six galaxies with published spectroscopic redshifts, we estimate a typical redshift uncertainty of ≈0.033(1 + z). We determine effective radii from Sérsic profile fits to the H-band image using an empirical point-spread function. By supplementing our data with published samples, we propose a mass-dependent size evolution model for passively evolving galaxies, where the most massive galaxies (M* ≈ 10¹¹ M_⊙) undergo the strongest evolution from z ≈ 2 to the present. Parameterizing the size evolution as (1 + z)^(−α), we find a tentative scaling of α ≈ (−0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10⁹ M_⊙), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*–R_e relation for red galaxies.
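    The quoted parameterization is simple enough to evaluate directly. A sketch using only the central values of the fit (the quoted uncertainties of ±0.7 and ±0.4 are large, so this is purely illustrative):

```python
import math

def alpha_of_mass(m_star_msun):
    """Central value of the mass-dependent size-evolution exponent
    quoted above: alpha = -0.6 + 0.9 * log10(M*/1e9 Msun)."""
    return -0.6 + 0.9 * math.log10(m_star_msun / 1e9)

def size_ratio(z, m_star_msun):
    """R_e(z) / R_e(0) under the parameterization R_e ∝ (1 + z)^(-alpha)."""
    return (1.0 + z) ** (-alpha_of_mass(m_star_msun))

# A 10^11 Msun galaxy (alpha = 1.2) is predicted to be ~3.7x smaller
# at z = 2 than at z = 0.
```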

  9. Change-in-ratio methods for estimating population size

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.

    2002-01-01

    Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
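    The original two-subclass estimator (equal encounter probabilities, one removal period) is a useful reference point. A minimal sketch, with variable names of our own choosing:

```python
def cir_estimate(p1, p2, r_x, r_total):
    """Two-subclass change-in-ratio estimate of pre-removal population
    size. p1, p2: proportions of subclass x observed before and after
    the removal; r_x: subclass-x individuals removed; r_total: total
    removals. Assumes equal encounter probabilities for both subclasses."""
    if p1 == p2:
        raise ValueError("removals must change the subclass proportion")
    return (r_x - r_total * p2) / (p1 - p2)

# Example: a population of 1000 with 60% in subclass x; removing 200
# subclass-x individuals shifts the observed proportion to 0.5, and the
# estimator recovers N = 1000.
```

    The division by (p1 − p2) makes the estimate unstable when the removal barely shifts the proportions, which is exactly the caution the abstract raises about removals needing to induce substantial changes.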

  10. Rheology of ice I at low stress and elevated confining pressure

    USGS Publications Warehouse

    Durham, W.B.; Stern, L.A.; Kirby, S.H.

    2001-01-01

    Triaxial compression testing of pure, polycrystalline water ice I at conditions relevant to planetary interiors and near-surface environments (differential stresses 0.45 to 10 MPa, temperatures 200 to 250 K, confining pressure 50 MPa) reveals that a complex variety of rheologies and grain structures may exist for ice and that rheology of ice appears to depend strongly on the grain structures. The creep of polycrystalline ice I with average grain size of 0.25 mm and larger is consistent with previously published dislocation creep laws, which are now extended to strain rates as low as 2 × 10⁻⁸ s⁻¹. When ice I is reduced to very fine and uniform grain size by rapid pressure release from the ice II stability field, the rheology changes dramatically. At 200 and 220 K the rheology matches the grain-size-sensitive rheology measured by Goldsby and Kohlstedt [1997, this issue] at 1 atm. This finding dispels concerns that the Goldsby and Kohlstedt results were influenced by mechanisms such as microfracturing and cavitation, processes not expected to operate at elevated pressures in planetary interiors. At 233 K and above, grain growth causes the fine-grained ice to become more creep resistant. Scanning electron microscopy investigation of some of these deformed samples shows that grains have markedly coarsened and the strain hardening can be modeled by normal grain growth and the Goldsby and Kohlstedt rheology. Several samples also displayed very heterogeneous grain sizes and high aspect ratio grain shapes. Grain-size-sensitive creep and dislocation creep coincidentally contribute roughly equal amounts of strain rate at conditions of stress, temperature, and grain size that are typical of terrestrial and planetary settings, so modeling ice dynamics in these settings must include both mechanisms. Copyright 2001 by the American Geophysical Union.
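    The closing point, that both mechanisms must be modeled together, amounts to summing the strain rates of two flow laws acting in parallel. A structural sketch only: the prefactors, exponents and activation energies below are placeholders, not the fitted values of this study or of Goldsby and Kohlstedt:

```python
import math

def total_strain_rate(stress_mpa, grain_size_m, temp_k):
    """Illustrative composite flow law: dislocation creep plus
    grain-size-sensitive (GSS) creep acting in parallel, so their
    strain rates add. All numerical parameters below are hypothetical
    placeholders chosen only to show the structure of the model."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    # hypothetical flow-law parameters
    A_disl, n_disl, Q_disl = 4.0e5, 4.0, 60e3        # grain-size independent
    A_gss, n_gss, p_gss, Q_gss = 3.0e-2, 1.8, 1.4, 49e3  # grain-size sensitive
    rate_disl = A_disl * stress_mpa**n_disl * math.exp(-Q_disl / (R * temp_k))
    rate_gss = (A_gss * stress_mpa**n_gss / grain_size_m**p_gss
                * math.exp(-Q_gss / (R * temp_k)))
    return rate_disl + rate_gss
```

    The parallel-mechanism structure reproduces the qualitative behavior described above: the GSS term dominates at fine grain sizes and low stresses, while dislocation creep takes over as stress rises or grains coarsen.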

  11. Monitoring health interventions – who's afraid of LQAS?

    PubMed Central

    Pezzoli, Lorenzo; Kim, Sung Hye

    2013-01-01

    Lot quality assurance sampling (LQAS) is used to evaluate health services. Subunits of a population (lots) are accepted or rejected according to the number of failures in a random sample (N) of a given lot. If failures are greater than the decision value (d), we reject the lot and recommend corrective actions in the lot (i.e. intervention area); if they are equal to or less than d, we accept it. We used LQAS to monitor coverage during the last 3 days of a meningitis vaccination campaign in Niger. We selected one health area (lot) per day, the one reporting the lowest administrative coverage in the previous 2 days. In the sampling plan we considered: N to be small enough to allow us to evaluate one lot per day, deciding to sample 16 individuals from the selected villages of each health area, using probability proportionate to population size; thresholds and d to vary according to the administrative coverage reported; α ≤5% (meaning that, if we had conducted the survey 100 times, we would have accepted the lot up to five times when real coverage was at an unacceptable level) and β ≤20% (meaning that we would have rejected the lot up to 20 times when real coverage was equal to or above the satisfactory level). We classified all three lots as having acceptable coverage. LQAS appeared to be a rapid, simple, and statistically sound method for in-process coverage assessment. We encourage colleagues in the field to consider using LQAS as a complement to other monitoring techniques such as house-to-house monitoring. PMID:24206650
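    The α and β statements above are the operating characteristics of a binomial acceptance rule, and can be checked directly. A sketch assuming failures follow Binomial(n, 1 − coverage); the 50%/90% coverage thresholds in the example are hypothetical, not the campaign's actual values:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_risks(n, d, cov_bad, cov_good):
    """Operating characteristics of a plan that accepts a lot when the
    number of failures in a sample of n is <= d:
    alpha = P(accept | true coverage is only cov_bad),
    beta  = P(reject | true coverage is cov_good)."""
    alpha = binom_cdf(d, n, 1 - cov_bad)
    beta = 1 - binom_cdf(d, n, 1 - cov_good)
    return alpha, beta

# Hypothetical plan: n = 16, d = 3, unacceptable coverage 50%,
# satisfactory coverage 90%.
alpha, beta = lqas_risks(16, 3, 0.5, 0.9)
```

    With these hypothetical numbers the plan satisfies the article's α ≤ 5% and β ≤ 20% criteria (α ≈ 1%, β ≈ 7%), illustrating how d can be tuned against the reported coverage.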

  12. Monitoring health interventions--who's afraid of LQAS?

    PubMed

    Pezzoli, Lorenzo; Kim, Sung Hye

    2013-11-08

    Lot quality assurance sampling (LQAS) is used to evaluate health services. Subunits of a population (lots) are accepted or rejected according to the number of failures in a random sample (N) of a given lot. If failures are greater than the decision value (d), we reject the lot and recommend corrective actions in the lot (i.e. intervention area); if they are equal to or less than d, we accept it. We used LQAS to monitor coverage during the last 3 days of a meningitis vaccination campaign in Niger. We selected one health area (lot) per day, the one reporting the lowest administrative coverage in the previous 2 days. In the sampling plan we considered: N to be small enough to allow us to evaluate one lot per day, deciding to sample 16 individuals from the selected villages of each health area, using probability proportionate to population size; thresholds and d to vary according to the administrative coverage reported; α ≤5% (meaning that, if we had conducted the survey 100 times, we would have accepted the lot up to five times when real coverage was at an unacceptable level) and β ≤20% (meaning that we would have rejected the lot up to 20 times when real coverage was equal to or above the satisfactory level). We classified all three lots as having acceptable coverage. LQAS appeared to be a rapid, simple, and statistically sound method for in-process coverage assessment. We encourage colleagues in the field to consider using LQAS as a complement to other monitoring techniques such as house-to-house monitoring.

  13. The Effects of Specimen Geometry and Size on the Dynamic Failure of Aluminum Alloy 2219-T8 Under Impact Loading

    NASA Astrophysics Data System (ADS)

    Bolling, Denzell Tamarcus

    A significant amount of research has been devoted to the characterization of new engineering materials. Searching for new alloys which may improve weight, ultimate strength, or fatigue life is just one of the reasons why researchers study different materials. In support of that mission, this study focuses on the effects of specimen geometry and size on the dynamic failure of AA2219 aluminum alloy subjected to impact loading. Using the split Hopkinson pressure bar (SHPB) system, samples of different geometries, including cubic, rectangular, cylindrical, and frustum samples, are loaded at strain rates ranging from 1000 s⁻¹ to 6000 s⁻¹. The deformation properties of the different geometries, including the potential for the formation of adiabatic shear bands, are compared. Overall, the cubic geometry achieves the highest critical strain and maximum stress values at low strain rates, and the rectangular geometry has the highest critical strain and maximum stress at high strain rates. The frustum geometry consistently achieves the lowest maximum stress value compared to the other geometries under equal strain rates. All sample types clearly indicated susceptibility to strain localization at different locations within the sample geometry. Micrograph analysis indicated that adiabatic shear band geometry was influenced by sample geometry, and that specimens with a circular cross section are more susceptible to shear band formation than specimens with a rectangular cross section.

  14. Metabolism of triglyceride-rich lipoproteins and transfer of lipids to high-density lipoproteins (HDL) in vegan and omnivore subjects.

    PubMed

    Vinagre, J C; Vinagre, C G; Pozzi, F S; Slywitch, E; Maranhão, R C

    2013-01-01

    Vegan diet excludes all foodstuffs of animal origin and leads to cholesterol lowering and possibly reduction of cardiovascular disease risk. The aim was to investigate whether vegan diet improves the metabolic pathway of triglyceride-rich lipoproteins, consisting of lipoprotein lipolysis and removal from circulation of the resulting remnants, and to verify whether the diet alters HDL metabolism by changing lipid transfers to this lipoprotein. 21 vegan and 29 omnivorous eutrophic, normolipidemic subjects were intravenously injected with triglyceride-rich emulsions labeled with (14)C-cholesterol oleate and (3)H-triolein: fractional clearance rates (FCR, in min(-1)) were calculated from samples collected during 60 min for radioactive counting. Lipid transfer to HDL was assayed by incubating plasma samples with a donor nanoemulsion labeled with radioactive lipids; % lipids transferred to HDL were quantified in the supernatant after chemical precipitation of non-HDL fractions and nanoemulsion. Serum LDL cholesterol was lower in vegans than in omnivores (2.1 ± 0.8, 2.7 ± 0.7 mmol/L, respectively, p < 0.05), but HDL cholesterol and triglycerides were equal. Cholesteryl ester FCR was greater in vegans than in omnivores (0.016 ± 0.012, 0.003 ± 0.003, p < 0.01), whereas triglyceride FCR was equal (0.024 ± 0.014, 0.030 ± 0.016, N.S.). Cholesteryl ester transfer to HDL was lower in vegans than in omnivores (2.7 ± 0.6, 3.5 ± 1.5%, p < 0.05). Free-cholesterol, triglyceride and phospholipid transfers were equal, as well as HDL size. Remnant removal from circulation, estimated by cholesteryl oleate FCR, was faster in vegans, but the lipolysis process, estimated by triglyceride FCR, was equal. Increased removal of atherogenic remnants and diminution of cholesteryl ester transfer may favor atherosclerosis prevention by vegan diet. Copyright © 2011 Elsevier B.V. All rights reserved.
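    The FCR values above come from timed radioactive counts; under single-compartment (monoexponential) kinetics, the rate is the negative slope of log counts versus time. A minimal sketch of that fit (our illustration, not the authors' analysis code):

```python
import numpy as np

def fractional_clearance_rate(t_min, counts):
    """Estimate FCR (min^-1) from timed plasma samples by a
    least-squares fit of counts(t) = A * exp(-FCR * t), i.e. a
    straight line through log(counts) versus time."""
    slope, _intercept = np.polyfit(np.asarray(t_min, dtype=float),
                                   np.log(np.asarray(counts, dtype=float)), 1)
    return -slope
```

    For example, plasma counts sampled every 10 minutes over the 60-minute window described above would be passed in directly as the two arrays.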

  15. On the mechanism of pulsed laser ablation of phthalocyanine nanoparticles in an aqueous medium

    NASA Astrophysics Data System (ADS)

    Kogan, Boris; Malimonenko, Nicholas; Butenin, Alexander; Novoseletsky, Nicholas; Chizhikov, Sergei

    2018-06-01

    Laser ablation of phthalocyanine nanoparticles has potential for cancer treatment. The ablation is accompanied by the formation of microbubbles and the sublimation of nanoparticles. This was investigated in a liquid medium simulating tissue using optical-acoustic and spectral-luminescent methods. The thresholds for the appearance of microbubbles have been determined as a function of nanoparticle size. For the smallest particles (80 nm) this threshold is equal to about 20–25 mJ cm⁻² and for the largest particles (230 nm) this threshold is equal to about 7 mJ cm⁻². It was estimated that the particle temperature at which bubbles arise is near 145 °C.

  16. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    PubMed

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. 
Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global properties. This apparent paradox is a consequence of the small numbers of simultaneously recorded neurons in experiment: when inferred via small sample sizes, many networks may be indistinguishable despite being globally distinct. We develop a connectivity measure that successfully classifies networks even when estimated locally with a few neurons at a time. We show that data from rat cortex is consistent with a network in which the likelihood of a connection between neurons depends on spatial distance and on nonspatial, asymmetric clustering. Copyright © 2017 the authors 0270-6474/17/378498-13$15.00/0.

  17. Light-scattering efficiency of starch acetate pigments as a function of size and packing density.

    PubMed

    Penttilä, Antti; Lumme, Kari; Kuutti, Lauri

    2006-05-20

    We study theoretically the light-scattering efficiency of paper coatings made of starch acetate pigments. For the light-scattering code we use a discrete dipole approximation method. The coating layer is assumed to consist of roughly equal-sized spherical pigments packed either at a packing density of 50% (large cylindrical slabs) or at 37% or 57% (large spheres). Because the scanning electron microscope images of starch acetate samples show either a particulate or a porous structure, we model the coatings in two complementary ways. The material can be either inside the constituent spheres (particulate case) or outside of those (cheeselike, porous medium). For the packing of our spheres we use either a simulated annealing or a dropping code. We can estimate, among other things, that the ideal sphere diameter is in the range 0.25–0.4 μm.

  18. Light-scattering efficiency of starch acetate pigments as a function of size and packing density

    NASA Astrophysics Data System (ADS)

    Penttilä, Antti; Lumme, Kari; Kuutti, Lauri

    2006-05-01

    We study theoretically the light-scattering efficiency of paper coatings made of starch acetate pigments. For the light-scattering code we use a discrete dipole approximation method. The coating layer is assumed to consist of roughly equal-sized spherical pigments packed either at a packing density of 50% (large cylindrical slabs) or at 37% or 57% (large spheres). Because the scanning electron microscope images of starch acetate samples show either a particulate or a porous structure, we model the coatings in two complementary ways. The material can be either inside the constituent spheres (particulate case) or outside of those (cheeselike, porous medium). For the packing of our spheres we use either a simulated annealing or a dropping code. We can estimate, among other things, that the ideal sphere diameter is in the range 0.25–0.4 μm.

  19. Food intake and growth of Sarsia tubulosa (SARS, 1835), with quantitative estimates of predation on copepod populations

    NASA Astrophysics Data System (ADS)

    Daan, Rogier

    In laboratory tests, food intake by the hydromedusa Sarsia tubulosa, which feeds on copepods, was quantified. Estimates of maximum predation are presented for 10 size classes of Sarsia. Growth rates, too, were determined in the laboratory, at 12°C under ad libitum food conditions. Mean gross food conversion for all size classes averaged 12%. Results of a frequent sampling programme carried out in the Texelstroom (a tidal inlet of the Dutch Wadden Sea) in 1983 showed that growth rates of Sarsia in the field equalled maximum growth under experimental conditions, which suggests that Sarsia in situ can feed at an optimum level. Two estimates of predation pressure in the field matched very closely and led to the conclusion that the impact of Sarsia predation on copepod standing stocks in the Dutch coastal area, including the Wadden Sea, is generally negligible.

  20. An experimental study of reactive turbulent mixing

    NASA Technical Reports Server (NTRS)

    Cooper, L. P.; Marek, C. J.; Strehlow, R. A.

    1977-01-01

    An experimental study of two coaxial gas streams, which react very rapidly, was performed to investigate the mixing characteristics of turbulent flow fields. The center stream consisted of a CO-N2 mixture and the outer annular stream consisted of air vitiated by H2 combustion. The streams were at equal velocity (50 m/sec) and temperature (1280 K). Turbulence measurements were obtained using hot film anemometry. A sampling probe was used to obtain time averaged gas compositions. Six different turbulence generators were placed in the annular passage to alter the flow field mixing characteristics. The turbulence generators affected the bulk mixing of the streams and the extent of CO conversion to different degrees. The effects can be related to the average eddy size (integral scale) and the bulk mixing. Higher extents of conversion of CO to CO2 were found by increasing the bulk mixing and decreasing the average eddy size.

  1. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) the planetary evolution of Mars and its atmosphere, and (D) the potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A) subaqueous or hydrothermal sediments and (1B) hydrothermally altered rocks or low-temperature fluid-altered rocks (equal priority); (2) unaltered igneous rocks; (3) regolith, including airfall dust; and (4) present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere. Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. 
In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.
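The mass budget above can be sanity-checked with simple arithmetic (the numbers are the bounds quoted in the abstract; the calculation itself is only illustrative):

```python
# Rough consistency check of the cache mass budget described above
# (bounds taken from the abstract; purely illustrative arithmetic).
n_lo, n_hi = 30, 40          # recommended number of rock samples
m_lo, m_hi = 15.0, 16.0      # grams per sample considered optimal
total_lo, total_hi = n_lo * m_lo, n_hi * m_hi
# at least 40% of each sample is reserved for future investigations
reserved_lo = 0.40 * total_lo
print(total_lo, total_hi, reserved_lo)  # 450.0 640.0 180.0
```

The rock-only range of roughly 450-640 g brackets the ~500 g quoted for the full cache of rocks, soils, blanks and standards, so the per-sample and total figures are mutually consistent.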

  2. On the Size Dependence of the Chemical Properties of Cloud Droplets: Exploratory Studies by Aircraft

    NASA Astrophysics Data System (ADS)

    Twohy, Cynthia H.

    1992-09-01

    Clouds play an important role in the climate of the earth and in the transport and transformation of chemical species, but many questions about clouds remain unanswered. In particular, the chemical properties of droplets may vary with droplet size, with potentially important consequences. The counterflow virtual impactor (CVI) separates droplets from interstitial particles and gases in a cloud and also can collect droplets in discrete size ranges. As such, the CVI is a useful tool for investigating the chemical components present in droplets of different sizes and their potential interactions with cloud processes. The purpose of this work is twofold. First, the sampling characteristics of the airborne CVI are investigated, using data from a variety of experiments. A thorough understanding of CVI properties is necessary in order to utilize the acquired data judiciously and effectively. Although the impaction characteristics of the CVI seem to be predictable by theory, the airborne instrument is subject to influences that may result in a reduced transmission efficiency for droplets, particularly if the inlet is not properly aligned. Ways to alleviate this problem are being investigated, but currently the imperfect sampling efficiency must be taken into account during data interpretation. Relationships between the physical and chemical properties of residual particles from droplets collected by the CVI and droplet size are then explored in both stratiform and cumulus clouds. The effects of various cloud processes and measurement limitations upon these relationships are discussed. In one study, chemical analysis of different-sized droplets sampled in stratiform clouds showed a dependence of chemical composition on droplet size, with larger droplets containing higher proportions of sodium than non-sea-salt sulfate and ammonium. Larger droplets were also associated with larger residual particles, as expected from simple cloud nucleation theory. 
In a study of marine cumulus clouds, the CVI was combined with a cloud condensation nucleus spectrometer to study the supersaturation spectra of residual particles from droplets. The median critical supersaturation of the droplet residual particles was consistently less than or equal to the median critical supersaturation of ambient particles except at cloud top, where residual particles exhibited a variety of critical supersaturations.

  3. Sampling effort and estimates of species richness based on prepositioned area electrofisher samples

    USGS Publications Warehouse

    Bowen, Z.H.; Freeman, Mary C.

    1998-01-01

    Estimates of species richness based on electrofishing data are commonly used to describe the structure of fish communities. One electrofishing method for sampling riverine fishes that has become popular in the last decade is the prepositioned area electrofisher (PAE). We investigated the relationship between sampling effort and fish species richness at seven sites in the Tallapoosa River system, USA based on 1,400 PAE samples collected during 1994 and 1995. First, we estimated species richness at each site using the first-order jackknife and compared observed values for species richness and jackknife estimates of species richness to estimates based on historical collection data. Second, we used a permutation procedure and nonlinear regression to examine rates of species accumulation. Third, we used regression to predict the number of PAE samples required to collect the jackknife estimate of species richness at each site during 1994 and 1995. We found that jackknife estimates of species richness generally were less than or equal to estimates based on historical collection data. The relationship between PAE electrofishing effort and species richness in the Tallapoosa River was described by a positive asymptotic curve as found in other studies using different electrofishing gears in wadable streams. Results from nonlinear regression analyses indicated that rates of species accumulation were variable among sites and between years. Across sites and years, predictions of sampling effort required to collect jackknife estimates of species richness suggested that doubling sampling effort (to 200 PAEs) would typically increase observed species richness by not more than six species. However, sampling effort beyond about 60 PAE samples typically increased observed species richness by < 10%. We recommend using historical collection data in conjunction with a preliminary sample size of at least 70 PAE samples to evaluate estimates of species richness in medium-sized rivers. 
Seventy PAE samples should provide enough information to describe the relationship between sampling effort and species richness and thus facilitate evaluation of sampling effort.
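For readers unfamiliar with the estimator used above, a minimal sketch of the first-order jackknife richness estimate follows. The fish names and sample sets are invented; the formula S_obs + f1·(n−1)/n, with f1 the number of species seen in exactly one sample, is the standard first-order jackknife:

```python
def jackknife1_richness(species_per_sample):
    """First-order jackknife estimate of species richness.

    species_per_sample: list of sets, each the species observed in one
    PAE sample (hypothetical data structure, not the paper's format).
    """
    n = len(species_per_sample)
    observed = set().union(*species_per_sample)
    # f1 = number of species seen in exactly one sample ("uniques")
    f1 = sum(
        1 for sp in observed
        if sum(sp in s for s in species_per_sample) == 1
    )
    return len(observed) + f1 * (n - 1) / n

samples = [{"bass", "darter"}, {"bass", "shiner"}, {"bass"}]
# 3 species observed; "darter" and "shiner" are uniques (f1 = 2)
print(jackknife1_richness(samples))  # 3 + 2*(2/3) ≈ 4.33
```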

  4. Aggregate distribution and associated organic carbon influenced by cover crops

    NASA Astrophysics Data System (ADS)

    Barquero, Irene; García-González, Irene; Benito, Marta; Gabriel, Jose Luis; Quemada, Miguel; Hontoria, Chiquinquirá

    2013-04-01

    Replacing fallow with cover crops during the non-cropping period seems to be a good alternative to diminish soil degradation by enhancing soil aggregation and increasing organic carbon. The aim of this study was to analyze the effect of replacing fallow with different winter cover crops (CC) on the aggregate distribution and associated C of a Haplic Calcisol. The study area was located in central Spain, under a semi-arid Mediterranean climate. A 4-year field trial was conducted using barley (Hordeum vulgare L.) and vetch (Vicia sativa L.) as CC during the intercropping period of maize (Zea mays L.) under irrigation. All treatments were equally irrigated and fertilized. Maize was directly sown over CC residues previously killed in early spring. Composite samples were collected at 0-5 and 5-20 cm depths in each treatment in autumn 2010. Soil samples were separated by wet sieving into four aggregate-size classes: large macroaggregates (>2000 µm), small macroaggregates (250-2000 µm), microaggregates (53-250 µm), and the silt + clay fraction (<53 µm). Organic carbon associated with each aggregate-size class was measured by the Walkley-Black method. Our preliminary results showed that the aggregate-size distribution was dominated by microaggregates (48-53%) and the <53 µm fraction (40-44%), resulting in a low mean weight diameter (MWD). Both cover crops increased aggregate size, resulting in a higher MWD (0.28 mm) in comparison with fallow (0.20 mm) in the 0-5 cm layer. Barley also showed a higher MWD than fallow in the 5-20 cm layer. Organic carbon concentrations in aggregate-size classes at the top layer followed the order: large macroaggregates > small macroaggregates > microaggregates > silt + clay. Treatments did not influence C concentration in aggregate-size classes. In conclusion, cover crops improved soil structure by increasing the proportion of macroaggregates and the MWD, with barley being more effective than vetch at the subsurface layer.
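The mean weight diameter reported in studies like this one is conventionally the mass-fraction-weighted mean of the size-class midpoint diameters. A minimal sketch, with class midpoints and mass fractions invented for illustration (not the study's data):

```python
# Mean weight diameter (MWD) from wet-sieving fractions: the sum of each
# size class's representative diameter times its mass proportion.
def mean_weight_diameter(class_mean_diam_mm, mass_fraction):
    assert abs(sum(mass_fraction) - 1.0) < 1e-6  # fractions must sum to 1
    return sum(d * w for d, w in zip(class_mean_diam_mm, mass_fraction))

# size classes: >2000, 250-2000, 53-250, <53 um (midpoint diameters in mm;
# the >2000 um class needs an assumed upper bound, here 4 mm)
diams = [3.0, 1.125, 0.1515, 0.0265]
fracs = [0.02, 0.06, 0.50, 0.42]   # hypothetical mass fractions
print(round(mean_weight_diameter(diams, fracs), 3))  # 0.214
```

With microaggregates and silt + clay dominating the mass, the MWD lands near 0.2 mm, which is why shifting even a few percent of mass into macroaggregates raises it noticeably.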

  5. The X-Ray Luminosity Functions of Field Low-Mass X-Ray Binaries in Early-Type Galaxies: Evidence for a Stellar Age Dependence

    NASA Technical Reports Server (NTRS)

    Lehmer, B. D.; Berkeley, M.; Zezas, A.; Alexander, D. M.; Basu-Zych, A.; Bauer, F. E.; Brandt, W. N.; Fragos, T.; Hornschemeier, A. E.; Kalogera, V.; hide

    2014-01-01

    We present direct constraints on how the formation of low-mass X-ray binary (LMXB) populations in galactic fields depends on stellar age. In this pilot study, we utilize Chandra and Hubble Space Telescope (HST) data to detect and characterize the X-ray point source populations of three nearby early-type galaxies: NGC 3115, 3379, and 3384. The luminosity-weighted stellar ages of our sample span approximately 3-10 Gyr. X-ray binary population synthesis models predict that the field LMXBs associated with younger stellar populations should be more numerous and luminous per unit stellar mass than older populations due to the evolution of LMXB donor star masses. Crucially, the combination of deep Chandra and HST observations allows us to test directly this prediction by identifying and removing counterparts to X-ray point sources that are unrelated to the field LMXB populations, including LMXBs that are formed dynamically in globular clusters, Galactic stars, and background AGN/galaxies. We find that the "young" early-type galaxy NGC 3384 (approximately 2-5 Gyr) has an excess of luminous field LMXBs (L(sub X) approximately greater than (5-10) × 10(exp 37) erg s(exp -1)) per unit K-band luminosity (L(sub K); a proxy for stellar mass) compared with the "old" early-type galaxies NGC 3115 and 3379 (approximately 8-10 Gyr), which results in a factor of 2-3 excess of L(sub X)/L(sub K) for NGC 3384. This result is consistent with the X-ray binary population synthesis model predictions; however, our small galaxy sample size does not allow us to draw definitive conclusions on the evolution of field LMXBs in general. We discuss how future surveys of larger galaxy samples that combine deep Chandra and HST data could provide a powerful new benchmark for calibrating X-ray binary population synthesis models.

  6. 40 CFR 85.2224 - Exhaust analysis system-EPA 81.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... probe, moisture separator and analyzers for HC and CO. (2) Dual sample probe requirements. If used, a dual sample probe must provide equal flow in each leg. The equal flow criterion is considered to be met if the flow rate in each leg of the probe (or an identical model) has been measured under two sample...

  7. 40 CFR 85.2224 - Exhaust analysis system-EPA 81.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... probe, moisture separator and analyzers for HC and CO. (2) Dual sample probe requirements. If used, a dual sample probe must provide equal flow in each leg. The equal flow criterion is considered to be met if the flow rate in each leg of the probe (or an identical model) has been measured under two sample...

  8. 40 CFR 85.2224 - Exhaust analysis system-EPA 81.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... probe, moisture separator and analyzers for HC and CO. (2) Dual sample probe requirements. If used, a dual sample probe must provide equal flow in each leg. The equal flow criterion is considered to be met if the flow rate in each leg of the probe (or an identical model) has been measured under two sample...

  9. Cephalopod embryonic shells as a tool to reconstruct reproductive strategies in extinct taxa.

    PubMed

    Laptikhovsky, Vladimir; Nikolaeva, Svetlana; Rogov, Mikhail

    2018-02-01

    An exhaustive study of existing data on the relationship between egg size and maximum size of embryonic shells in 42 species of extant cephalopods demonstrated that these values are approximately equal regardless of taxonomy and shell morphology. Egg size is also approximately equal to mantle length of hatchlings in 45 cephalopod species with rudimentary shells. Paired data on the size of the initial chamber versus embryonic shell in 235 species of Ammonoidea, 46 Bactritida, 13 Nautilida, 22 Orthocerida, 8 Tarphycerida, 4 Oncocerida, 1 Belemnoidea, 4 Sepiida and 1 Spirulida demonstrated that, although there is a positive relationship between these parameters in some taxa, initial chamber size cannot be used to predict egg size in extinct cephalopods; the size of the embryonic shell may be more appropriate for this task. The evolution of reproductive strategies in cephalopods in the geological past was marked by an increasing significance of small-egged taxa, as is also seen in simultaneously evolving fish taxa. © 2017 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.

  10. Simulation-based power calculation for designing interrupted time series analyses of health policy interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis

    2011-11-01

    Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
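A minimal version of such a power simulation can be sketched as follows. This is a simplified stand-in for the paper's segmented AR models: AR(1) noise, a level-change-only intervention effect, and a plain OLS t-test rather than a GLS fit, so the autocorrelation is deliberately ignored at the estimation stage:

```python
import numpy as np

# Simulation-based power estimate for an interrupted time series with
# AR(1) errors (illustrative sketch, not the paper's exact models).
rng = np.random.default_rng(0)

def sim_power(n_pre=24, n_post=24, effect=1.0, rho=0.3, n_sim=500):
    n = n_pre + n_post
    t = np.arange(n)
    step = (t >= n_pre).astype(float)        # level change at intervention
    X = np.column_stack([np.ones(n), t, step])
    hits = 0
    for _ in range(n_sim):
        e = np.zeros(n)
        for i in range(1, n):                # AR(1) noise process
            e[i] = rho * e[i - 1] + rng.standard_normal()
        y = effect * step + e
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        hits += abs(beta[2] / se) > 1.96     # crude OLS t-test (ignores rho)
    return hits / n_sim

print(sim_power())
```

Varying `rho`, `effect`, and the pre/post split reproduces the qualitative patterns described above: power falls as autocorrelation rises and as the design becomes unbalanced.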

  11. Scanning in situ Spectroscopy platform for imaging surgical breast tissue specimens

    PubMed Central

    Krishnaswamy, Venkataramanan; Laughney, Ashley M.; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.

    2013-01-01

    A non-contact localized spectroscopic imaging platform has been developed and optimized to scan 1 × 1 cm² regions of surgically resected breast tissue specimens with ~150-micron resolution. A color corrected, image-space telecentric scanning design maintained a consistent sampling geometry and uniform spot size across the entire imaging field. Theoretical modeling in ZEMAX allowed estimation of the spot size, which is equal at both the center and extreme positions of the field with ~5% variation across the designed waveband, indicating excellent color correction. The spot sizes at the center and an extreme field position were also measured experimentally using the standard knife-edge technique and were found to be within ~8% of the theoretical predictions. Highly localized sampling offered inherent insensitivity to variations in background absorption allowing direct imaging of local scattering parameters, which was validated using a matrix of varying concentrations of Intralipid and blood in phantoms. Four representative, pathologically distinct lumpectomy tissue specimens were imaged, capturing natural variations in tissue scattering response within a given pathology. Variations as high as 60% were observed in the average reflectance and relative scattering power images, which must be taken into account for robust classification performance. Despite this variation, the preliminary data indicates discernible scatter power contrast between the benign vs malignant groups, but reliable discrimination of pathologies within these groups would require investigation into additional contrast mechanisms. PMID:23389199

  12. Mineralogy and characterization of deposited particles of the aero sediments collected in the vicinity of power plants and the open pit coal mine: Kolubara (Serbia).

    PubMed

    Cvetković, Željko; Logar, Mihovil; Rosić, Aleksandra

    2013-05-01

    In this paper, particular attention was paid to the presence of aerosol solid particles, which occurred mainly as a result of exploitation and coal combustion in the thermal power plants of the Kolubara basin. Not all of the particles created by this type of anthropogenic pollution have an equal impact on human health; their impact largely depends on their size and shape. The mineralogical composition and particle size distribution in the samples of aero sediments were defined. The samples were collected close to the power plant and open pit coal mine in the winter and summer periods during the year 2007. The sampling was performed using precipitators placed in eight locations within the territory of the Lazarevac municipality. In order to characterize the sedimentary particles, several methods were applied: microscopy, SEM-EDX and X-ray powder diffraction. The concentration of aero sediments was also determined during the test period. Variety in the mineralogical composition and particle size depends on the position of the measuring sites, the geology of the locations, the annual period of collecting, as well as possible interactions. By applying the mentioned methods, the presence of inhalable and respirable particles, distributed differently in the winter and summer periods, was established. The most common minerals are quartz and feldspar. The presence of gypsum, clay minerals, calcite and dolomite as secondary minerals was determined, as well as the participation of organic and inorganic amorphous matter. The presence of quartz as a toxic mineral has a particular impact on human health.

  13. Evaluating change in bruise colorimetry and the effect of subject characteristics over time.

    PubMed

    Scafide, Katherine R N; Sheridan, Daniel J; Campbell, Jacquelyn; Deleon, Valerie B; Hayat, Matthew J

    2013-09-01

    Forensic clinicians are routinely asked to estimate the age of cutaneous bruises. Unfortunately, existing research on noninvasive methods to date bruises has been mostly limited to relatively small, homogeneous samples or cross-sectional designs. The purpose of this prospective, foundational study was to examine change in bruise colorimetry over time and evaluate the effects of bruise size, skin color, gender, and local subcutaneous fat on that change. Bruises were created by a controlled application of a paintball pellet to 103 adult, healthy volunteers. Daily colorimetry measures were obtained for four consecutive days using the Minolta Chroma-meter(®). The sample was nearly equal by gender and skin color (light, medium, dark). Analysis included general linear mixed modeling (GLMM). Change in bruise colorimetry over time was significant for all three color parameters (L*a*b*), the most notable changes being the decrease in red (a*) and increase in yellow (b*) starting at 24 h. Skin color was a significant predictor for all three colorimetry values, but gender and subcutaneous fat levels were not. Bruise size was a significant predictor and moderator and may have accounted for the lack of effect of gender or subcutaneous fat. Study results demonstrated the ability to model the change in bruise colorimetry over time in a diverse sample of healthy adults. Multiple factors, including skin color and bruise size, must be considered when assessing bruise color in relation to its age. This study supports the need for further research that could build the science to allow more accurate bruise age estimations.

  14. Female reproductive success variation in a Pseudotsuga menziesii seed orchard as revealed by pedigree reconstruction from a bulk seed collection.

    PubMed

    El-Kassaby, Yousry A; Funda, Tomas; Lai, Ben S K

    2010-01-01

    The impact of female reproductive success on the mating system, gene flow, and genetic diversity of the filial generation was studied using a random sample of 801 bulk seed from a 49-clone Pseudotsuga menziesii seed orchard. We used microsatellite DNA fingerprinting and pedigree reconstruction to assign each seed's maternal and paternal parents and directly estimated clonal reproductive success, selfing rate, and the proportion of seed sired by outside pollen sources. Unlike most family array mating system and gene flow studies conducted on natural and experimental populations, which use an equal number of seeds per maternal genotype and thus generate unbiased inferences only on male reproductive success, the random sample we used was representative of the entire seed crop and therefore provided a unique opportunity to draw unbiased inferences on both female and male reproductive success variation. Selfing rate and the number of seed sired by outside pollen sources were found to be a function of female fertility variation. This variation also substantially and negatively affected female effective population size. Additionally, the results provided convincing evidence that the use of clone size as a proxy to fertility is questionable and requires further consideration.

  15. Interactive Video Gaming compared to Health Education in Older Adults with MCI: A Feasibility Study

    PubMed Central

    Hughes, Tiffany F.; Flatt, Jason D.; Fu, Bo; Butters, Meryl A.; Chang, Chung-Chou H.; Ganguli, Mary

    2014-01-01

    Objective We evaluated the feasibility of a trial of Wii interactive video gaming, and its potential efficacy at improving cognitive functioning compared to health education, in a community sample of older adults with neuropsychologically defined mild cognitive impairment (MCI). Methods Twenty older adults were equally randomized to either group-based interactive video gaming or health education for 90 minutes each week for 24 weeks. Although the primary outcomes were related to study feasibility, we also explored the effect of the intervention on neuropsychological performance and other secondary outcomes. Results All 20 participants completed the intervention, and 18 attended at least 80% of the sessions. The majority (80%) of participants were “very much” satisfied with the intervention. Bowling was enjoyed by the most participants and was also rated highest among the games for mental, social and physical stimulation. We observed medium effect sizes for cognitive and physical functioning in favor of the interactive video gaming condition, but these effects were not statistically significant in this small sample. Conclusion Interactive video gaming is feasible for older adults with MCI, and medium effect sizes in favor of the Wii group warrant a larger efficacy trial. PMID:24452845

  16. Evaluating the Validity Indices of the Personality Assessment Inventory-Adolescent Version.

    PubMed

    Meyer, Justin K; Hong, Sang-Hwang; Morey, Leslie C

    2015-08-01

    Past research has established strong psychometric properties of several indicators of response distortion on the Personality Assessment Inventory (PAI). However, to date, it has been unclear whether the response distortion indicators of the adolescent version of the PAI (PAI-A) operate in an equally valid manner. The current study sought to examine several response distortion indicators on the PAI-A to determine their relative efficacy at the detection of distorted responding, including both positive distortion and negative distortion. Protocols of 98 college students asked to either overreport or underreport were compared with 98 age-matched individuals sampled from the clinical standardization sample and the community standardization sample, respectively. Comparisons between groups were accomplished through the examination of effect sizes and receiver operating characteristic curves. All indicators demonstrated the ability to distinguish between actual and feigned responding, including several newly developed indicators. This study provides support for the ability of distortion indicators developed for the PAI to also function appropriately on the PAI-A. © The Author(s) 2014.
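Receiver operating characteristic comparisons of the kind used above reduce to a rank statistic: the AUC equals the probability that a randomly chosen feigned protocol scores above a randomly chosen honest one. A hedged sketch with invented scores (not PAI-A data):

```python
# ROC AUC via the rank-sum (Mann-Whitney) identity: count the fraction of
# (positive, negative) pairs the score orders correctly, ties counting 1/2.
def auc(scores_pos, scores_neg):
    wins = sum(
        (p > q) + 0.5 * (p == q)
        for p in scores_pos for q in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

feigned = [80, 75, 90, 85]   # hypothetical elevated validity-scale scores
honest = [50, 60, 55, 70]
print(auc(feigned, honest))  # 1.0 — perfect separation in this toy data
```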

  17. Dielectric Relaxation In Complex Perovskite Sm(Ni{sub 1/2}Ti{sub 1/2})O{sub 3}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Nishant; Prasad, S.; Sinha, T. P.

    2011-11-22

    The complex perovskite oxide samarium nickel titanate, Sm(Ni{sub 1/2}Ti{sub 1/2})O{sub 3} (SNT), is synthesized by a solid-state reaction technique. X-ray diffraction of the sample at room temperature shows a monoclinic phase. Scanning micrographs of the sample show an average grain size of approximately 0.6 μm. The field dependence of the dielectric response and the loss tangent of the sample are measured in a frequency range from 100 Hz to 1 MHz and in a temperature range from 313 K to 673 K. An analysis of the real and imaginary parts of the dielectric permittivity with frequency is performed, assuming a distribution of relaxation times as confirmed by Cole-Cole plots. The frequency-dependent electrical data are analyzed in the framework of conductivity formalism. The frequency-dependent conductivity data are fitted to the universal power law. All these formalisms provide qualitatively similar relaxation times.
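The universal power law mentioned above (Jonscher's law) has the form σ(ω) = σ_dc + A·ω^n. A small fitting sketch on synthetic data follows; the parameter values are illustrative, not the SNT measurements, and the grid search over the exponent is just a SciPy-free stand-in for a proper nonlinear fit:

```python
import numpy as np

# Jonscher's universal power law, sigma(w) = sigma_dc + A * w**n, fitted
# to synthetic conductivity data (illustrative values only).
def sigma(w, sigma_dc, A, n):
    return sigma_dc + A * w**n

w = np.logspace(2, 6, 50)                  # 100 Hz - 1 MHz frequency grid
y = sigma(w, sigma_dc=1e-8, A=1e-12, n=0.7)

# Crude grid search over the exponent n: for each candidate, the remaining
# parameters (A, sigma_dc) come from a linear fit of y against w**n.
best = min(
    ((n, *np.polyfit(w**n, y, 1)) for n in np.linspace(0.5, 0.9, 41)),
    key=lambda t: np.sum((np.polyval(t[1:], w**t[0]) - y) ** 2),
)
print(best[0])  # recovered exponent, ≈ 0.7
```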

  18. Surface Acoustic Wave Nebulisation Mass Spectrometry for the Fast and Highly Sensitive Characterisation of Synthetic Dyes in Textile Samples

    NASA Astrophysics Data System (ADS)

    Astefanei, Alina; van Bommel, Maarten; Corthals, Garry L.

    2017-10-01

    Surface acoustic wave nebulisation (SAWN) mass spectrometry (MS) is a method to generate gaseous ions compatible with direct MS of minute samples at femtomole sensitivity. To perform SAWN, acoustic waves are propagated through a LiNbO3 sampling chip, and are conducted to the liquid sample, which ultimately leads to the generation of a fine mist containing droplets of nanometre to micrometre diameter. Through fission and evaporation, the droplets undergo a phase change from liquid to gaseous analyte ions in a non-destructive manner. We have developed SAWN technology for the characterisation of organic colourants in textiles. It generates electrospray-ionisation-like ions non-destructively, leaving the chemical structure of the analyte unmodified. The sample size is decreased tenfold to 1000-fold when compared with currently used liquid chromatography-MS methods, with equal or better sensitivity. This work underscores SAWN-MS as an ideal tool for molecular analysis of art objects as it is non-destructive, is rapid, involves minimally invasive sampling and is more sensitive than current MS-based methods.

  19. Drying of Floodplain Forests Associated with Water-Level Decline in the Apalachicola River, Florida - Interim Results, 2006

    USGS Publications Warehouse

    Darst, Melanie R.; Light, Helen M.

    2007-01-01

    Floodplain forests of the Apalachicola River, Florida, are drier in composition today (2006) than they were before 1954, and drying is expected to continue for at least the next 50 years. Drier forest composition is probably caused by water-level declines that occurred as a result of physical changes in the main channel after 1954 and decreased flows in spring and summer months since the 1970s. Forest plots sampled from 2004 to 2006 were compared to forests sampled in the late 1970s (1976-79) using a Floodplain Index (FI) based on species dominance weighted by the Floodplain Species Category, a value that represents the tolerance of tree species to inundation and saturation in the floodplain and consequently, the typical historic floodplain habitat for that species. Two types of analyses were used to determine forest changes over time: replicate plot analysis comparing present (2004-06) canopy composition to late 1970s canopy composition at the same locations, and analyses comparing the composition of size classes of trees on plots in late 1970s and in present forests. An example of a size class analysis would be a comparison of the composition of the entire canopy (all trees greater than 7.5 cm (centimeter) diameter at breast height (dbh)) to the composition of the large canopy tree size class (greater than or equal to 25 cm dbh) at one location. The entire canopy, which has a mixture of both young and old trees, is probably indicative of more recent hydrologic conditions than the large canopy, which is assumed to have fewer young trees. Change in forest composition from the pre-1954 period to approximately 2050 was estimated by combining results from three analyses. The composition of pre-1954 forests was represented by the large canopy size class sampled in the late 1970s. 
The average FI for canopy trees was 3.0 percent drier than the average FI for the large canopy tree size class, indicating that the late 1970s forests were 3.0 percent drier than pre-1954 forests. The change from the late 1970s to the present was based on replicate plot analysis. The composition of 71 replicate plots sampled from 2004 to 2006 averaged 4.4 percent drier than forests sampled in the late 1970s. The potential composition of future forests (2050 or later) was estimated from the composition of the present subcanopy tree size class (less than 7.5 cm and greater than or equal to 2.5 cm dbh), which contains the greatest percentage of young trees and is indicative of recent hydrologic conditions. Subcanopy trees are the driest size class in present forests, with FIs averaging 31.0 percent drier than FIs for all canopy trees. Based on results from all three sets of data, present floodplain forests average 7.4 percent drier in composition than pre-1954 forests and have the potential to become at least 31.0 percent drier in the future. An overall total change in floodplain forests to an average composition 38.4 percent drier than pre-1954 forests is expected within approximately 50 years. The greatest effects of water-level decline have occurred in tupelo-cypress swamps where forest composition has become at least 8.8 percent drier in 2004-06 than in pre-1954 years. This change indicates that a net loss of swamps has already occurred in the Apalachicola River floodplain, and further losses are expected to continue over the next 50 years. Drying of floodplain forests will result in some low bottomland hardwood forests changing in composition to high bottomland hardwood forests. The composition of high bottomland hardwoods will also change, although periodic flooding is still occurring and will continue to limit most of the floodplain to bottomland hardwood species that are adapted to at least short periods of inundation and saturation.
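    The additive combination of the three drying estimates reported above can be reproduced with a short sketch. The percentages are the values stated in the abstract; treating them as additive steps relative to the pre-1954 baseline is an assumption for illustration, not the study's exact method.

```python
# Combine successive percent-drier changes relative to the pre-1954 baseline.
# Values are taken from the abstract; additivity is an assumption.

def total_drying(step_changes):
    """Sum successive percent-drier changes (relative to pre-1954 forests)."""
    return sum(step_changes)

# pre-1954 -> late 1970s (3.0), late 1970s -> 2004-06 (4.4), 2004-06 -> ~2050 (31.0)
present = total_drying([3.0, 4.4])       # present forests vs. pre-1954
future = total_drying([3.0, 4.4, 31.0])  # projected forests vs. pre-1954
```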

  20. Brazilian Soybean Yields and Yield Gaps Vary with Farm Size

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.; Griffin, T. S.; Bragança, A.

    2017-12-01

    Understanding the farm size-specific characteristics of crop yields and yield gaps may help to improve yields by enabling better targeting of technical assistance and agricultural development programs. Linking remote sensing-based yield estimates with property boundaries provides a novel view of the relationship between farm size and yield structure (yield magnitude, gaps, and stability over time). A growing literature documents variations in yield gaps, but largely ignores the role of farm size as a factor shaping yield structure. Research on the inverse farm size-productivity relationship (IR) theory - that small farms are more productive than large ones all else equal - has documented that yield magnitude may vary by farm size, but has not considered other yield structure characteristics. We examined farm size - yield structure relationships for soybeans in Brazil for years 2001-2015. Using out-of-sample soybean yield predictions from a statistical model, we documented 1) gaps between the 95th percentile of attained yields and mean yields within counties and individual fields, and 2) yield stability defined as the standard deviation of time-detrended yields at given locations. We found a direct relationship between soy yields and farm size at the national level, while the strength and the sign of the relationship varied by region. Soybean yield gaps were found to be inversely related to farm size metrics, even when yields were only compared to farms of similar size. The relationship between farm size and yield stability was nonlinear, with mid-sized farms having the most stable yields. The work suggests that farm size is an important factor in understanding yield structure and that opportunities for improving soy yields in Brazil are greatest among smaller farms.
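    The two yield-structure metrics defined above (gap to the 95th percentile, and stability as the standard deviation of detrended yields) can be sketched as follows. This is a generic illustration on synthetic inputs; the function names and the linear detrending choice are assumptions, not the paper's code.

```python
import numpy as np

def yield_gap(yields):
    """Gap between the 95th percentile of attained yields and the mean yield."""
    y = np.asarray(yields, dtype=float)
    return np.percentile(y, 95) - y.mean()

def yield_stability(yields_by_year):
    """Std. dev. of linearly time-detrended yields at one location (lower = more stable)."""
    y = np.asarray(yields_by_year, dtype=float)
    t = np.arange(len(y))
    trend = np.polyval(np.polyfit(t, y, 1), t)
    return (y - trend).std()
```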

  1. Do icon arrays help reduce denominator neglect?

    PubMed

    Garcia-Retamero, Rocio; Galesic, Mirta; Gigerenzer, Gerd

    2010-01-01

    Denominator neglect is the focus on the number of times a target event has happened (e.g., the number of treated and nontreated patients who die) without considering the overall number of opportunities for it to happen (e.g., the overall number of treated and nontreated patients). In 2 studies, we addressed the effect of denominator neglect in problems involving treatment risk reduction where samples of treated and nontreated patients and the relative risk reduction were of different sizes. We also tested whether using icon arrays helps people take these different sample sizes into account. We especially focused on older adults, who are often more disadvantaged when making decisions about their health. Study 1 was conducted on a laboratory sample using a within-subjects design; study 2 was conducted on a nonstudent sample interviewed through the Web using a between-subjects design. The outcome measure was accuracy of understanding risk reduction. Participants often paid too much attention to numerators and insufficient attention to denominators when numerical information about treatment risk reduction was provided. Adding icon arrays to the numerical information, however, drew participants' attention to the denominators and helped them make more accurate assessments of treatment risk reduction. Icon arrays were equally helpful to younger and older adults. Building on previous research showing that problems with understanding numerical information often do not reside in the mind but in the representation of the problem, the results show that icon arrays are an effective method of eliminating denominator neglect.

  2. Liquid chromatography-electrospray ionization tandem mass spectrometry and dynamic multiple reaction monitoring method for determining multiple pesticide residues in tomato.

    PubMed

    Andrade, G C R M; Monteiro, S H; Francisco, J G; Figueiredo, L A; Botelho, R G; Tornisielo, V L

    2015-05-15

    A quick and sensitive liquid chromatography-electrospray ionization tandem mass spectrometry method, using dynamic multiple reaction monitoring and a 1.8-μm particle size analytical column, was developed to determine 57 pesticides in tomato in a 13-min run. The QuEChERS (quick, easy, cheap, effective, rugged, and safe) method was used for sample preparation, and validation was carried out in compliance with EU SANCO guidelines. The method was applied to 58 tomato samples. More than 84% of the compounds investigated showed limits of detection equal to or lower than 5 mg kg(-1). A mild (<20%), medium (20-50%), and strong (>50%) matrix effect was observed for 72%, 25%, and 3% of the pesticides studied, respectively. Eighty-one percent of the pesticides showed recoveries between 70% and 120%. Twelve pesticides were detected in 35 samples, all below the maximum residue levels permitted in the Brazilian legislation; 15 samples exceeded the maximum residue levels established by the EU legislation for methamidophos, 10 for acephate, and four for bromuconazole. Copyright © 2014 Elsevier Ltd. All rights reserved.
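    The matrix-effect banding used above can be expressed as a small classification rule. The thresholds come from the abstract; applying them to the absolute value of the effect, and the function name itself, are illustrative assumptions.

```python
# Band the matrix effect into the three classes named in the abstract.
# Using abs() to handle suppression vs. enhancement is an assumption.

def matrix_effect_class(effect_percent):
    """Classify |matrix effect| as mild (<20%), medium (20-50%), or strong (>50%)."""
    e = abs(effect_percent)
    if e < 20:
        return "mild"
    if e <= 50:
        return "medium"
    return "strong"
```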

  3. VLBI observations of the nucleus of Centaurus A

    NASA Technical Reports Server (NTRS)

    Preston, R. A.; Wehrle, A. E.; Morabito, D. D.; Jauncey, D. L.; Batty, M. J.; Haynes, R. F.; Wright, A. E.; Nicolson, G. D.

    1983-01-01

    VLBI observations of the nucleus of Centaurus A made at 2.3 GHz on baselines with minimum fringe spacings of 0.15 and 0.0027 arcsec are presented. Results show that the nuclear component is elongated with a maximum extent of approximately 0.05 arcsec, which is equivalent to a size of approximately 1 pc at the 5 Mpc distance of Centaurus A. The position angle of the nucleus is found to be 30 + or - 20 degrees, while the ratio of nuclear jet length to width is less than or approximately equal to 20. The nuclear flux density is determined to be 6.8 Jy, while no core component is found with an extent less than or approximately equal to 0.001 arcsec (less than or approximately equal to 0.02 pc) with a flux density of greater than or approximately equal to 20 mJy. A model of the Centaurus A nucleus composed of at least two components is developed on the basis of these results in conjunction with earlier VLBI and spectral data. The first component is an elongated source of approximately 0.05 arcsec (approximately 1 pc) size which contains most of the 2.3 GHz nuclear flux, while the second component is a source of approximately 0.0005 arcsec (approximately 0.01 pc) size which is nearly completely self-absorbed at 2.3 GHz but strengthens at higher frequencies.

  4. Low density, resorcinol-formaldehyde aerogels

    DOEpatents

    Pekala, R.W.

    1988-05-26

    The polycondensation of resorcinol with formaldehyde under alkaline conditions results in the formation of surface-functionalized polymer ''clusters''. The covalent crosslinking of these ''clusters'' produces gels which, when processed under supercritical conditions, produce low density, organic aerogels (density less than or equal to 100 mg/cc; cell size less than or equal to 0.1 microns). The aerogels are transparent, dark red in color, and consist of interconnected colloidal-like particles with diameters of about 100 Å. These aerogels may be further carbonized to form low density carbon foams with cell size of about 0.1 micron. 1 fig., 1 tab.

  5. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
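    The idea behind "resampling by shifts" can be sketched in a few lines: subsample a fine-resolution rain-rate series at a fixed revisit interval, once for each possible phase shift, and take the rms difference between each subsample mean and the continuous-time mean. This is a minimal illustration on a synthetic 1-D series, not the study's implementation.

```python
import numpy as np

def rms_sampling_error(series, interval):
    """RMS difference between shifted-subsample means and the full-series mean.

    `series` is a rain-rate time series at fine resolution (e.g. 15-min data);
    `interval` is the revisit interval in time steps (e.g. 4 steps = 1 h).
    """
    x = np.asarray(series, dtype=float)
    true_mean = x.mean()
    sub_means = np.array([x[s::interval].mean() for s in range(interval)])
    return float(np.sqrt(np.mean((sub_means - true_mean) ** 2)))
```

    A constant series gives zero sampling error, as expected: intermittence in the series is what drives the estimate up.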

  6. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    Summary We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
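    The combined rejection rule above (reject only when E ≥ m and E − C > r) can be illustrated with an exact type I error computation. The sketch below uses a single-stage simplification of the paper's two-stage designs, with 2:1 randomization (n_e = 2·n_c); the specific sample sizes and thresholds in the usage line are arbitrary assumptions.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def type_i_error(n_e, n_c, p0, m, r):
    """Exact P(reject) under the null (success prob p0 in both arms) for the
    combined rule: reject when E >= m and E - C > r.  Single-stage sketch."""
    return sum(
        binom_pmf(e, n_e, p0) * binom_pmf(c, n_c, p0)
        for e in range(n_e + 1)
        for c in range(n_c + 1)
        if e >= m and e - c > r
    )
```

    Tightening m and r trades power against type I error; the paper's contribution is choosing them so that the local type I error stays near 5% without inflating the sample size.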

  7. Microstructure heterogeneity after the ECAP process and its influence on recrystallization in aluminium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wronski, S., E-mail: wronski@fis.agh.edu.pl; Tarasiuk, J., E-mail: tarasiuk@ftj.agh.edu.pl; Bacroix, B., E-mail: brigitte.bacroix@univ-paris13.fr

    The main purpose of the present work is to describe the qualitative and quantitative behaviours of aluminium during high strain plastic deformation and the effect of deformation on the subsequent recrystallization process. An electron backscatter diffraction (EBSD) analysis of aluminium after equal channel angular pressing (ECAP) and recrystallization is presented. To this end, several topological maps were measured for samples processed by 4 and 8 passes and then recrystallized. The processing was conducted with route C. For all samples, distributions of grain size, misorientation, image quality factor (IQ) and texture were computed and then analysed in some detail. - Highlights: ► Describes the microstructure fragmentation in aluminium. ► High strain plastic deformation and the effect of deformation on recrystallization. ► The microstructure fragmentation and its influence on recrystallization. ► Image quality factor and misorientation characteristics are examined using EBSD.

  8. Influence of UFG structure formation on mechanical and fatigue properties in Ti-6Al-7Nb alloy

    NASA Astrophysics Data System (ADS)

    Polyakova, V. V.; Anumalasetty, V. N.; Semenova, I. P.; Valiev, R. Z.

    2014-08-01

    Ultrafine-grained (UFG) Ti alloys have potential applications in osteosynthesis and orthopedics due to their high biocompatibility and increased strength-to-weight ratio. In the current study, a Ti-6Al-7Nb ELI alloy was processed through equal channel angular pressing-conform (ECAP-Conform) and subsequent thermomechanical processing to generate a UFG microstructure. The fatigue properties of the UFG alloy are compared to those of the coarse-grained (CG) alloy. Our study demonstrates that the UFG alloy, with an average grain size of ~180 nm, showed a 35% enhancement of the fatigue endurance limit compared to the coarse-grained alloy. Fatigue striations and dimpled relief were observed on the fracture surfaces of both the UFG and CG samples. However, the fracture surface of the UFG sample looks smoother; fewer secondary micro-cracks and more ductile rupture were also observed, which testifies to the good crack resistance of the UFG alloy after high-cycle fatigue tests.

  9. Rare cancer cell analyzer for whole blood applications: automated nucleic acid purification in a microfluidic disposable card.

    PubMed

    Kokoris, M; Nabavi, M; Lancaster, C; Clemmens, J; Maloney, P; Capadanno, J; Gerdes, J; Battrell, C F

    2005-09-01

    One current challenge facing point-of-care cancer detection is that existing methods make it difficult, time consuming, and too costly to (1) collect relevant cell types directly from a patient sample, such as blood, and (2) rapidly assay those cell types to determine the presence or absence of a particular type of cancer. We present a proof-of-principle method for an integrated, sample-to-result, point-of-care detection device that employs microfluidics technology, accepted assays, and a silica membrane for total RNA purification on a disposable, credit-card-sized laboratory-on-card ("lab card") device in which results are obtained in minutes. Both yield and quality of on-card purified total RNA, as determined by both LightCycler and standard reverse transcriptase amplification of G6PDH and BCR-ABL transcripts, were found to be better than or equal to those of accepted standard purification methods.

  10. Investigation of luminescent properties of LaF3:Nd3+ nanoparticles

    NASA Astrophysics Data System (ADS)

    Wyrwas, Marek; Miluski, Piotr; Zmojda, Jacek; Kochanowicz, Marcin; Jelen, Piotr; Sitarz, Maciej; Dorosz, Dominik

    2015-09-01

    Lanthanum fluoride nanoparticles doped with Nd3+ ions, obtained via a solvothermal method, are presented. The doped nanoparticles were prepared in a two-step method: first, rare-earth chlorides were synthesized from oxides, and these were then used to prepare LaF3 particles. The luminescence spectra show Stark splitting, typical of crystalline materials, at 880 nm, corresponding to the 4F3/2 → 4I9/2 transition, and at 1060 nm, corresponding to the 4F3/2 → 4I11/2 transition. The highest luminescence intensity was achieved for the sample doped with 0.75 wt.% Nd3+, and the longest decay time, which reached 328 μs, for the sample doped with 0.5 wt.%. XRD pattern analysis confirmed that the obtained material consists of crystalline LaF3; the grain size, estimated from Scherrer's formula, was about 25 nm.
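    The Scherrer estimate used above relates crystallite size to XRD peak broadening via D = K·λ / (β·cos θ). A minimal sketch, where the Cu K-alpha wavelength and the peak parameters in the test are illustrative assumptions, not the paper's data:

```python
from math import cos, radians

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; theta is half the 2-theta
    diffraction angle; K ~ 0.9 is the usual shape factor.
    """
    beta = radians(fwhm_deg)
    theta = radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * cos(theta))
```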

  11. Commercial Contract Training, Navy Area VOTEC Support Center (AVSC) Guidelines

    DTIC Science & Technology

    1975-06-01

    either manual or power operated equipment including collators, folders, paper drills, stitchers and cutters, the student will process printed materials...Challenge, model JF or equal). d. Folding machine, size 17-I1/2 x 22-1/2" (Challenge heavy duty model 175 or equal). e. Stitcher , paper (Bostitch model 7

  12. Infrared dynamics of cold atoms on hot graphene membranes

    NASA Astrophysics Data System (ADS)

    Sengupta, Sanghita; Kotov, Valeri N.; Clougherty, Dennis P.

    2016-06-01

    We study the infrared dynamics of low-energy atoms interacting with a sample of suspended graphene at finite temperature. The dynamics exhibits severe infrared divergences order by order in perturbation theory as a result of the singular nature of low-energy flexural phonon emission. Our model can be viewed as a two-channel generalization of the independent boson model with asymmetric atom-phonon coupling. This allows us to take advantage of the exact nonperturbative solution of the independent boson model in the stronger channel while treating the weaker one perturbatively. In the low-energy limit, the exact solution can be viewed as a resummation (exponentiation) of the most divergent diagrams in the perturbative expansion. As a result of this procedure, we obtain the atom's Green function which we use to calculate the atom damping rate, a quantity equal to the quantum sticking rate. A characteristic feature of our results is that the Green's function retains a weak, infrared cutoff dependence that reflects the reduced dimensionality of the problem. As a consequence, we predict a measurable dependence of the sticking rate on graphene sample size. We provide detailed predictions for the sticking rate of atomic hydrogen as a function of temperature and sample size. The resummation yields an enhanced sticking rate relative to the conventional Fermi golden rule result (equivalent to the one-loop atom self-energy), as higher-order processes increase damping at finite temperature.

  13. 21 CFR 165.110 - Bottled water.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., when a composite of analytical units of equal volume from a sample is examined by the method described...)(A) Bottled water shall, when a composite of analytical units of equal volume from a sample is..., and Cosmetic Act, the Food and Drug Administration has determined that bottled water, when a composite...

  14. Investigation on wear and corrosion behavior of equal channel angular pressed aluminium 2014 alloy

    NASA Astrophysics Data System (ADS)

    Divya, S. P.; Yoganandan, G.; Balaraju, J. N.; Srinivasan, S. A.; Nagaraj, M.; Ravisankar, B.

    2018-02-01

    Aluminium 2014 alloy, solutionized at 495 °C and aged at 195 °C, was subjected to equal channel angular pressing (ECAP). Dry sliding wear tests were conducted using a pin-on-disc tribometer under nominal loads of 10 N and 30 N at a constant speed of 2 m/s over 2000 m in order to investigate the wear behavior after ECAP. The coefficient of friction and volume loss decreased after ECAP. The dominant wear mechanisms observed were adhesion and delamination; in addition, oxidation and transfer of Fe from the counter surface to the Al 2014 pin were observed at the higher load. The corrosion behavior was evaluated by potentiodynamic polarization (PDP) and electrochemical impedance spectroscopy (EIS) in 3.5% NaCl solution. The results obtained from PDP showed a higher corrosion potential and lower corrosion current density after ECAP than for the base alloy. EIS showed higher charge transfer resistance after ECAP. Surface morphology after PDP showed decreased pit size and increased oxygen content in the ECAP sample compared with the base alloy.

  15. Preparation and characterization of V/TiO{sub 2} nanocatalyst with magnetic nucleus of iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feyzi, Mostafa; Rafiee, Hamid Reza, E-mail: rafieehr@yahoo.com; Ranjbar, Shahram

    2013-11-15

    Graphical abstract: - Highlights: • Fe-V/TiO{sub 2} nanocatalyst is prepared. • Combination of sol–gel and wetness impregnation methods. • Facile separation of catalyst from medium by magnet. - Abstract: A magnetic composite containing V/TiO{sub 2} was prepared by a combination of sol–gel and wetness impregnation methods. The effects of synthesis temperature, different weight percentages of Fe supported on TiO{sub 2}, vanadium loading, and the heating rate of calcination on the structure and morphology of the nanocatalyst were investigated. The optimum conditions for the synthesized catalyst were 40 wt.% Fe, 15 wt.% V, and a synthesis temperature of 30 °C. Characterization of the catalyst was carried out using XRD, TGA, DSC, SEM, FTIR and N{sub 2} physisorption measurements. The magnetic character of the nanocatalyst was measured using VSM, which showed typical paramagnetic behavior of the sample at room temperature with a saturation magnetization value of 8.283 emu/g. The nanocatalyst has a particle size of about 56 nm and can easily be separated from the medium by a magnet.

  16. Microbiological water quality in a large irrigation system: El Valle del Yaqui, Sonora México.

    PubMed

    Gortáres-Moroyoqui, Pablo; Castro-Espinoza, L; Naranjo, Jaime E; Karpiscak, Martin M; Freitas, Robert J; Gerba, Charles P

    2011-01-01

    The primary objective of this study was to determine the microbial water quality of a large irrigation system and how this quality varies with respect to canal size, impact of nearby communities, and the travel distance from the source in the El Valle del Yaqui, Sonora, México. In this arid region, 220,000 hectares are irrigated, with 80% of the irrigation water being supplied from an extensive irrigation system including three dams on the Yaqui River watershed. The stored water flows to the irrigated fields through two main canal systems (serving the upper and lower Yaqui Valley) and then through smaller lateral canals that deliver the water to the fields. A total of 146 irrigation water samples were collected from 52 sample sites during three sampling events. Not all sites could be accessed on each occasion. All of the samples contained coliform bacteria ranging from 1,140 to 68,670 MPN/100 mL with an arithmetic mean of 11,416. Ninety-eight percent of the samples contained less than 1,000 MPN/100 mL Escherichia coli, with an arithmetic mean of 291 MPN/100 mL. Coliphage were detected in less than 30% of the samples, with an arithmetic average of 141 PFU/100 mL. Enteroviruses, Cryptosporidium oocysts, and Giardia cysts were also detected in the canal systems. No significant difference was found in the water quality due to canal system (upper or lower Yaqui Valley), canal size (main vs. lateral), distance from source, or the vicinity of human habitation (presence of various villages and towns along the length of the canals). There was a significant decrease in coliform (p < 0.011) and E. coli (p < 0.022) concentrations as travel distance increased from the City of Obregón.

  17. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal or better results than a linear sampling, with fewer sampling points and thus less runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
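    The effect of inverse versus linear sampling can be illustrated by placing plane hypotheses uniformly in inverse depth (disparity) rather than uniformly in depth, which concentrates planes near the camera and spaces them out in distant regions. This is a generic sketch of the idea, not the paper's cross-ratio derivation in image space; the depth range and plane count are arbitrary assumptions.

```python
import numpy as np

def inverse_depth_planes(z_near, z_far, n):
    """Depths of n sweep planes spaced uniformly in inverse depth (disparity)."""
    inv = np.linspace(1.0 / z_near, 1.0 / z_far, n)
    return 1.0 / inv

# Ten planes between 1 m and 100 m: dense near the camera, sparse far away.
planes = inverse_depth_planes(1.0, 100.0, 10)
```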

  18. Photon theory hypothesis about photon tunneling microscope's subwavelength resolution

    NASA Astrophysics Data System (ADS)

    Zhu, Yanbin; Ma, Junfu

    1995-09-01

    The foundations for the invention of the photon scanning tunneling microscope (PSTM) are the near-field scanning optical microscope, optical fiber techniques, total internal reflection, highly sensitive opto-electronic detection techniques, computer techniques, etc. Recent research results show that a subwavelength resolution of 1 - 3 nm has been obtained. How can the PSTM's high subwavelength resolution be explained? What is the limit of the PSTM's subwavelength resolution? To resolve these problems, this paper presents a photon theory hypothesis about the PSTM based on the following two basic laws: (1) A photon is not only a carrier of energy and optical information, but also a particle occupying a fixed spatial size. (2) When a photon undergoes reflection, refraction, scattering, etc., only the energy and optical information it carries change; its particle size does not, i.e., g · p_photon = constant. Applying these two basic laws to the PSTM, the 'evanescent field' is in practice a weak photon distribution field, and the detecting fiber tip diameter is in practice a 'gate' whose size controls the number of photons entering the fiber tip. From calculation and inference, the following three conclusions can be drawn: (1) Provided the PSTM's detection system sensitivity is high enough, the diameter D of the detecting fiber tip and the near-field detection distance Z are the two most important factors determining the subwavelength resolution of the PSTM. (2) The limit of the PSTM's resolution is reached under the conditions D = p_photon and Z = p_photon, where p_photon is the size of one photon. (3) The final resolution limit R of the PSTM is lim R = p_photon as D → p_photon and Z → p_photon.

  19. Involuntary vs. voluntary hospital admission. A systematic literature review on outcome diversity.

    PubMed

    Kallert, Thomas W; Glöckner, Matthias; Schützwohl, Matthias

    2008-06-01

    This article systematically reviews the literature on the outcome of acute hospitalization for adult general psychiatric patients admitted involuntarily as compared to patients admitted voluntarily. Inclusion and exclusion criteria qualified 41 out of 3,227 references found in Medline and PSYNDEXplus literature searches for this review. The authors independently rated these articles on six pre-defined indicators of research quality, carried out statistical comparisons ex post facto where not reported, and computed for each adequate result the effect size index d for the comparison of means, and the Phi- or contingency coefficient for cross-tabulated data. Methodological quality of the studies, coming mostly from North American and European countries, showed significant variation and was higher for service-related than for clinical or subjective outcomes. The main deficits were in sample size estimation, lack of clear follow-up time-points, and the absence of standardized instruments for assessing clinical outcomes. Length of stay, readmission risk, and risk of involuntary readmission were equal to or greater for involuntary patients. Involuntary patients showed no increased mortality, but did have higher suicide rates than voluntary patients. Further, involuntary patients demonstrated lower levels of social functioning, and equal levels of general psychopathology and treatment compliance; they were more dissatisfied with treatment and more frequently felt that hospitalization was not justified. Future methodologically sound studies exploring this topic should focus on patient populations not represented here. Further research should also clarify whether the legal admission status is sufficiently valid for differentiating the outcome of acute hospitalization.

  20. Helium accumulation and bubble formation in FeCoNiCr alloy under high fluence He+ implantation

    NASA Astrophysics Data System (ADS)

    Chen, Da; Tong, Y.; Li, H.; Wang, J.; Zhao, Y. L.; Hu, Alice; Kai, J. J.

    2018-04-01

    Face-centered cubic (FCC) high-entropy alloys (HEA), emerging alloys with equal-molar or near equal-molar constituents, show promising radiation damage resistance under heavy ion bombardment, making them potential candidates for structural materials in next-generation nuclear reactors; however, the accumulation of light helium ions, a product of nuclear fission reactions, has not been studied. The present work experimentally studied helium accumulation and bubble formation at implantation temperatures of 523 K, 573 K and 673 K in a homogenized FCC FeCoNiCr HEA, an alloy showing excellent radiation damage resistance under heavy ion irradiation. The size and population density of helium bubbles in the FeCoNiCr samples were quantitatively analyzed through transmission electron microscopy (TEM), and the helium content in the bubbles was estimated from a high-pressure equation of state (EOS). We found that helium diffusion in these conditions was dominated by the self-interstitial/He replacement mechanism, and the corresponding activation energy in FeCoNiCr is comparable with the vacancy migration energy in Ni and austenitic stainless steel. Only 14.3%, 31.4% and 51.4% of the accumulated helium precipitated into helium bubbles at 523 K, 573 K and 673 K, respectively, smaller fractions than in the pure Ni case. Importantly, the small bubble size suggests that FeCoNiCr HEA has a high resistance to helium bubble formation compared with Ni and steels.

  1. 46 CFR 76.15-5 - Quantity, pipe sizes, and discharge rate.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 3 2011-10-01 2011-10-01 false Quantity, pipe sizes, and discharge rate. 76.15-5... PROTECTION EQUIPMENT Carbon Dioxide Extinguishing Systems, Details § 76.15-5 Quantity, pipe sizes, and... dioxide required for each space in cubic feet shall be equal to the gross volume of the space in cubic...

  2. Determination of Slake Durability Index (Sdi) Values on Different Shape of Laminated Marl Samples

    NASA Astrophysics Data System (ADS)

    Ankara, Hüseyin; Çiçek, Fatma; Talha Deniz, İsmail; Uçak, Emre; Yerel Kandemir, Süheyla

    2016-10-01

    The slake durability index (SDI) test is widely used to determine the disintegration characteristics of weak and clay-bearing rocks in geo-engineering problems. However, because sample pieces of different shapes, particularly irregular ones, undergo mechanical breakage during the slaking process, the SDI test has limitations that affect the index values. In addition, the shape and surface roughness of laminated marl samples have a strong influence on the SDI. In this study, a new sample preparation method called the Pasha Method was used to prepare spherical specimens from laminated marl collected from Seyitomer (SLI). The SDI tests were then performed on specimens of equal size and weight in three sets of different shapes: sphere-shaped samples, irregular samples cut parallel to the layers, and irregular samples cut perpendicular to the layers. Index values were determined for the three sets subjected to the SDI test for four cycles. At the end of the fourth cycle, the index values were 98.43, 98.39 and 97.20 %, respectively; the index values of the sphere sample set were thus higher than those of the irregular sample sets.
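
    The index reported above is simple arithmetic: the SDI after a given cycle is the oven-dry mass retained in the test drum expressed as a percentage of the initial oven-dry mass. A minimal sketch (the drum masses below are hypothetical, chosen only so the fourth-cycle value matches the 98.43 % reported for the sphere set):

```python
def slake_durability_index(initial_mass_g, retained_mass_g):
    """SDI after a cycle: oven-dry mass retained in the drum as a
    percentage of the initial oven-dry mass."""
    if retained_mass_g > initial_mass_g:
        raise ValueError("retained mass cannot exceed initial mass")
    return 100.0 * retained_mass_g / initial_mass_g

# Hypothetical drum masses (g): initial, then after cycles 1-4. The final
# value is chosen so the fourth-cycle index matches the sphere set's 98.43 %.
masses = [500.0, 499.2, 498.6, 497.9, 492.15]
indices = [slake_durability_index(masses[0], m) for m in masses[1:]]
print(round(indices[-1], 2))  # 98.43
```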

  3. A Nonparametric K-Sample Test for Equality of Slopes.

    ERIC Educational Resources Information Center

    Penfield, Douglas A.; Koffler, Stephen L.

    1986-01-01

    The development of a nonparametric K-sample test for equality of slopes using Puri's generalized L statistic is presented. The test is recommended when the assumptions underlying the parametric model are violated. This procedure replaces original data with either ranks (for data with heavy tails) or normal scores (for data with light tails).…

  4. THE MASSIVE SATELLITE POPULATION OF MILKY-WAY-SIZED GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez-Puebla, Aldo; Avila-Reese, Vladimir; Drory, Niv, E-mail: apuebla@astro.unam.mx

    2013-08-20

    Several occupational distributions for satellite galaxies more massive than m* ≈ 4 × 10^7 M_sun around Milky-Way (MW)-sized hosts are presented and used to predict the internal dynamics of these satellites as a function of m*. For the analysis, a large galaxy group mock catalog is constructed on the basis of (sub)halo-to-stellar mass relations fully constrained with currently available observations, namely the galaxy stellar mass function decomposed into centrals and satellites, and the two-point correlation functions at different masses. We find that 6.6% of MW-sized galaxies host two satellites in the mass range of the Small and Large Magellanic Clouds (SMC and LMC, respectively). The probabilities of the MW-sized galaxies having one satellite equal to or larger than the LMC, two satellites equal to or larger than the SMC, or three satellites equal to or larger than Sagittarius (Sgr) are ≈ 0.26, 0.14, and 0.14, respectively. The cumulative satellite mass function of the MW, N_s(≥ m*), down to the mass of the Fornax dwarf is within the 1σ distribution of all the MW-sized galaxies. We find that MW-sized hosts with three satellites more massive than Sgr (as the MW) are among the most common cases. However, the most and second most massive satellites in these systems are smaller than the LMC and SMC by roughly 0.7 and 0.8 dex, respectively. We conclude that the distribution N_s(≥ m*) for MW-sized galaxies is quite broad, the particular case of the MW being of low frequency but not an outlier. The halo mass of MW-sized galaxies correlates only weakly with N_s(≥ m*). Thus, it is not possible to accurately determine the MW halo mass by means of its N_s(≥ m*); from our catalog, we constrain a lower limit of 1.38 × 10^12 M_sun at the 1σ level.
    Our analysis strongly suggests that the abundance of massive subhalos should agree with the abundance of massive satellites in all MW-sized hosts, i.e., there is not a missing (massive) satellite problem for the ΛCDM cosmology. However, we confirm that the maximum circular velocity, v_max, of the subhalos of satellites smaller than m* ~ 10^8 M_sun is systematically larger than the v_max inferred from current observational studies of the MW bright dwarf satellites; different from previous works, this conclusion is based on an analysis of the overall population of MW-sized galaxies. Some pieces of evidence suggest that the issue could apply only to satellite dwarfs but not to central dwarfs; environmental processes associated with dwarfs inside host halos, combined with supernova-driven core expansion, could then underlie the lowering of v_max.

  5. [A novel protein equalizer based on single chain variable fragment display M13 phage library for nephropathy patient urine study].

    PubMed

    Zhao, Peng; Tao, Dingyin; Liang, Zhen; Zhang, Lihua; Zhang, Yukui

    2009-05-01

    A novel protein equalizer was developed with a single chain variable fragment (scFv) library-displaying M13 phage covalently bonded on monolithic cryogel. Owing to the great number and variety of displayed scFv fragments, as well as the strong and specific binding between scFv fragments and proteins, this protein equalizer technology is well suited to the pretreatment of complex protein samples. After being dissolved in phosphate buffer solution (PBS), the sample was loaded onto the equalizer five times; the bound proteins were eluted in sequence by 2 mol/L NaCl and 50 mmol/L Gly-HCl (pH 2.5) solution, followed by digestion with thrombin. All proteins or peptides collected from each fraction were further analyzed by reversed-phase liquid chromatography-electrospray ionization tandem mass spectrometry (RPLC-ESI-MS/MS) with a serially coupled long microcolumn. Compared with the untreated samples, the number of identified proteins increased from 142 to 396. Furthermore, sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis showed that the difference in protein concentrations was markedly reduced in the eluate of direct sample loading, and most high-abundance proteins were identified in the NaCl eluate. All these results demonstrate that the novel protein equalizer with the scFv-display M13 phage library immobilized on cryogel could effectively reduce the dynamic range of proteins in complex samples, enabling the identification of more low-abundance proteins.

  6. Work and heat fluctuations in two-state systems: a trajectory thermodynamics formalism

    NASA Astrophysics Data System (ADS)

    Ritort, F.

    2004-10-01

    Two-state models provide phenomenological descriptions of many different systems, ranging from physics to chemistry and biology. We investigate work fluctuations in an ensemble of two-state systems driven out of equilibrium under the action of an external perturbation. We calculate the probability density PN(W) that work equal to W is exerted upon the system (of size N) along a given non-equilibrium trajectory and introduce a trajectory thermodynamics formalism to quantify work fluctuations in the large-N limit. We then define a trajectory entropy SN(W) that counts the number of non-equilibrium trajectories with work equal to W, PN(W) = exp(SN(W)/kBT), and characterizes fluctuations of work trajectories around the most probable value Wmp. A trajectory free energy FN(W) can also be defined, which has a minimum at W = W†, this being the value of the work that has to be efficiently sampled to quantitatively test the Jarzynski equality. Within this formalism a Lagrange multiplier is also introduced, the inverse of which plays the role of a trajectory temperature. Our general solution for PN(W) exactly satisfies the fluctuation theorem of Crooks and allows us to investigate heat fluctuations for a protocol that is invariant under time reversal. The heat distribution is then characterized by a Gaussian component (describing small and frequent heat exchange events) and exponential tails (describing the statistics of large deviations and rare events). For the latter, the width of the exponential tails is related to the aforementioned trajectory temperature. Finite-size effects on the large-N theory and the recovery of work distributions for finite N are also discussed. Finally, we pay particular attention to the case of magnetic nanoparticle systems under the action of a magnetic field H, where work and heat fluctuations are predicted to be observable in ramping experiments in micro-SQUIDs.
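
    The need to sample work values near W† when testing the Jarzynski equality can be illustrated numerically. The sketch below is a standard Gaussian toy model (not from the paper, with β = 1 and ΔF = 0): when the mean dissipated work equals βσ²/2, the equality ⟨exp(-βW)⟩ = exp(-βΔF) holds exactly, and ΔF can be recovered from the exponential work average:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dF, sigma = 1.0, 0.0, 1.0

# For Gaussian work fluctuations with mean dissipated work beta*sigma**2/2,
# the Jarzynski equality <exp(-beta*W)> = exp(-beta*dF) holds exactly.
W = rng.normal(dF + beta * sigma**2 / 2.0, sigma, size=200_000)

# Recover the free-energy difference from the exponential work average.
dF_est = -np.log(np.mean(np.exp(-beta * W))) / beta
print(abs(dF_est - dF) < 0.05)  # True
```

    With rarer sampling of the low-W tail (smaller sample sizes), the estimate of ΔF degrades quickly, which is exactly the sampling problem associated with W†.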

  7. Au nanostructure arrays for plasmonic applications: annealed island films versus nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Lopatynskyi, Andrii M.; Lytvyn, Vitalii K.; Nazarenko, Volodymyr I.; Guo, L. Jay; Lucas, Brandon D.; Chegel, Volodymyr I.

    2015-03-01

    This paper compares the main features of random and highly ordered gold nanostructure arrays (NSA) prepared by thermally annealed island film and nanoimprint lithography (NIL) techniques, respectively. Each substrate possesses a different morphology in terms of plasmonic enhancement. Both methods allow such important features as spectral tuning of the plasmon resonance position depending on the size and shape of the nanostructures; however, their time and cost are quite different. The comparison was performed experimentally and theoretically for a number of samples with different geometrical parameters. Spectral characteristics of the fabricated NSA exhibited a pronounced plasmon peak in the range from 576 to 809 nm for thermally annealed samples and from 606 to 783 nm for samples prepared by NIL. Modelling of the optical response for nanostructures with the typical shapes associated with these techniques (parallelepiped for NIL and semi-ellipsoid for annealed island films) was performed using finite-difference time-domain calculations. Simulations indicated that the electric field enhancement depends on the shape and size of the nanoparticles. As an important point, the distribution of the electric field at so-called `hot spots' was considered. Parallelepiped-shaped nanoparticles were shown to yield maximal enhancement values an order of magnitude greater than their semi-ellipsoid-shaped counterparts; however, both nanoparticle shapes demonstrated comparable effective electric field enhancement values. Optimized Au nanostructures with equivalent diameters ranging from 85 to 143 nm and a height equal to 35 nm were obtained for both techniques, yielding the largest electric field enhancement. The island film thermal annealing method for nanochip fabrication can be considered a possible cost-effective platform for various surface-enhanced spectroscopies, while the NIL-fabricated NSA appear more effective for sensing small-size objects.

  8. Laboratory-based observations of capillary barriers and preferential flow in layered snow

    NASA Astrophysics Data System (ADS)

    Avanzi, F.; Hirashima, H.; Yamaguchi, S.; Katsushima, T.; De Michele, C.

    2015-12-01

    Mounting evidence shows that the effects of capillary gradients and preferential flow on water transmission in snow may play a more important role than previously expected. To observe these processes and contribute to their characterization, we studied the development of capillary barriers and preferential flow patterns in layered snow during cold laboratory experiments. We considered three different layerings (all characterized by a finer-over-coarser texture in grain size) and three different water input rates. Nine samples of layered snow were sieved in a cold laboratory and subjected to a constant supply of dyed tracer. By means of visual inspection, horizontal sectioning and liquid water content (LWC) measurements, the processes of ponding and preferential flow were characterized as a function of texture and water input rate. The dynamics of each sample were replicated using the multi-layer physically based SNOWPACK model. Results show that capillary barriers and preferential flow are relevant processes governing the speed of liquid water in stratified snow. Ponding is associated with peaks in LWC at the boundary between the two layers equal to ~33-36 vol. % when the upper layer is composed of fine snow (grain size smaller than 0.5 mm). The thickness of the ponding layer at the textural boundary is between 0 and 3 cm, depending on sample stratigraphy. Heterogeneity in water transmission increases with grain size, while we do not observe any clear dependency on water input rate. An extensive comparison between LWC profiles observed and simulated by SNOWPACK (using an approximation of the Richards equation) shows that the model estimates the LWC peak over the boundary well, while water speed in snow is underestimated by the chosen water transport scheme.

  9. The filter-feeding ciliates Colpidium striatum and Tetrahymena pyriformis display selective feeding behaviours in the presence of mixed, equally-sized, bacterial prey.

    PubMed

    Thurman, Jill; Parry, Jacqueline D; Hill, Philip J; Laybourn-Parry, Johanna

    2010-10-01

    This study examined whether two ciliates could discriminate between equally-sized bacterial prey in mixture and if so, how selectivity might benefit the ciliate population. Live Klebsiella aerogenes, K. ozaenae and Escherichia coli, expressing different coloured fluorescent proteins, were cultured in such a way as to provide populations containing equally-sized cells (to prevent size-selective grazing taking place) and these prey were fed to each ciliate in 50:50 mixtures. Colpidium striatum selected K. aerogenes over K. ozaenae which itself was selected over E. coli. Tetrahymena pyriformis showed no selectivity between K. aerogenes and E. coli but K. aerogenes was selected over K. ozaenae while E. coli was not. This apparent selection of K. aerogenes over K. ozaenae was sustained in ciliate populations with different feeding histories and when K. aerogenes comprised only 20% of the prey mixture, suggesting possible optimal foraging behaviour. The metabolic benefits for selecting K. aerogenes were identified as possibly being an increase in cell biovolume and yield for C. striatum and T. pyriformis, respectively. The mechanism by which these ciliates selected specific bacterial cells in mixture is currently unknown but the use of live fluorescent bacteria, in prey mixtures, offers an exciting avenue for further investigation of selective feeding by protozoa. Copyright 2010 Elsevier Ltd. All rights reserved.
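
    Prey selectivity of the kind described above is commonly quantified with Chesson's α (the index is not named in the abstract; it is used here only as an illustration), which compares each prey type's proportion in the diet with its availability in the environment:

```python
def chesson_alpha(diet_counts, env_props):
    """Chesson's selectivity index: alpha_i > 1/m (for m prey types)
    indicates positive selection for prey type i."""
    ratios = [d / e for d, e in zip(diet_counts, env_props)]
    total = sum(ratios)
    return [r / total for r in ratios]

# Hypothetical ingestion counts for a 50:50 mixture of two prey types
# (e.g. K. aerogenes vs. K. ozaenae); the counts are illustrative only.
alpha = chesson_alpha([75, 25], [0.5, 0.5])
print(alpha)  # [0.75, 0.25] -> the first prey is positively selected
```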

  10. Size exclusion chromatography with superficially porous particles.

    PubMed

    Schure, Mark R; Moran, Robert E

    2017-01-13

    A comparison is made using size-exclusion chromatography (SEC) of synthetic polymers between fully porous particles (FPPs) and superficially porous particles (SPPs) with similar particle diameters, pore sizes and equal flow rates. Polystyrene molecular weight standards with a mobile phase of tetrahydrofuran are utilized for all measurements, conducted with standard HPLC equipment. Although larger pore volume is traditionally thought to be thermodynamically advantageous in SEC for better separations, SPPs have kinetic advantages, and these are shown to compensate for the loss in pore volume compared to FPPs. The comparison metrics include the elution range (smaller with SPPs), the plate count (larger for SPPs), the production rate of theoretical plates (larger for SPPs) and the specific resolution (larger with FPPs). Advantages of using SPPs for SEC are discussed; similar separations can be conducted faster using SPPs. SEC using SPPs offers peak capacities similar to those with FPPs but with faster operation. This also suggests that SEC conducted in the second dimension of a two-dimensional liquid chromatograph may benefit from reduced run time with equivalently reduced peak width, making SPPs advantageous for sampling the first dimension by the second-dimension separator. Additional advantages are discussed for biomolecules, along with a discussion of optimization criteria for size-based separations. Copyright © 2016 Elsevier B.V. All rights reserved.
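
    The plate-count comparison can be sketched with the usual half-height formula N = 5.54 (tR/w½)²; the retention times and peak widths below are hypothetical, not values from the study:

```python
def plate_count(t_r, w_half):
    """Theoretical plates from retention time and peak width at half height:
    N = 5.54 * (t_r / w_half)**2."""
    return 5.54 * (t_r / w_half) ** 2

# Hypothetical retention data for one polystyrene standard on each column type.
n_fpp = plate_count(t_r=6.0, w_half=0.12)  # fully porous particles
n_spp = plate_count(t_r=5.4, w_half=0.09)  # superficially porous particles
print(n_spp > n_fpp)  # True: higher efficiency despite a narrower elution range
```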

  11. Second look at the spread of epidemics on networks

    NASA Astrophysics Data System (ADS)

    Kenah, Eben; Robins, James M.

    2007-09-01

    In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
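
    In the special case of a Poisson-degree contact network with homogeneous transmissibility, the giant in- and out-components have equal relative size, and the relative final size z of a large epidemic solves the classic fixed-point equation z = 1 - exp(-R0 z). A minimal sketch of that calculation (an illustration of the standard result, not the authors' semidirected construction):

```python
import math

def final_epidemic_size(r0, tol=1e-12):
    """Relative final size of a large epidemic on a Poisson-degree network:
    the nonzero fixed point of z = 1 - exp(-r0 * z) when r0 > 1."""
    z = 0.5
    for _ in range(500):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

print(round(final_epidemic_size(2.0), 4))  # 0.7968: ~80% of the network infected
```

    Below the epidemic threshold (R0 < 1) the iteration collapses to z = 0, i.e., only self-limited outbreaks occur.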

  12. SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Penagaricano, J; Narayanasamy, G

    2016-06-15

    Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distributions to search for new models predicting radiation tissue damage, leading toward a new optimization paradigm for radiotherapy planning. Methods: The aggregation of dose in the organ at risk (OAR) higher than a preset threshold was chosen as the cluster, whose connectivity dictates the cluster structure. Upon selection of the dose threshold, the fractional density, defined as the fraction of voxels in the organ eligible to be part of the cluster, was determined according to the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster searching algorithm was developed to analyze the cluster with various connectivity choices such as 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: The intensive Monte Carlo study demonstrates the anticipated inverse relationship between the MSLC and the cluster connectivity, and the cluster size does not change linearly with fractional density regardless of connectivity type. A transition of the MSLC from initially slow increase to exponential growth was observed from low to high density. The cluster sizes were found to vary within a large range and are relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of tissue damage (percolation cluster), and the clinical outcomes of plans with the same DVH might be potentially different.
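
    The largest-cluster search described above can be sketched as a generic breadth-first search on a random binary occupancy grid. This is an illustrative 6-connectivity (face-adjacent) implementation, not the authors' code, and the grid size and fractional density are arbitrary:

```python
import numpy as np
from collections import deque

def largest_cluster_size(occ):
    """Size of the largest 6-connected (face-adjacent) cluster of occupied
    voxels in a 3D binary array, found by breadth-first search."""
    occ = np.asarray(occ, dtype=bool)
    seen = np.zeros_like(occ)
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    best = 0
    for start in zip(*np.nonzero(occ)):
        if seen[start]:
            continue
        seen[start] = True
        size, queue = 0, deque([start])
        while queue:
            x, y, z = queue.popleft()
            size += 1
            for dx, dy, dz in nbrs:
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= n[i] < occ.shape[i] for i in range(3))
                        and occ[n] and not seen[n]):
                    seen[n] = True
                    queue.append(n)
        best = max(best, size)
    return best

# Occupy each voxel independently with probability equal to a fractional
# density of 0.3, mimicking one Monte Carlo sample drawn from a DVH.
rng = np.random.default_rng(1)
grid = rng.random((10, 10, 10)) < 0.3
print(largest_cluster_size(grid) > 0)  # True
```

    Averaging this size over many random grids at a given density gives the MSLC-versus-density curve discussed in the abstract.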

  13. Body size of young Australians aged five to 16 years.

    PubMed

    Hitchcock, N E; Maller, R A; Gilmour, A I

    1986-10-20

    In 1983-1984, 4578 Perth primary and secondary schoolchildren were studied. The selected sample was broadly representative of the ethnic groups that make up the Perth population and of the different social ranks within that population. The age, sex, weight, height, country of birth of the child and the parents, and occupation of the father were recorded for each subject. Weight, height and body mass index (BMI) increased with age. Age and sex were the most important determinants of body size. However, children of lower social rank and those with a southern European background were over-represented among the overweight children (greater than the 90th centile for BMI), particularly in adolescence. Children with an Asian background who were 11 years of age and younger were over-represented among the underweight children (less than or equal to the 10th centile for BMI). Results from this study indicate a continuing, though small (1.5 cm to 1.6 cm), secular increase in height over the past 13 to 14 years.
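
    The body mass index used above is weight in kilograms divided by the square of height in metres; a quick worked example with illustrative values (not data from the study):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Illustrative values for a hypothetical adolescent.
print(round(bmi(45.0, 1.55), 1))  # 18.7
```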

  14. Livestock production in central Mali: ownership, management and productivity of poultry in the traditional sector.

    PubMed

    Kuit, H G; Traore, A; Wilson, R T

    1986-11-01

    A survey of small-scale poultry production in an urban and two agropastoral systems covered 381 households. Less detailed information was also obtained from a small sample in a transhumant pastoral system. More households owned poultry in the rice (89.5%) than in the millet (81.1%) or urban (57.1%) systems. Domestic fowl were commonest in all systems followed by pigeons, Guinea fowl and then ducks, although the last were absent from the millet zone. Most families kept only one species but there was more diversification in the rice system. Flock sizes were largest in the rice system for fowls, Guinea fowl and pigeons while duck flocks averaged more birds in the urban area. Females predominated in all species except pigeons where sex ratios were about equal. Management practices in relation to housing, feeding, health care and consumption and marketing are described. Productivity figures relating to egg production, egg size, hatchability, growth and mortality are provided.

  15. The species-area relationship, self-similarity, and the true meaning of the z-value.

    PubMed

    Tjørve, Even; Tjørve, Kathleen M Calf

    2008-12-01

    The power model, S = cA^z (where S is number of species, A is area, and c and z are fitted constants), is the model most commonly fitted to species-area data assessing species diversity. We use the self-similarity properties of this model to reveal patterns implied by the z parameter. We present the basic arithmetic leading both to the fraction of new species added when two areas are combined and to the species overlap between two areas of the same size, given a continuous sampling scheme. The fraction of new species resulting from expansion of an area can be expressed as α^z - 1, where α is the expansion factor. Consequently, z-values can be converted to a scale-invariant species overlap between two equally sized areas, since the proportion of species in common between the two areas is 2 - 2^z. Calculating overlap when adding areas of the same size reveals the intrinsic effect of distance assumed by the bisectional scheme. We use overlap-area relationships from empirical data sets to illustrate how answers to the single large or several small reserves (SLOSS) question vary between data sets and with scale. We conclude that species overlap and the effect of distance between sample areas or isolates should be addressed when discussing species-area relationships, and lack of fit to the power model can be caused by its assumption of a scale-invariant overlap relationship.
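
    The two expressions above follow directly from self-similarity of the power model and can be checked in a few lines; the z-value of 0.25 is a typical illustrative choice, not one taken from the paper's data sets:

```python
def new_species_fraction(z, alpha):
    """Fraction of new species added when an area is expanded by a factor
    alpha, under the power model S = c * A**z."""
    return alpha ** z - 1.0

def overlap_fraction(z):
    """Proportion of species shared by two equally sized areas: 2 - 2**z."""
    return 2.0 - 2.0 ** z

# With z = 0.25, doubling an area adds about 19% new species,
# and two equal halves share about 81% of their species.
print(round(new_species_fraction(0.25, 2.0), 3))  # 0.189
print(round(overlap_fraction(0.25), 3))           # 0.811
```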

  16. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
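
    The Chernoff distance mentioned above is -log min over 0 ≤ s ≤ 1 of Tr(ρ^s σ^(1-s)). A numerical sketch for two hypothetical qubit states, minimizing over a coarse grid of s values (an illustration of the definition, not a method from the paper):

```python
import numpy as np

def mat_pow(rho, s):
    """Fractional power of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * w ** s) @ v.conj().T

def chernoff_distance(rho, sigma, grid=201):
    """Quantum Chernoff distance: -log min_{0<=s<=1} Tr(rho^s sigma^(1-s))."""
    vals = [np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1.0 - s)).real
            for s in np.linspace(0.0, 1.0, grid)]
    return -np.log(min(vals))

# Two hypothetical (diagonal, hence commuting) qubit states; for commuting
# states the quantum distance reduces to the classical Chernoff distance.
rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.array([[0.6, 0.0], [0.0, 0.4]])
print(chernoff_distance(rho, sigma) > 0.0)  # True; it is 0 only for rho == sigma
```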

  17. 1.9-um diode-laser-assisted anastomoses in reconstructive microsurgery: preliminary results in 12 patients

    NASA Astrophysics Data System (ADS)

    Mordon, Serge R.; Schoffs, Michel; Martinot, Veronique L.; Buys, Bruno; Patenotre, Philippe; Lesage, Jean C.; Dhelin, Guy

    1998-01-01

    The authors report an original 1.9 μm diode laser-assisted microvascular anastomosis (LAMA) technique in humans. The technique was applied in 12 patients during reconstructive surgery for digital replantations (n = 2), digital revascularizations (n = 3) and free flap transfers (n = 7). Fourteen end-to-end anastomoses (10 arteries, 4 veins) were performed. LAMA was always performed on vessels where a thrombosis would not impede the chance of success of the surgical procedure. LAMA was performed with a 1.9 μm diode laser after placement of 2 equidistant stitches. The diode beam was delivered to the vessel wall through an optical fiber held in a pencil-sized handpiece. The parameters used were: spot size = 400 μm, power = 70 to 220 mW, exposure time = 0.7 to 2 s, mean fluence = 115 J/cm2. The mechanism involved is a thermal effect on the collagen of the adventitia and media, leading to a phenomenon the authors have termed 'heliofusion.' This preliminary trial made it possible to define the modalities of the technique's use in humans. The technique is simple, rapid and easily learned. The equipment is compact, sterilizable and very ergonomic. LAMA does not replace sutures but is complementary, thanks to a reduction in the number of stitches used and to access to surgical areas that are not easily reachable. This study must be completed by a larger-scale study to confirm the technique and its reliability. Other uses could be envisaged on different tissues such as the biliary and urinary tracts, especially under laparoscopic conditions.
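
    The stated parameters are mutually consistent: fluence is delivered energy divided by spot area. A quick check, assuming a uniform circular spot (the 145 mW mid-range power used below is an interpolation for illustration, not a figure stated in the abstract):

```python
import math

def fluence_j_per_cm2(power_w, time_s, spot_diameter_um):
    """Mean fluence = energy / spot area, assuming a uniform circular spot."""
    radius_cm = spot_diameter_um * 1e-4 / 2.0  # um -> cm, diameter -> radius
    return power_w * time_s / (math.pi * radius_cm ** 2)

# 400 um spot, ~145 mW for 1 s reproduces the quoted mean fluence.
print(round(fluence_j_per_cm2(0.145, 1.0, 400.0)))  # 115 (J/cm2)
```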

  18. Psychosocial interventions for children and adolescents after man-made and natural disasters: a meta-analysis and systematic review.

    PubMed

    Brown, R C; Witt, A; Fegert, J M; Keller, F; Rassenhofer, M; Plener, P L

    2017-08-01

    Children and adolescents are a vulnerable group for developing post-traumatic stress symptoms after natural or man-made disasters. In the light of increasing numbers of refugees under the age of 18 years worldwide, there is a significant need for effective treatments. This meta-analytic review investigates specific psychosocial treatments for children and adolescents after man-made and natural disasters. In a systematic literature search using MEDLINE, EMBASE and PsycINFO, as well as hand-searching existing reviews and contacting professional associations, 36 studies were identified. Random- and mixed-effects models were applied to test for average effect sizes and moderating variables. Overall, treatments showed high effect sizes in pre-post comparisons (Hedges' g = 1.34) and medium effect sizes as compared with control conditions (Hedges' g = 0.43). Treatments investigated by at least two studies were cognitive-behavioural therapy (CBT), eye movement desensitization and reprocessing (EMDR), narrative exposure therapy for children (KIDNET) and classroom-based interventions, which showed similar effect sizes. However, studies were very heterogeneous with regard to their outcomes. Effects were moderated by type of profession (a higher level of training leading to higher effect sizes). A number of effective psychosocial treatments for child and adolescent survivors of disasters exist. CBT, EMDR, KIDNET and classroom-based interventions can be equally recommended. Although disasters require immediate reactions and improvisation, future studies with larger sample sizes and rigorous methodology are needed.
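
    The pooled effect sizes above use Hedges' g, which is Cohen's d multiplied by the small-sample correction J = 1 - 3/(4(n1 + n2) - 9). A minimal sketch with hypothetical symptom scores (the group means, SDs and sizes below are invented for illustration):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction
    J = 1 - 3 / (4*(n1 + n2) - 9)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return j * d

# Hypothetical post-intervention symptom scores: control vs. treated group.
g = hedges_g(m1=20.0, sd1=8.0, n1=30, m2=16.0, sd2=8.0, n2=30)
print(round(g, 2))  # 0.49, a medium effect comparable to the pooled 0.43 above
```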

  19. 21 CFR 165.110 - Bottled water.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... (3) Physical quality. Bottled water shall, when a composite of analytical units of equal volume from.... 1 (4) Chemical quality. (i)(A) Bottled water shall, when a composite of analytical units of equal... bottled water, when a composite of analytical units of equal volume from a sample is examined by the...

  20. Appropriate Statistical Analysis for Two Independent Groups of Likert-Type Data

    ERIC Educational Resources Information Center

    Warachan, Boonyasit

    2011-01-01

    The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…

  1. Regional haze case studies in the southwestern U.S—I. Aerosol chemical composition

    NASA Astrophysics Data System (ADS)

    Macias, Edward S.; Zwicker, Judith O.; Ouimette, James R.; Hering, Susanne V.; Friedlander, Sheldon K.; Cahill, Thomas A.; Kuhlmey, Gregory A.; Richards, L. Willard

    Aerosol chemical composition as a function of particle size was determined in the southwestern U.S.A. during four weeks of sampling in June, July and December, 1979 as a part of project VISITA. Samples were collected at two ground stations about 80 km apart near Page (AZ) and in two aircraft flying throughout the region. Several different size-separating aerosol samplers and chemical analysis procedures were intercompared and were used in determining the size distribution and elemental composition of the aerosol. Sulfur was shown to be in the form of water-soluble sulfate, highly correlated with ammonium ion, and with an average [NH₄⁺]/[SO₄²⁻] molar ratio of 1.65. During the summer sampling period, three distinct regimes were observed, each with a different aerosol composition. The first, a 24 h sampling ending 30 June, was characterized by a higher than average value of light scattering due to particles (b_sp) of 24 × 10⁻⁶ m⁻¹ and a fine particulate mass (M_f) of 8.5 μg m⁻³. The fine particle aerosol was dominated by sulfate and carbon. Aircraft measurements showed the aerosol was homogeneous throughout the region at that time. The second regime, 5 July, had the highest average b_sp of 51 × 10⁻⁶ m⁻¹ during the sampling period with M_f of 3.2 μg m⁻³. The fine particle aerosol had nearly equal concentrations of carbon and ammonium sulfate. For all three regimes, enrichment factor analysis indicated fine and coarse particle Cu, Zn, Cl, Br, and Pb and fine particle K were enriched above crustal concentrations relative to Fe, indicating that these elements were present in the aerosol from sources other than wind-blown dust. Particle extinction budgets calculated for the three regimes indicated that fine particles contributed most significantly, with carbon and (NH₄)₂SO₄ making the largest contributions. Fine particle crustal elements including Si did not contribute significantly to the extinction budget during this study.
The December sampling was characterized by very light fine particle loading with two regimes identified. One regime had higher fine mass and sulfate concentrations while the other had low values for all species measured.
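The reported [NH₄⁺]/[SO₄²⁻] molar ratio follows from mass concentrations and molar masses; fully neutralized (NH₄)₂SO₄ would give 2.0, so 1.65 indicates partially neutralized, more acidic sulfate. A sketch of the conversion, with hypothetical mass concentrations chosen to reproduce the reported ratio:

```python
# Molar masses (g/mol)
M_NH4 = 18.04
M_SO4 = 96.06

def molar_ratio(nh4_ug_m3, so4_ug_m3):
    """[NH4+]/[SO4 2-] molar ratio from ambient mass concentrations (ug/m^3)."""
    return (nh4_ug_m3 / M_NH4) / (so4_ug_m3 / M_SO4)

# Hypothetical concentrations chosen to reproduce the reported ratio of ~1.65
ratio = molar_ratio(1.0, 3.22)
```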

  2. Equal-mobility bed load transport in a small, step-pool channel in the Ouachita Mountains

    Treesearch

    Daniel A. Marion; Frank Weirich

    2003-01-01

    Abstract: Equal-mobility transport (EMT) of bed load is more evident than size-selective transport during near-bankfull flow events in a small, step-pool channel in the Ouachita Mountains of central Arkansas. Bed load transport modes were studied by simulating five separate runoff events with peak discharges between 0.25 and 1.34 m³...

  3. Interactive video gaming compared with health education in older adults with mild cognitive impairment: a feasibility study.

    PubMed

    Hughes, Tiffany F; Flatt, Jason D; Fu, Bo; Butters, Meryl A; Chang, Chung-Chou H; Ganguli, Mary

    2014-09-01

    We evaluated the feasibility of a trial of Wii interactive video gaming, and its potential efficacy at improving cognitive functioning compared with health education, in a community sample of older adults with neuropsychologically defined mild cognitive impairment. Twenty older adults were equally randomized to either group-based interactive video gaming or health education for 90 min each week for 24 weeks. Although the primary outcomes were related to study feasibility, we also explored the effect of the intervention on neuropsychological performance and other secondary outcomes. All 20 participants completed the intervention, and 18 attended at least 80% of the sessions. The majority (80%) of participants were "very much" satisfied with the intervention. Bowling was enjoyed by the most participants and was also rated the highest among the games for mental, social, and physical stimulation. We observed medium effect sizes for cognitive and physical functioning in favor of the interactive video gaming condition, but these effects were not statistically significant in this small sample. Interactive video gaming is feasible for older adults with mild cognitive impairment, and medium effect sizes in favor of the Wii group warrant a larger efficacy trial. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Comparison between Measured and Calculated Sediment Transport Rates in North Fork Caspar Creek, California

    NASA Astrophysics Data System (ADS)

    Kim, T. W.; Yarnell, S. M.; Yager, E.; Leidman, S. Z.

    2015-12-01

    Caspar Creek is a gravel-bedded stream located in the Jackson Demonstration State Forest in the coast range of California. The Caspar Creek Experimental Watershed has been actively monitored and studied by the Pacific Southwest Research Station and California Department of Forestry and Fire Protection for over five decades. Although total annual sediment yield has been monitored through time, sediment transport during individual storm events is less certain. At a study site on North Fork Caspar Creek, cross-section averaged sediment flux was collected throughout two storm events in December 2014 and February 2015 to determine if two commonly used sediment transport equations—Meyer-Peter-Müller and Wilcock—approximated observed bedload transport. Cross-section averaged bedload samples were collected approximately every hour during each storm event using a Helley-Smith bedload sampler. Five-minute composite samples were collected at five equally spaced locations along a cross-section and then sieved to half-phi sizes to determine the grain size distribution. The measured sediment flux values varied widely throughout the storm hydrographs and were consistently lower than the calculated values, by up to two orders of magnitude. Armored bed conditions, changing hydraulic conditions during each storm and variable sediment supply may have contributed to the observed differences.
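The first of the two equations named above, Meyer-Peter and Müller, is commonly written in dimensionless form as q_b* = 8(τ* − τ*_c)^1.5 with a critical Shields stress near 0.047. A sketch of that form with illustrative inputs (the coefficient, threshold, and the shear stress and grain size below are textbook defaults, not values from this study):

```python
import math

def mpm_bedload(tau, d50, rho=1000.0, rho_s=2650.0, g=9.81, tau_c_star=0.047):
    """Meyer-Peter & Mueller bedload transport rate per unit width.

    tau : bed shear stress (Pa); d50 : median grain size (m).
    Returns volumetric transport rate qb (m^2/s); 0 below threshold.
    """
    s = rho_s / rho
    tau_star = tau / ((rho_s - rho) * g * d50)   # Shields (dimensionless) stress
    excess = tau_star - tau_c_star
    if excess <= 0:
        return 0.0
    qb_star = 8.0 * excess ** 1.5                # dimensionless transport rate
    # Rescale to a volumetric rate per unit channel width
    return qb_star * math.sqrt((s - 1) * g * d50 ** 3)

qb = mpm_bedload(tau=20.0, d50=0.02)  # illustrative: 20 Pa over 20 mm gravel
```

The hard threshold at τ*_c is one reason calculated rates can diverge sharply from measurements on armored beds, as the study observes.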

  5. Synthesis, characterization and field evaluation of a new calcium-based CO2 absorbent for radial diffusive sampler

    NASA Astrophysics Data System (ADS)

    Cucciniello, Raffaele; Proto, Antonio; Alfano, Davide; Motta, Oriana

    2012-12-01

    In this paper, the use of passive sampling as a powerful approach to monitoring atmospheric CO2 is assessed. A suitable substrate based on calcium-aluminium oxide was synthesized by a process that permits control of the particle size of the CaO/Al-based sorbent. The study shows that hydration of the substrate is an essential part of the process of CO2 absorption and subsequent conversion to carbonate. X-ray diffraction, thermogravimetric analysis and environmental scanning electron microscopy were used to characterize the substrate and to establish the best performance in terms of both particle size and CO2 absorption capacity. Passive samplers for CO2 monitoring were prepared and then tested at laboratory level and in the atmospheric environment. Validation was performed by comparison with an infrared continuous detector. Thermogravimetric analyses, carried out to evaluate the absorbing capability of this new passive device, were in accordance with data collected at the same time by the active continuous analyser. The diffusive sampling rate and the diffusion coefficient of CO2 for this new passive device were also evaluated, yielding 47 ± 3 mL min⁻¹ and 0.0509 ± 0.005 cm² s⁻¹, respectively.
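Given a diffusive sampling rate SR, the time-averaged ambient concentration follows from the absorbed mass as C = m / (SR · t). A sketch using the sampling rate reported above (47 mL min⁻¹) with a hypothetical absorbed mass and exposure time:

```python
def co2_concentration(mass_mg, exposure_min, sampling_rate_ml_min=47.0):
    """Time-averaged CO2 concentration (mg/m^3) from a diffusive sampler.

    C = m / (SR * t); the default SR is the 47 mL/min reported in the paper.
    """
    sampled_volume_m3 = sampling_rate_ml_min * exposure_min * 1e-6  # mL -> m^3
    return mass_mg / sampled_volume_m3

# Hypothetical exposure: 340 mg of CO2 absorbed over one week
c = co2_concentration(340.0, 7 * 24 * 60)
```

With these illustrative inputs the result lands near 720 mg/m³, roughly the mass concentration of ~400 ppm ambient CO2.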

  6. Effect of membrane filtration artifacts on dissolved trace element concentrations

    USGS Publications Warehouse

    Horowitz, Arthur J.; Elrick, Kent A.; Colberg, Mark R.

    1992-01-01

    Among environmental scientists, the current and almost universally accepted definition of dissolved constituents is an operational one; only those materials which pass through a 0.45-μm membrane filter are considered to be dissolved. Detailed laboratory and field studies on Fe and Al indicate that a number of factors associated with filtration, other than just pore size, can substantially alter 'dissolved' trace element concentrations; these include: filter type, filter diameter, filtration method, volume of sample processed, suspended sediment concentration, suspended sediment grain-size distribution, concentration of colloids and colloidally associated trace elements and concentration of organic matter. As such, reported filtered-water concentrations employing the same pore size filter may not be equal. Filtration artifacts may lead to the production of chemical data that indicate seasonal or annual 'dissolved' chemical trends which do not reflect actual environmental conditions. Further, the development of worldwide averages for various dissolved chemical constituents, the quantification of geochemical cycles, and the determination of short- or long-term environmental chemical trends may be subject to substantial errors, due to filtration artifacts, when data from the same or multiple sources are combined. Finally, filtration effects could have a substantial impact on various regulatory requirements.

  7. The effect of membrane filtration artifacts on dissolved trace element concentrations

    USGS Publications Warehouse

    Horowitz, A.J.; Elrick, K.A.; Colberg, M.R.

    1992-01-01

    Among environmental scientists, the current and almost universally accepted definition of dissolved constituents is an operational one: only those materials which pass through a 0.45-μm membrane filter are considered to be dissolved. Detailed laboratory and field studies on Fe and Al indicate that a number of factors associated with filtration, other than just pore size, can substantially alter 'dissolved' trace element concentrations; these include: filter type, filter diameter, filtration method, volume of sample processed, suspended sediment concentration, suspended sediment grain-size distribution, concentration of colloids and colloidally-associated trace elements and concentration of organic matter. As such, reported filtered-water concentrations employing the same pore size filter may not be equal. Filtration artifacts may lead to the production of chemical data that indicate seasonal or annual 'dissolved' chemical trends which do not reflect actual environmental conditions. Further, the development of worldwide averages for various dissolved chemical constituents, the quantification of geochemical cycles, and the determination of short- or long-term environmental chemical trends may be subject to substantial errors, due to filtration artifacts, when data from the same or multiple sources are combined. Finally, filtration effects could have a substantial impact on various regulatory requirements.

  8. Probabilistic maturation reaction norms assessed from mark–recaptures of wild fish in their natural habitat

    PubMed Central

    Olsen, Esben M; Serbezov, Dimitar; Vøllestad, Leif A

    2014-01-01

    Reaction norms are a valuable tool in evolutionary biology. Lately, the probabilistic maturation reaction norm approach, describing probabilities of maturing at combinations of age and body size, has been much applied for testing whether phenotypic changes in exploited populations of fish are mainly plastic or involving an evolutionary component. However, due to typical field data limitations, with imperfect knowledge about individual life histories, this demographic method still needs to be assessed. Using 13 years of direct mark–recapture observations on individual growth and maturation in an intensively sampled population of brown trout (Salmo trutta), we show that the probabilistic maturation reaction norm approach may perform well even if the assumption of equal survival of juvenile and maturing fish does not hold. Earlier studies have pointed out that growth effects may confound the interpretation of shifts in maturation reaction norms, because this method in its basic form deals with body size rather than growth. In our case, however, we found that juvenile body size, rather than annual growth, was more strongly associated with maturation. Viewed against earlier studies, our results also underscore the challenges of generalizing life-history patterns among species and populations. PMID:24967078

  9. Feeding and reproductive patterns of Astyanax intermedius in a headwater stream of Atlantic Rainforest.

    PubMed

    Souza, Ursulla P; Ferreira, Fabio C; Carmo, Michele A F; Braga, Francisco M S

    2015-01-01

    In this paper, we determined diet composition, reproductive periodicity and fecundity of Astyanax intermedius in a headwater stream in a state park in the Atlantic rainforest. We also evaluated the influence of rainfall, water temperature and fish size on niche width and niche overlap. Sampling was conducted monthly throughout one year in the Ribeirão Grande stream, southeastern Brazil. Diet consisted of 31 food items with equal contribution of allochthonous and autochthonous items. Females were larger than males, and the mean sizes at first maturation were 4.44 cm and 3.92 cm, respectively. Based on 212 pairs of mature ovaries, the number of oocytes per female ranged from 538 to 6,727 (mean = 2,688.7). Niche width and niche overlap were related to neither rainfall nor water temperature, and only niche width increased with fish size, suggesting that as fish grow, more items are included in the diet. Our results suggest that A. intermedius fits the profile of a typical opportunistic strategist, which may explain the prevalence of this species in several isolated headwater basins of vegetated Atlantic forested streams where food resources are abundant and distributed throughout the year.

  10. Ecology of exposed sandy beaches in northern Spain: Environmental factors controlling macrofauna communities

    NASA Astrophysics Data System (ADS)

    Lastra, M.; de La Huz, R.; Sánchez-Mata, A. G.; Rodil, I. F.; Aerts, K.; Beloso, S.; López, J.

    2006-02-01

    Thirty-four exposed sandy beaches on the northern coast of Spain (from 42°11' to 43°44'N, and from 2°04' to 8°52'W; ca. 1000 km) were sampled over a range of beach sizes, beach morphodynamics and exposure rates. Ten equally spaced intertidal shore levels along six replicated transects were sampled at each beach. Sediment and macrofauna samples were collected using corers to a depth of 15 cm. Morphodynamic characteristics such as the beach face slope, wave environment, exposure rates, Dean's parameter and Beach State Index were estimated. Biotic results indicated that in all the beaches the community was dominated by isopods, amphipods and polychaetes, mostly belonging to the detritivorous-opportunistic trophic group. The number of intertidal species ranged from 9 to 31, their density being between 31 and 618 individuals m⁻², while individuals per linear metre (m⁻¹) ranged from 4,962 to 172,215. The biomass, calculated as total ash-free dry weight (AFDW), varied from 0.027 to 2.412 g m⁻², and from 3.6 to 266.6 g m⁻¹. Multiple regression analysis indicated that number of species significantly increased with proximity to the wind-driven upwelling zone located to the west, i.e., west-coast beaches hosted more species than east-coast beaches. The number of species increased with decreasing mean grain size and increasing beach length. The density of individuals m⁻² increased with decreasing mean grain size, while biomass m⁻² increased with increasing food availability estimated as chlorophyll-a concentration in the water column of the swash zone. Multiple regression analysis indicated that chlorophyll-a in the water column increased with increasing western longitude. Additional insights provided by single-regression analysis showed a positive relationship between the number of species and chlorophyll-a, while increasing biomass occurred with increasing mean grain size of the beach. 
The results indicate that community characteristics in the exposed sandy beaches studied are affected by physical characteristics such as sediment size and beach length, but also by other factors dependent on coastal processes, such as food availability in the water column.

  11. Is the textural classification built on sand?

    USDA-ARS?s Scientific Manuscript database

    In 1967, the Committee of the Soil Science Society of America noted that the current system of particle size boundaries arose due to geographic accident. The committee noted that there is “no narrowly definable natural particle size boundaries that would be equally significant in all soil materials...

  12. Advancing Research on Racial–Ethnic Health Disparities: Improving Measurement Equivalence in Studies with Diverse Samples

    PubMed Central

    Landrine, Hope; Corral, Irma

    2014-01-01

    To conduct meaningful, epidemiologic research on racial–ethnic health disparities, racial–ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial–ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial–ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race–ethnicity. Those correlates render the samples, methods, and measures incomparable across racial–ethnic groups, and diminish the ability to attribute health differences discovered to race–ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial–ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. PMID:25566524

  13. Environmental assessment of an enhanced-oil-recovery steam generator equipped with a low-NOx burner. Volume 2. Data supplement. Final report, January 1984-January 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castaldini, C.; Waterland, L.R.; Lips, H.I.

    1986-02-01

    The report is a compendium of detailed test sampling and analysis data obtained in field tests of an enhanced-oil-recovery steam generator (EOR steamer) equipped with a MHI PM low-NOx crude oil burner. Test data reported include equipment calibration records, steamer operating data, and complete flue-gas emission results. Flue-gas emission measurements included continuous monitoring for criteria pollutants; onsite gas chromatography (GC) for volatile hydrocarbons (C1-C6); Methods 5/8 sampling for particulate and SO₂ and SO₃ emissions; source assessment sampling system (SASS) for total organics in two boiling-point ranges (100 to 300 °C and ≥300 °C), organic compound category information using infrared spectrometry (IR), and specific quantitation of the semivolatile organic priority pollutants using gas chromatography/mass spectrometry (GC/MS); Andersen impactor train measurements of emitted particle-size distribution; and N₂O emissions by gas chromatography/electron-capture detector (GC/ECD).

  14. Uranium mobility during interaction of rhyolitic obsidian, perlite and felsite with alkaline carbonate solution: T = 120 °C, P = 210 kg/cm²

    USGS Publications Warehouse

    Zielinski, Robert A.

    1979-01-01

    Well-characterized samples of rhyolitic obsidian, perlite and felsite from a single lava flow are leached of U by alkaline oxidizing solutions under open-system conditions. Pressure, temperature, flow rate and solution composition are held constant in order to evaluate the relative importance of differences in surface area and crystallinity. Under the experimental conditions U removal from crushed glassy samples proceeds by a mechanism of glass dissolution in which U and silica are dissolved in approximately equal weight fractions. The rate of U removal from crushed glassy samples increases with decreasing average grain size (surface area). Initial rapid loss of a small component (≈ 2.5%) of the total U from crushed felsite, followed by much slower U loss, reflects variable rates of attack of numerous uranium sites. The fractions of U removed during the experiment ranged from 3.2% (felsite) to 27% (perlite). An empirical method for evaluating the relative rate of U loss from contemporaneous volcanic rocks is presented which incorporates leaching results and rock permeability data.

  15. How severe plastic deformation at cryogenic temperature affects strength, fatigue, and impact behaviour of grade 2 titanium

    NASA Astrophysics Data System (ADS)

    Mendes, Anibal; Kliauga, Andrea M.; Ferrante, Maurizio; Sordi, Vitor L.

    2014-08-01

    Samples of grade 2 Ti were processed by Equal Channel Angular Pressing (ECAP), either alone or followed by further deformation by rolling at room temperature or at 170 K. The main interest of the present work was the evaluation of the effect of cryogenic rolling on tensile strength, fatigue limit and Charpy impact absorbed energy. Results show a progressive improvement of strength and endurance limit in the following order: ECAP; ECAP followed by room temperature rolling; and ECAP followed by cryogenic rolling. From the examination of the fatigued samples, a ductile fracture mode was inferred in all cases; also, the sample processed by cryogenic rolling showed very small and shallow dimples and a small fracture zone, confirming the role of strength in the fatigue behaviour. The Charpy impact energy followed a similar pattern, with the exception that ECAP produced only a small improvement over the coarse-grained material. The efficiency of cryogenic rolling is attributed to the reduced grain size and the combination of strength and ductility. The production of favourable deformation textures must also be considered.

  16. Two-sample binary phase 2 trials with low type I error and low sample size.

    PubMed

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A

    2017-04-30

    We address the design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p₀ and the alternative that it is p₀ among controls and p₁ > p₀ among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E - C > r, to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
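For a combined rule of the kind described (E ≥ m and E - C > r), the type I error can be computed exactly by summing independent binomial probabilities under the null. A sketch of that calculation (the design parameters m, r and the sample sizes below are illustrative, not taken from the paper, and the single-stage case ignores the two-stage stopping structure):

```python
from math import comb

def reject_prob(p_e, p_c, n_e, n_c, m, r):
    """Exact P(reject) for the rule E >= m AND E - C > r, where
    E ~ Bin(n_e, p_e) and C ~ Bin(n_c, p_c) are independent."""
    def pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)
    total = 0.0
    for e in range(m, n_e + 1):          # one-sample condition E >= m
        for c in range(n_c + 1):
            if e - c > r:                # two-sample condition E - C > r
                total += pmf(e, n_e, p_e) * pmf(c, n_c, p_c)
    return total

# Illustrative 2:1 design: 40 experimental vs 20 control, null rate 0.2
alpha = reject_prob(0.2, 0.2, 40, 20, m=13, r=6)
```

Because the combined rule adds a constraint, its type I error is strictly below that of the one-sample rule E ≥ m alone, which is the mechanism the authors exploit.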

  17. Continuous-time quantum Monte Carlo calculation of multiorbital vertex asymptotics

    NASA Astrophysics Data System (ADS)

    Kaufmann, Josef; Gunacker, Patrik; Held, Karsten

    2017-07-01

    We derive the equations for calculating the high-frequency asymptotics of the local two-particle vertex function for a multiorbital impurity model. These relate the asymptotics for a general local interaction to equal-time two-particle Green's functions, which we sample using continuous-time quantum Monte Carlo simulations with a worm algorithm. As specific examples we study the single-orbital Hubbard model and the three t₂g orbitals of SrVO₃ within dynamical mean-field theory (DMFT). We demonstrate how the knowledge of the high-frequency asymptotics reduces the statistical uncertainties of the vertex and further eliminates finite-box-size effects. The proposed method benefits the calculation of nonlocal susceptibilities in DMFT and diagrammatic extensions of DMFT.

  18. New Trends in Gender and Mathematics Performance: A Meta-Analysis

    PubMed Central

    Lindberg, Sara M.; Hyde, Janet Shibley; Petersen, Jennifer L.; Linn, Marcia C.

    2010-01-01

    In this paper, we use meta-analysis to analyze gender differences in recent studies of mathematics performance. First, we meta-analyzed data from 242 studies published between 1990 and 2007, representing the testing of 1,286,350 people. Overall, d = .05, indicating no gender difference, and VR = 1.08, indicating nearly equal male and female variances. Second, we analyzed data from large data sets based on probability sampling of U.S. adolescents over the past 20 years: the NLSY, NELS88, LSAY, and NAEP. Effect sizes for the gender difference ranged between −0.15 and +0.22. Variance ratios ranged from 0.88 to 1.34. Taken together these findings support the view that males and females perform similarly in mathematics. PMID:21038941
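The two summary statistics used above, the standardized mean difference d and the variance ratio VR, can both be computed from raw group scores. A minimal sketch with hypothetical male and female scores (not data from the meta-analysis):

```python
import statistics

def cohens_d_and_vr(group1, group2):
    """Effect size d (pooled SD) and variance ratio VR for two groups."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    d = (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd
    return d, v1 / v2

# Hypothetical scores: group1 = males, group2 = females
d, vr = cohens_d_and_vr([10, 12, 14, 16], [11, 12, 13, 14])
```

A d near 0 with VR near 1, as the paper reports, would indicate groups matched in both center and spread.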

  19. Equal Insistence of Proportion of Colour on a 2D Surface

    NASA Astrophysics Data System (ADS)

    Staig-Graham, B. N.

    2006-06-01

    Katz conducted experiments on Insistence and Equal Insistence, using an episcotister and chromatic and achromatic papers, which he viewed under different intensities of a light source and under chromatic illumination. His principle of Equal Insistence, combined with Goethe's reputed proportions of surface colours according to their luminosity and Strzeminski's concept of Unism in painting, inspires the author's current painting practice. However, a whole new route of research has been opened by the introduction of Time as a phenomenon of Equal Insistence and Image Perception Fading, under controlled conditions of observer movement at different distances, viewing angles, and illumination. Visual knowledge of Equal Insistence indicates, so far, several apparent changes to the properties of surface colours, and its actual effect upon the shape and size of paintings and symbolism. Typical of the investigation are the achromatic images of an elephant and a mouse.

  20. Are numbers, size and brightness equally efficient in orienting visual attention? Evidence from an eye-tracking study.

    PubMed

    Bulf, Hermann; Macchi Cassia, Viola; de Hevia, Maria Dolores

    2014-01-01

    A number of studies have shown strong relations between numbers and oriented spatial codes. For example, perceiving numbers causes spatial shifts of attention depending upon numbers' magnitude, in a way suggestive of a spatially oriented, mental representation of numbers. Here, we investigated whether this phenomenon extends to non-symbolic numbers, as well as to the processing of the continuous dimensions of size and brightness, exploring whether different quantitative dimensions are equally mapped onto space. After a numerical (symbolic Arabic digits or non-symbolic arrays of dots; Experiment 1) or a non-numerical cue (shapes of different size or brightness level; Experiment 2) was presented, participants' saccadic response to a target that could appear either on the left or the right side of the screen was registered using an automated eye-tracker system. Experiment 1 showed that, both in the case of Arabic digits and dot arrays, right targets were detected faster when preceded by large numbers, and left targets were detected faster when preceded by small numbers. Participants in Experiment 2 were faster at detecting right targets when cued by large-sized shapes and left targets when cued by small-sized shapes, whereas brightness cues did not modulate the detection of peripheral targets. These findings indicate that looking at a symbolic or a non-symbolic number induces attentional shifts to a peripheral region of space that is congruent with the numbers' relative position on a mental number line, and that a similar shift in visual attention is induced by looking at shapes of different size. More specifically, results suggest that, while the dimensions of number and size spontaneously map onto an oriented space, the dimension of brightness seems to be independent at a certain level of magnitude elaboration from the dimensions of spatial extent and number, indicating that not all continuous dimensions are equally mapped onto space.

  1. Classifier performance prediction for computer-aided diagnosis using a limited dataset.

    PubMed

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-04-01

    In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. The Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). 
Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under this type of conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
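The 0.632 bootstrap mentioned above combines the optimistic apparent (resubstitution) error with the pessimistic out-of-bag error. A sketch of the basic estimator with a toy one-dimensional threshold classifier (the .632+ variant adds a no-information-rate correction and is omitted here; the classifier and data are illustrative):

```python
import random

def bootstrap_632(x, y, train_fn, err_fn, n_boot=200, seed=1):
    """0.632 bootstrap estimate of classification error:
    err_632 = 0.368 * apparent error + 0.632 * mean out-of-bag error."""
    rng = random.Random(seed)
    n = len(x)
    apparent = err_fn(train_fn(x, y), x, y)   # train and test on full sample
    oob_errs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]        # bootstrap sample
        chosen = set(idx)
        oob = [i for i in range(n) if i not in chosen]    # out-of-bag cases
        if not oob:
            continue
        model = train_fn([x[i] for i in idx], [y[i] for i in idx])
        oob_errs.append(err_fn(model, [x[i] for i in oob],
                               [y[i] for i in oob]))
    return 0.368 * apparent + 0.632 * sum(oob_errs) / len(oob_errs)

# Toy classifier: threshold at the sample mean, on well-separated classes
x = [0, 1, 2, 3, 10, 11, 12, 13]
y = [0, 0, 0, 0, 1, 1, 1, 1]
train = lambda xs, ys: sum(xs) / len(xs)
err = lambda t, xs, ys: sum(int(xi > t) != yi
                            for xi, yi in zip(xs, ys)) / len(xs)
e632 = bootstrap_632(x, y, train, err)
```

On average, each bootstrap sample contains about 63.2% of the distinct cases, which motivates the 0.368/0.632 weighting.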

  2. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
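The error model described in this record, two independent error sources combined in quadrature, with a ~30 percent reduction in the time-averaging error per doubling of transits, can be sketched numerically. The error values below are hypothetical illustrations, not data from the study.

```python
import math

def total_uncertainty(spatial_error, temporal_error):
    """Combine two independent error sources in quadrature."""
    return math.sqrt(spatial_error**2 + temporal_error**2)

def temporal_error_after_doubling(err, doublings):
    """Each doubling of the number of transits scales the time-averaging
    error by 1/sqrt(2), i.e. a ~30 percent reduction per doubling."""
    return err / (2 ** (doublings / 2))

# Hypothetical relative errors (fractions), for illustration only.
spatial = 0.10   # from a limited number of verticals
temporal = 0.12  # from minimal time averaging
print(total_uncertainty(spatial, temporal))        # ~0.156
print(temporal_error_after_doubling(temporal, 1))  # ~0.085
```

Because the two errors add in quadrature, reducing the larger of the two (here, the time-averaging error) has the greater effect on the total, which is consistent with the record's observation that added verticals, contributing both spatial and temporal information, are the more effective lever.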

  3. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
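The probability-of-superiority statistic A described here is conventionally estimated by counting pairwise wins and ties between the two groups, which is numerically identical to the area under the empirical ROC curve. A minimal sketch (the sample values are made up):

```python
def prob_superiority(x, y):
    """A = P(X > Y) + 0.5 * P(X = Y), estimated over all len(x) * len(y)
    pairs; equal to the area under the empirical ROC curve."""
    wins = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (wins + 0.5 * ties) / (len(x) * len(y))

# Two hypothetical groups: 8 wins and 1 tie out of 9 pairs.
print(prob_superiority([3, 4, 5], [1, 2, 3]))  # (8 + 0.5) / 9 ≈ 0.944
```

A = 0.5 indicates no separation between the groups; A = 1.0 indicates that every member of the first group scores above every member of the second.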

  4. Why Barbie Feels Heavier than Ken: The Influence of Size-Based Expectancies and Social Cues on the Illusory Perception of Weight

    ERIC Educational Resources Information Center

    Dijker, Anton J. M.

    2008-01-01

In order to examine the relative influence of size-based expectancies and social cues on the perceived weight of objects, two studies were performed using dolls of equal weight differing in sex-related and age-related vulnerability or physical strength cues. To increase variation in perceived size, stimulus objects were viewed through optical…

  5. Wear Properties of ECAP-Processed AM80 Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Gopi, K. R.; Shivananda Nayaka, H.; Sahu, Sandeep

    2017-07-01

AM80 magnesium alloy was subjected to equal-channel angular pressing (ECAP), and microstructural evolution was studied using a scanning electron microscope (SEM). Grain size was found to decrease to approximately 3 µm after four passes. An increase in the number of ECAP passes led to a corresponding increase in hardness of the processed samples. Unprocessed and ECAP-processed samples were subjected to wear testing using a pin-on-disk wear test machine to study the wear behavior. Effects of varying loads (30 and 40 N) and sliding distances (2500 and 5000 m) were studied. The results showed a reduction in wear mass loss for the ECAP-processed samples in comparison with the unprocessed condition. The coefficient of friction (COF) was studied for different loads, and improved COF values were observed for ECAP-processed samples compared to the unprocessed condition. Worn surfaces were studied using SEM and an energy-dispersive x-ray spectrometer, and they exhibited plastic deformation, delamination, plowing, wear debris and oxidation in the sliding direction. X-ray diffraction analysis conducted on the worn surfaces to identify the phases revealed the presence of magnesium oxide and magnesium aluminum oxide, which led to oxidation wear in the sliding direction. The wear mechanism was found to be a combination of abrasive and oxidation wear.

  6. Geographic structure of genetic variation in the widespread woodland grass Milium effusum L. A comparison between two regions with contrasting history and geomorphology.

    PubMed

    Tyler, Torbjörn

    2002-12-01

    Allozyme variation in the forest grass Milium effusum L. was studied in 21-23 populations within each of two equally sized densely sampled areas in northern and southern Sweden. In addition, 25 populations from other parts of Eurasia were studied for comparison. The structure of variation was analysed with both diversity statistics and measures based on allelic richness at a standardised sample size. The species was found to be highly variable, but no clear geographic patterns in the distribution of alleles or in overall genetic differentiation were found, either within the two regions or within the whole sample. Thus, no inferences about the direction of postglacial migration could be made. Obviously, migration and gene flow must have taken place in a manner capable of randomising the distribution of alleles. However, there were clear differences in levels and structuring of the variation between the two regions. Levels of variation, both in terms of genetic diversity and allelic richness, were lower in northern Sweden as compared with southern Sweden. In contrast, different measures of geographic structure all showed higher levels of population differentiation in the northern region. This is interpreted as due to different geomorphological conditions in the two regions, creating a relatively continuous habitat and gene flow in the southern region as compared with the northern region where the species, although common, is confined to narrow and mutually isolated corridors in the landscape.
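The "allelic richness at a standardised sample size" mentioned in this record is conventionally computed by rarefaction: for each allele, the probability that it appears at least once in a random subsample of g gene copies. A minimal sketch of that calculation (the allele counts and subsample size g are hypothetical, not data from the study):

```python
from math import comb

def rarefied_allelic_richness(allele_counts, g):
    """Expected number of distinct alleles in a random subsample of g gene
    copies, given observed per-allele counts (rarefaction). comb(n, k) is
    zero when k > n, so an allele certain to be drawn contributes exactly 1.
    """
    N = sum(allele_counts)
    return sum(1 - comb(N - n_i, g) / comb(N, g) for n_i in allele_counts)

# Hypothetical allele counts at one locus in two populations; standardizing
# both to g = 8 sampled gene copies makes their richness comparable.
print(rarefied_allelic_richness([10, 5, 1], 8))
print(rarefied_allelic_richness([8, 8], 8))
```

Standardizing to a common g is what makes richness comparable between the densely sampled Swedish regions and the smaller Eurasian comparison samples, since raw allele counts grow with sample size.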

  7. [Thin-section computed tomography of the bronchi; 2. Right upper lobe and left upper division].

    PubMed

    Matsuoka, Y; Ookubo, T; Ohtomo, K; Nishikawa, J; Kojima, K; Oyama, K; Yoshikawa, K; Iio, M

    1990-02-01

Thin-section (2 mm) contiguous computed tomographic (CT) scans were obtained through the bronchi of the right upper lobe and the left upper division in 30 patients. All segmental bronchi were identified. The right subsegmental bronchi were identified in 100% of cases, and the left subsegmental bronchi in 97%. The most common type of orifice of the right bronchus was trifurcated (53%), the most common extension of B1 was apicoanterior (50%), and the size of B2b was most often equal to B3a (63%). The most common extension of the left B3 was subapicoanterior (38%), and the size of B1+2c was most often equal to B3a (62%).

  8. Effect of the size of charged spherical macroparticles on their electrostatic interaction in an equilibrium plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippov, A. V., E-mail: fav@triniti.ru; Derbenev, I. N.

The effect of the size of two charged spherical macroparticles on their electrostatic interaction in an equilibrium plasma is analyzed within the linearized Poisson–Boltzmann model. It is established that, in the interaction of two charged dielectric macroparticles in an equilibrium plasma, the forces acting on each particle turn out to be generally unequal. The forces become equal only in the case of conducting macroparticles or in the case of dielectric macroparticles of the same size and charge. They also turn out to be equal when the surface potentials of the macroparticles remain constant under variation of the interparticle distance. Formulas are proposed that allow one to calculate the interaction force with a high degree of accuracy under the condition that the radii of the macroparticles are much less than the screening length, which is usually satisfied in experiments with dusty plasmas.

  9. 7 CFR 3565.203 - Restrictions on rents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... equal to 30 percent of 115 percent of area median income, adjusted for family size. In addition, on an annual basis, the average rent for a project, taking into account all individual unit rents, must not exceed 30 percent of 100 percent of area median income, adjusted for family size. ...

  10. 7 CFR 3565.203 - Restrictions on rents.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... equal to 30 percent of 115 percent of area median income, adjusted for family size. In addition, on an annual basis, the average rent for a project, taking into account all individual unit rents, must not exceed 30 percent of 100 percent of area median income, adjusted for family size. ...

  11. Coal-Fired Boilers at Navy Bases, Navy Energy Guidance Study, Phase II and III.

    DTIC Science & Technology

    1979-05-01

    several sizes were performed. Central plants containing four equal-sized boilers and central flue gas desulfurization facilities were shown to be less...Conceptual design and parametric cost studies of steam and power generation systems using coal-fired stoker boilers and stack gas scrubbers in

  12. 50 CFR 635.20 - Size limits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... damaged by shark bites may be retained only if the length of the remainder of the fish is equal to or... after consideration of additional scientific information and fish measurement data, and will be made... otherwise adjusted. (e) Sharks. The following size limits change depending on the species being caught and...

  13. Mixed nano/micro-sized calcium phosphate composite and EDTA root surface etching improve availability of graft material in intrabony defects: an in vivo scanning electron microscopy evaluation.

    PubMed

    Gamal, Ahmed Y; Iacono, Vincent J

    2013-12-01

    The use of nanoparticles of graft materials may lead to breakthrough applications for periodontal regeneration. However, due to their small particle size, nanoparticles may be eliminated from periodontal defects by phagocytosis. In an attempt to improve nanoparticle retention in periodontal defects, the present in vivo study uses scanning electron microscopy (SEM) to evaluate the potential of micrograft particles of β-tricalcium phosphate (β-TCP) to enhance the binding and retention of nanoparticles of hydroxyapatite (nHA) on EDTA-treated and non-treated root surfaces in periodontal defects after 14 days of healing. Sixty patients having at least two hopeless periodontally affected teeth designated for extraction were randomly divided into four treatment groups (15 patients per group). Patients in group 1 had selected periodontal intrabony defects grafted with nHA of particle size 10 to 100 nm. Patients in group 2 were treated in a similar manner but had the affected roots etched for 2 minutes with a neutral 24% EDTA gel before grafting of the associated vertical defects with nHA. Patients in group 3 had the selected intrabony defects grafted with a composite graft consisting of equal volumes of nHA and β-TCP (particle size 63 to 150 nm). Patients in group 4 were treated as in group 3 but the affected roots were etched with neutral 24% EDTA as in group 2. For each of the four groups, one tooth was extracted immediately, and the second tooth was extracted after 14 days of healing for SEM evaluation. Fourteen days after surgery, all group 1 samples were devoid of any nanoparticles adherent to the root surfaces. Group 2 showed root surface areas 44.7% covered by a single layer of clot-blended grafted particles 14 days following graft application. After 14 days, group 3 samples appeared to retain fibrin strands devoid of grafted particles. 
Immediately extracted root samples of group 4 had adherent graft particles that covered a considerable area of the root surfaces (88.6%). Grafted particles appeared to cover all samples in a multilayered pattern. After 14 days, the group 4 extracted samples showed multilayered fibrin-covered nano/micro-sized graft particles adherent to the root surfaces (78.5%). The use of a composite graft consisting of nHA and microsized β-TCP after root surface treatment with 24% EDTA may be a suitable method to improve nHA retention in periodontal defects with subsequent graft bioreactivity.

  14. Spectral resolution enhancement of Fourier-transform spectrometer based on orthogonal shear interference using Wollaston prism

    NASA Astrophysics Data System (ADS)

    Cong, Lin-xiao; Huang, Min; Cai, Qi-sheng

    2017-10-01

In this paper, a multi-line interferogram stitching method based on orthogonal shear using a Wollaston prism (WP) is proposed, with a 2D projection interferogram recorded through rotation of the CCD, increasing the spectral resolution of a Fourier-transform spectrometer (FTS) of limited spatial size by at least three times. The fringes on multiple lines were linked through pixels of equal optical path difference (OPD). Ideally, the error of the sampled phase within one pixel was less than half the wavelength, ensuring consecutive values in the over-sampled dimension while aliasing in the other. In the simulation, with calibration at 1.064 μm, spectral lines of equal intensity at 1.31 μm and 1.56 μm were tested and observed. The result showed an amplitude bias of 0.13% at 1.31 μm and 1.15% at 1.56 μm, and the FWHM at 1.31 μm was reduced from 25 nm to 8 nm after the number of sample points increased from 320 to 960. In a comparison of the reflectance spectrum of carnauba wax within the near-infrared (NIR) band, the absorption peak at 1.2 μm was more obvious and the 1.38-1.43 μm band was closer to the reference, although some fluctuation in the short-wavelength region caused spectral crosstalk. In conclusion, with orthogonal shear based on rotation of the CCD relative to the axis of the WP, the spectral resolution of a static FTS was enhanced by projecting the fringes onto grid coordinates and stitching the interferograms into a larger OPD, which shows advantages in cost and miniaturization for space-constrained NIR applications.

  15. Cognitive function and dementia in six areas of England and Wales: the distribution of MMSE and prevalence of GMS organicity level in the MRC CFA Study. The Medical Research Council Cognitive Function and Ageing Study (MRC CFAS).

    PubMed

    1998-03-01

    This two-stage prevalence survey involved geographically delimited areas, four urban (Liverpool, Newcastle, Nottingham and Oxford) and two rural (Cambridgeshire and Gwynedd), including institutions. Stratified random population samples of people in their 65th year and above, from Family Health Service Authorities were studied. The sample was stratified (65-74 years and > or = 75) to provide equal numbers. In Liverpool equal numbers in 5 year age groups were taken. After an initial screening interview, approximately 20% were selected on the basis of age, AGECAT organicity confidence level and MMSE score to proceed to a detailed assessment interview from which the full AGECAT organicity confidence level could be derived. Major influences on MMSE were confirmed as age, sex, social class and educational level. Estimates of prevalence of AGECAT O3 and above for each centre and the entire sample according to age are given, based on 1991 Census population structure, and suggest that around half a million (543,400) people in England and Wales would be defined as case level by this method. The five centres employing the same methodology showed no heterogeneity in prevalence. Prevalence of cognitive impairment and dementia appear not to vary widely across the centres examined in this study, which provides stable estimates by age and sex for AGECAT O3 and above, and norms for MMSE. Using these estimates as an indication of the size of the population affected, around 550,000 individuals in England and Wales would be expected to be suffering from dementia of mild or greater severity.

  16. Ultrafine-Grained Plates of Al-Mg-Si Alloy Obtained by Incremental Equal Channel Angular Pressing: Microstructure and Mechanical Properties

    NASA Astrophysics Data System (ADS)

    Lipinska, Marta; Chrominski, Witold; Olejnik, Lech; Golinski, Jacek; Rosochowski, Andrzej; Lewandowska, Malgorzata

    2017-10-01

In this study, an Al-Mg-Si alloy was processed via incremental equal channel angular pressing (I-ECAP) in order to obtain homogeneous, ultrafine-grained plates with low anisotropy of the mechanical properties. This was the first attempt to process an Al-Mg-Si alloy using this technique. Samples in the form of 3 mm-thick square plates were subjected to I-ECAP with a 90 deg rotation around the axis normal to the surface of the plate between passes. Samples were investigated first in their initial state, then after a single pass of I-ECAP, and finally after four such passes. Analyses of the microstructure and mechanical properties demonstrated that the I-ECAP method can be successfully applied to Al-Mg-Si alloys. The average grain size decreased from 15-19 µm in the initial state to below 1 µm after four I-ECAP passes. The fraction of high-angle grain boundaries in the sample subjected to four I-ECAP passes lay within 53 to 57 pct depending on the examined plane. The mechanism of grain refinement in the Al-Mg-Si alloy was found to be distinctly different from that in pure aluminum, with grain rotation being more prominent than grain subdivision, which was attributed to the lower stacking fault energy and the reduced mobility of dislocations in the alloy. The ultimate tensile strength more than doubled, whereas the yield strength more than tripled. Additionally, the plates processed by I-ECAP exhibited low anisotropy of mechanical properties (in plane and across the thickness) in comparison to other SPD processing methods, which makes them attractive for further processing and applications.

  17. CAPN1, CAST, and DGAT1 genetic effects on preweaning performance, carcass quality traits, and residual variance of tenderness in a beef cattle population selected for haplotype and allele equalization

    USDA-ARS?s Scientific Manuscript database

    Genetic marker effects and type of inheritance are estimated with poor precision when minor marker allele frequencies are low. A stable composite population (MARC III) was subjected to marker assisted selection for multiple years to equalize specific marker frequencies to 1) estimate effect size an...

  18. The Impact of Hospital Size on CMS Hospital Profiling.

    PubMed

    Sosunov, Eugene A; Egorova, Natalia N; Lin, Hung-Mo; McCardle, Ken; Sharma, Vansh; Gelijns, Annetine C; Moskowitz, Alan J

    2016-04-01

The Centers for Medicare & Medicaid Services (CMS) profile hospitals using a set of 30-day risk-standardized mortality and readmission rates as a basis for public reporting. These measures are affected by hospital patient volume, raising concerns about uniformity of standards applied to providers with different volumes. To quantitatively determine whether CMS uniformly profiles hospitals that have equal performance levels but different volumes. Retrospective analysis of patient-level and hospital-level data using hierarchical logistic regression models with hospital random effects. Simulation of samples including a subset of hospitals with different volumes but equal poor performance (hospital effects = +3 SD in the random-effect logistic model). A total of 1,085,568 Medicare fee-for-service patients undergoing 1,494,993 heart failure admissions in 4930 hospitals between July 1, 2005 and June 30, 2008. CMS methodology was used to determine the rank and proportion (by volume) of hospitals reported to perform "Worse than US National Rate." The percentage of hospitals performing "Worse than US National Rate" was ∼40 times higher in the largest (fifth quintile by volume) than in the smallest hospitals (first quintile). A similar gradient was seen in a cohort of 100 hospitals with simulated equal poor performance (0%, 0%, 5%, 20%, and 85% in quintiles 1 to 5), effectively leaving 78% of poor performers undetected. Our results illustrate the disparate impact of the current CMS method of hospital profiling on hospitals with higher volumes, translating into lower thresholds for detection and reporting of poor performance.

  19. High Temperature Deformation of Twin-Roll Cast Al-Mn-Based Alloys after Equal Channel Angular Pressing.

    PubMed

    Málek, Přemysl; Šlapáková Poková, Michaela; Cieslar, Miroslav

    2015-11-12

Twin roll cast Al-Mn- and Al-Mn-Zr-based alloys were subjected to four passes of equal channel angular pressing. The resulting grain size of 400 nm contributes to a significant strengthening at room temperature. This microstructure is not fully stable at elevated temperatures, and recrystallization and vast grain growth occur at temperatures between 350 and 450 °C. The onset of these microstructure changes depends on chemical and phase composition. Better stability is observed in the Al-Mn-Zr-based alloy. High temperature tensile tests reveal that equal channel angular pressing results in a softening of all studied materials at high temperatures. This can be explained by an active role of grain boundaries in the deformation process. The maximum values of ductility and strain rate sensitivity parameter m found in the Al-Mn-Zr-based alloy are below the bottom limit of superplasticity (155%, m = 0.25). However, some features typical of superplastic behavior were observed: the strain rate dependence of the parameter m, the strengthening with increasing grain size, and the fracture by diffuse necking. Grain boundary sliding is believed to contribute partially to the overall strain in specimens where the grain size remained in the microcrystalline range.

  20. Precision sizing of moving large particles using diffraction splitting of Doppler lines

    NASA Astrophysics Data System (ADS)

    Kononenko, Vadim L.

    1999-02-01

It is shown that the Doppler line from a single large particle moving with a constant velocity through a finite-width laser beam undergoes a doublet-type splitting under specific observation conditions. A general requirement is that the particle size 2a is not negligibly small compared with the beam diameter 2w₀. Three optical mechanisms of line splitting are considered. The first is based on nonsymmetric diffraction of a bounded laser beam by a moving particle. The second arises from the transient geometry of diffraction. The third mechanism, of photometric nature, originates from the specific time variation of the total illuminance of moving particles when 2a > Λ, the interference fringe spacing in the measuring volume. The diffraction splitting is observed when a detector is placed near one of the diffraction minima corresponding to either of the probing beams and 2a = (n − 0.5)Λ for n = 1, 2. The photometric splitting is observed with image-forming optics when 2a = nΛ. This opens the possibility of distant particle sizing based on the Doppler line splitting phenomenon. A general theory of line splitting is developed and used to explain the experimental observations quantitatively. The influence of the scattering angles and observation angle on the line splitting characteristics is studied analytically and numerically.

  1. High Temperature Deformation of Twin-Roll Cast Al-Mn-Based Alloys after Equal Channel Angular Pressing

    PubMed Central

    Málek, Přemysl; Šlapáková Poková, Michaela; Cieslar, Miroslav

    2015-01-01

    Twin roll cast Al-Mn- and Al-Mn-Zr-based alloys were subjected to four passes of equal channel angular pressing. The resulting grain size of 400 nm contributes to a significant strengthening at room temperature. This microstructure is not fully stable at elevated temperatures and recrystallization and vast grain growth occur at temperatures between 350 and 450 °C. The onset of these microstructure changes depends on chemical and phase composition. Better stability is observed in the Al-Mn-Zr-based alloy. High temperature tensile tests reveal that equal channel angular pressing results in a softening of all studied materials at high temperatures. This can be explained by an active role of grain boundaries in the deformation process. The maximum values of ductility and strain rate sensitivity parameter m found in the Al-Mn-Zr-based alloy are below the bottom limit of superplasticity (155%, m = 0.25). However, some features typical for superplastic behavior were observed—the strain rate dependence of the parameter m, the strengthening with increasing grain size, and the fracture by diffuse necking. Grain boundary sliding is believed to contribute partially to the overall strain in specimens where the grain size remained in the microcrystalline range. PMID:28793667

  2. The Effectiveness of Dance Interventions on Physical Health Outcomes Compared to Other Forms of Physical Activity: A Systematic Review and Meta-Analysis.

    PubMed

    Fong Yan, Alycia; Cobley, Stephen; Chan, Cliffton; Pappas, Evangelos; Nicholson, Leslie L; Ward, Rachel E; Murdoch, Roslyn E; Gu, Yu; Trevor, Bronwyn L; Vassallo, Amy Jo; Wewege, Michael A; Hiller, Claire E

    2018-04-01

Physical inactivity is one of the key global health challenges, as it is associated with adverse effects related to ageing, weight control, physical function, longevity, and quality of life. Dancing is a form of physical activity associated with health benefits across the lifespan, even at amateur levels of participation. However, it is unclear whether dance interventions are as effective as other forms of physical activity. The aim was to systematically review the literature on the effectiveness of structured dance interventions, in comparison to structured exercise programmes, on physical health outcome measures. Seven databases were searched from earliest records to 4 August 2017. Studies investigating dance interventions lasting > 4 weeks that included physical health outcomes and had a structured exercise comparison group were included in the study. Screening and data extraction were performed by two reviewers, with all disagreements resolved by the primary author. Where appropriate, meta-analysis was performed or an effect size estimate generated. Of 11,434 studies identified, 28 (total sample size 1276 participants) met the inclusion criteria. A variety of dance genres and structured exercise interventions were compared. Meta-analyses showed dance interventions significantly improved body composition, blood biomarkers, and musculoskeletal function. The effect of either intervention on cardiovascular function and self-perceived mobility was equivalent. Undertaking structured dance of any genre is as effective as, and occasionally more effective than, other types of structured exercise for improving a range of health outcome measures. Health practitioners can recommend structured dance as a safe and effective exercise alternative.

  3. Probe measurements and numerical model predictions of evolving size distributions in premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Filippo, A.; Sgro, L.A.; Lanzuolo, G.

    2009-09-15

Particle size distributions (PSDs), measured with a dilution probe and a Differential Mobility Analyzer (DMA), and numerical predictions of these PSDs, based on a model that includes only coagulation or alternatively inception and coagulation, are compared to investigate particle growth processes and possible sampling artifacts in the post-flame region of a C/O = 0.65 premixed laminar ethylene-air flame. Inputs to the numerical model are the PSD measured early in the flame (the initial condition for the aerosol population) and the temperature profile measured along the flame's axial centerline. The measured PSDs are initially unimodal, with a modal mobility diameter of 2.2 nm, and become bimodal later in the post-flame region. The smaller mode is best predicted with a size-dependent coagulation model, which allows some fraction of the smallest particles to escape collisions without resulting in coalescence or coagulation through the size-dependent coagulation efficiency (γ_SD). Instead, when γ = 1 and the coagulation rate is equal to the collision rate for all particles regardless of their size, the coagulation model significantly under-predicts the number concentration of both modes and over-predicts the size of the largest particles in the distribution compared to the measured size distributions at various heights above the burner. The coagulation (γ_SD) model alone is unable to reproduce well the larger particle mode (mode II). Combining persistent nucleation with size-dependent coagulation brings the predicted PSDs to within experimental error of the measurements, which seems to suggest that surface growth processes are relatively insignificant in these flames. Shifting measured PSDs a few mm closer to the burner surface, generally adopted to correct for probe perturbations, does not produce a better matching between the experimental and the numerical results.

  4. Gender Differences in Sustained Attentional Control Relate to Gender Inequality across Countries

    PubMed Central

    Riley, Elizabeth; Okabe, Hidefusa; Germine, Laura; Wilmer, Jeremy; Esterman, Michael; DeGutis, Joseph

    2016-01-01

    Sustained attentional control is critical for everyday tasks and success in school and employment. Understanding gender differences in sustained attentional control, and their potential sources, is an important goal of psychology and neuroscience and of great relevance to society. We used a large web-based sample (n = 21,484, from testmybrain.org) to examine gender differences in sustained attentional control. Our sample included participants from 41 countries, allowing us to examine how gender differences in each country relate to national indices of gender equality. We found significant gender differences in certain aspects of sustained attentional control. Using indices of gender equality, we found that overall sustained attentional control performance was lower in countries with less equality and that there were greater gender differences in performance in countries with less equality. These findings suggest that creating sociocultural conditions which value women and men equally can improve a component of sustained attention and reduce gender disparities in cognition. PMID:27802294

  5. Gender Differences in Sustained Attentional Control Relate to Gender Inequality across Countries.

    PubMed

    Riley, Elizabeth; Okabe, Hidefusa; Germine, Laura; Wilmer, Jeremy; Esterman, Michael; DeGutis, Joseph

    2016-01-01

    Sustained attentional control is critical for everyday tasks and success in school and employment. Understanding gender differences in sustained attentional control, and their potential sources, is an important goal of psychology and neuroscience and of great relevance to society. We used a large web-based sample (n = 21,484, from testmybrain.org) to examine gender differences in sustained attentional control. Our sample included participants from 41 countries, allowing us to examine how gender differences in each country relate to national indices of gender equality. We found significant gender differences in certain aspects of sustained attentional control. Using indices of gender equality, we found that overall sustained attentional control performance was lower in countries with less equality and that there were greater gender differences in performance in countries with less equality. These findings suggest that creating sociocultural conditions which value women and men equally can improve a component of sustained attention and reduce gender disparities in cognition.

  6. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background: Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods: We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results: The risk of overestimation of intervention effects was usually high when the number of patients and events was small, and this risk decreased exponentially as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. 
Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions: Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
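    The core of the simulation described in this record can be sketched in a few lines: under a true null (RRR = 0%), small two-arm trials frequently show large spurious risk reductions, and the probability of such overestimates shrinks as sample size grows. This is an illustrative sketch only; the function names and parameter values below are assumptions, not the authors' code or settings.

```python
import random

def observed_rrr(n_per_arm, control_risk, rng):
    """Observed relative risk reduction in a two-arm trial when the true RRR is 0%."""
    events_c = sum(rng.random() < control_risk for _ in range(n_per_arm))
    events_t = sum(rng.random() < control_risk for _ in range(n_per_arm))
    if events_c == 0:
        return 0.0
    # Both risks share the denominator n_per_arm, so RR = events_t / events_c.
    return 1.0 - events_t / events_c

def prob_overestimate(n_per_arm, control_risk, threshold, reps=2000, seed=1):
    """Probability that a null trial shows an observed RRR above `threshold`."""
    rng = random.Random(seed)
    hits = sum(observed_rrr(n_per_arm, control_risk, rng) > threshold
               for _ in range(reps))
    return hits / reps
```

    With illustrative settings (e.g. control risk 0.2, threshold RRR>20%), small trials overestimate far more often than large ones, mirroring the exponential decay in overestimation risk that the authors report.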

  7. Equality of Opportunity and Equality of Outcome

    ERIC Educational Resources Information Center

    Kodelja, Zdenko

    2016-01-01

    The report on the findings of extensive empirical research on equality of educational opportunities carried out in the United States on a very large sample of public schools by Coleman and his colleagues has had a major impact on education policy and has given rise to a large amount of research and various interpretations. However, as some…

  8. Animal Board Invited Review: Comparing conventional and organic livestock production systems on different aspects of sustainability.

    PubMed

    van Wagenberg, C P A; de Haas, Y; Hogeveen, H; van Krimpen, M M; Meuwissen, M P M; van Middelaar, C E; Rodenburg, T B

    2017-10-01

    To sustainably contribute to food security of a growing and richer world population, livestock production systems are challenged to increase production levels while reducing environmental impact, being economically viable, and socially responsible. Knowledge about the sustainability performance of current livestock production systems may help to formulate strategies for future systems. Our study provides a systematic overview of differences between conventional and organic livestock production systems on a broad range of sustainability aspects and animal species available in peer-reviewed literature. Systems were compared on economy, productivity, environmental impact, animal welfare and public health. The review was limited to dairy cattle, beef cattle, pigs, broilers and laying hens, and to Europe, North America and New Zealand. Results per indicator are presented as reported in the articles, without additional calculations. Out of 4171 initial search hits, 179 articles were analysed. Studies varied widely in indicators, research design, sample size, location and context. Quite a few studies used small samples. No study analysed all aspects of sustainability simultaneously. Conventional systems had lower labour requirements per unit product, lower income risk per animal, higher production per animal per time unit, higher reproduction numbers, lower feed conversion ratio, lower land use, generally lower acidification and eutrophication potential per unit product, equal or better udder health for cows and equal or lower microbiological contamination. Organic systems had higher income per animal or full time employee, lower impact on biodiversity, lower eutrophication and acidification potential per unit land, equal or lower likelihood of antibiotic resistance in bacteria and higher beneficial fatty acid levels in cow milk. 
For most sustainability aspects, conventional systems performed better in some cases and organic systems in others; the exception was productivity, which was consistently higher in conventional systems. For many aspects and animal species, more data are needed before a difference between organic and conventional livestock production systems can be established.

  9. Comprehension and Use of Nutrition Facts Tables among Adolescents and Young Adults in Canada.

    PubMed

    Hobin, Erin; Shen-Tu, Grace; Sacco, Jocelyn; White, Christine; Bowman, Carolyn; Sheeshka, Judy; Mcvey, Gail; O'Brien, Mary Fodor; Vanderlee, Lana; Hammond, David

    2016-06-01

    Limited evidence exists on the comprehension and use of Nutrition Facts tables (NFt) among adolescents and young adults. This study provides an account of how young people engage with, understand, and apply nutrition information on the current and modified versions of the NFt to compare and choose foods. Participants aged 16-24 years (n = 26) were asked to "think aloud" while viewing either the current or 1 of 5 modified NFts and completing a behavioural task. The task included a questionnaire with 9 functional items requiring participants to define, compare, interpret, and manipulate serving size and percentage daily value (%DV) information on NFts. Semi-structured interviews were conducted to further probe thought processes and difficulties experienced in completing the task. Equal serving sizes on NFts improved ability to accurately compare nutrition information between products. Most participants could define %DV and believed it can be used to compare foods, yet some confusion persisted when interpreting %DVs and manipulating serving-size information on NFts. Where serving sizes were unequal, mathematical errors were often responsible for incorrect responses. Results reinforce the need for equal serving sizes on NFts of similar products and highlight young Canadians' confusion when using nutrition information on NFts.
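    The mathematical errors described above arise when nutrient amounts are compared across unequal serving sizes; rescaling both products to a common serving removes the problem. The sketch below is a generic illustration with hypothetical label values, not part of the study materials.

```python
def per_serving(amount, serving_size, target_serving):
    """Rescale a nutrient amount to a common serving size so products compare fairly."""
    return amount * target_serving / serving_size

# Hypothetical labels: 8 g sugar per 30 g serving vs 10 g sugar per 50 g serving.
# Compared per 100 g, the first product actually contains more sugar.
sugar_a = per_serving(8, 30, 100)   # about 26.7 g per 100 g
sugar_b = per_serving(10, 50, 100)  # 20.0 g per 100 g
```

    Presenting both values per 100 g (or per equal serving, as the study recommends) turns the comparison into a direct one, with no mental arithmetic across serving sizes.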

  10. Impact of geometrical properties on permeability and fluid phase distribution in porous media

    NASA Astrophysics Data System (ADS)

    Lehmann, P.; Berchtold, M.; Ahrenholz, B.; Tölke, J.; Kaestner, A.; Krafczyk, M.; Flühler, H.; Künsch, H. R.

    2008-09-01

    To predict fluid phase distribution in porous media, the effect of geometric properties on flow processes must be understood. In this study, we analyze the effect of volume, surface, curvature and connectivity (the four Minkowski functionals) on the hydraulic conductivity and the water retention curve. For that purpose, we generated 12 artificial structures with 800³ voxels (the units of a 3D image) and compared them with a scanned sand sample of the same size. The structures were generated with a Boolean model based on a random distribution of overlapping ellipsoids whose size and shape were chosen to fulfill the criteria of the measured functionals. The pore structure of sand material was mapped with X-rays from synchrotrons. To analyze the effect of geometry on water flow and fluid distribution we carried out three types of analysis: Firstly, we computed geometrical properties like chord length, distance from the solids, pore size distribution and the Minkowski functionals as a function of pore size. Secondly, the fluid phase distribution as a function of the applied pressure was calculated with a morphological pore network model. Thirdly, the permeability was determined using a state-of-the-art lattice-Boltzmann method. For the simulated structure with the true Minkowski functionals the pores were larger and the computed air-entry value of the artificial medium was reduced to 85% of the value obtained from the scanned sample. The computed permeability for the geometry with the four fitted Minkowski functionals was equal to the permeability of the scanned image. The permeability was much more sensitive to the volume and surface than to curvature and connectivity of the medium. We conclude that the Minkowski functionals are not sufficient to characterize the geometrical properties of a porous structure that are relevant for the distribution of two fluid phases. 
Depending on the procedure to generate artificial structures with predefined Minkowski functionals, structures differing in pore size distribution can be obtained.

  11. Multigrid contact detection method

    NASA Astrophysics Data System (ADS)

    He, Kejing; Dong, Shoubin; Zhou, Zhaoyao

    2007-03-01

    Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems. Both the time complexity and memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with size ratio equal to 10. The packing density for monosize particles is 0.636. For binary packing with size ratio equal to 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
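    The MGCD itself is not reproduced in this record, but the cell-binning idea it builds on can be sketched: hash particle centres into grid cells and run the exact overlap test only on particles in the same or adjacent cells. This single-grid sketch (all names are assumptions for illustration) is the scheme whose performance degrades for diverse object sizes, which is the weakness the multigrid approach addresses.

```python
from collections import defaultdict
from itertools import combinations

def grid_contacts(particles, cell_size):
    """Broad-phase contact detection by binning particle centres into grid cells.

    particles: list of (x, y, radius) discs. Returns the set of index pairs
    (i, j), i < j, whose discs overlap. cell_size should be at least the
    largest particle diameter so every contacting pair shares a cell or
    lies in neighbouring cells.
    """
    grid = defaultdict(list)
    for i, (x, y, r) in enumerate(particles):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)

    contacts = set()
    for (cx, cy), _ in grid.items():
        # Gather candidates from this cell and its 8 neighbours.
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid.get((cx + dx, cy + dy), []))
        for i, j in combinations(sorted(set(nearby)), 2):
            xi, yi, ri = particles[i]
            xj, yj, rj = particles[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= (ri + rj) ** 2:
                contacts.add((i, j))
    return contacts
```

    A single cell size works well when all particles are similar; with a size ratio of 10, as in the binary packing above, the cell must fit the largest particle and each cell then holds many small ones, which is exactly where a hierarchy of grids pays off.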

  12. 75 FR 80117 - Methods for Measurement of Filterable PM10

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ...This action promulgates amendments to Methods 201A and 202. The final amendments to Method 201A add a particle-sizing device to allow for sampling of particulate matter with mean aerodynamic diameters less than or equal to 2.5 micrometers (PM2.5 or fine particulate matter). The final amendments to Method 202 revise the sample collection and recovery procedures of the method to reduce the formation of reaction artifacts that could lead to inaccurate measurements of condensable particulate matter. Additionally, the final amendments to Method 202 eliminate most of the hardware and analytical options in the existing method, thereby increasing the precision of the method and improving the consistency in the measurements obtained between source tests performed under different regulatory authorities. This action also announces that EPA is taking no action to affect the already established January 1, 2011 sunset date for the New Source Review (NSR) transition period, during which EPA is not requiring that State NSR programs address condensable particulate matter emissions.

  13. Triple galaxies and a hidden mass problem

    NASA Technical Reports Server (NTRS)

    Karachentsev, I. D.; Karachentseva, V. E.; Lebedev, V. S.

    1990-01-01

    The authors consider a homogeneous sample of 84 triple systems of galaxies with components brighter than m = 15.7, located in the northern sky and satisfying an isolation criterion with respect to neighboring galaxies in projection. The distributions of basic dynamical parameters for triplets have median values as follows: radial velocity dispersion 133 km/s, mean harmonic radius 63 kpc, absolute magnitude of galaxies M_B = -20.38, crossing time τ = 0.04 H⁻¹. Across different methods of estimation, the median mass-to-luminosity ratio is (20 - 30). A comparison of the last value with the ones for single and binary galaxies shows the presence of a virial mass excess for triplets by a factor of 4. The mass-to-luminosity ratio is practically uncorrelated with the linear size of triplets or with the morphological types of their components. We note that a significant part of the virial excess may be explained by the presence of nonisolated triple configurations in the sample, which are produced by debris of more populous groups of galaxies.

  14. Dithiothreitol-based protein equalization technology to unravel biomarkers for bladder cancer.

    PubMed

    Araújo, J E; López-Fernández, H; Diniz, M S; Baltazar, Pedro M; Pinheiro, Luís Campos; da Silva, Fernando Calais; Carrascal, Mylène; Videira, Paula; Santos, H M; Capelo, J L

    2018-04-01

    This study aimed to assess the benefits of dithiothreitol (DTT)-based sample treatment for protein equalization to assess potential biomarkers for bladder cancer. The proteome of plasma samples of patients with bladder carcinoma, patients with lower urinary tract symptoms (LUTS) and healthy volunteers, was equalized with dithiothreitol (DTT) and compared. The equalized proteomes were interrogated using two-dimensional gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry. Six proteins, namely serum albumin, gelsolin, fibrinogen gamma chain, Ig alpha-1 chain C region, Ig alpha-2 chain C region and haptoglobin, were found dysregulated in at least 70% of bladder cancer patients when compared with a pool of healthy individuals. One protein, serum albumin, was found overexpressed in 70% of the patients when the equalized proteome of the healthy pool was compared with the equalized proteome of the LUTS patients. The pathways modified by the proteins differentially expressed were analyzed using Cytoscape. The method here presented is fast, cheap, of easy application and it matches the analytical minimalism rules as outlined by Halls. Orthogonal validation was done using western-blot. Overall, DTT-based protein equalization is a promising methodology in bladder cancer research. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. To Investigate the Absorption, Dynamic Contact Angle and Printability Effects of Synthetic Zeolite Pigments in an Inkjet Receptive Coating

    NASA Astrophysics Data System (ADS)

    Jalindre, Swaraj Sunil

    Ink absorption performance in inkjet receptive coatings containing synthetic zeolite pigments was studied. Coating pigment pore and particle size distribution are the key parameters that modify media surface properties, thus affecting the rate of ink penetration and drying time (Scholkopf et al., 2004). The primary objectives of this study were: (1) to investigate the synthetic zeolite pigment effects on inkjet ink absorption, dynamic contact angle and printability, and (2) to evaluate these novel synthetic zeolite pigments as replacements for the fumed silica pigments in conventional inkjet receptive coatings. In this research study, single pigment coating formulations (in equal P:B ratio) were prepared using microporous synthetic zeolite pigments (5A, Organophilic and 13X) and polyvinyl alcohol (PVOH) binder. The laboratory-coated samples were characterized for absorption, air permeance, roughness, drying time, wettability and print fidelity. Based on the rheological data, it was found that the synthetic zeolite formulated coatings exhibited Newtonian flow behavior at low shear, while the industry-accepted fumed silica based coatings displayed a characteristically high pseudoplastic flow behavior. Our coated samples generated using microporous synthetic zeolite pigments produced low absorption, reduced wettability and accelerated ink drying characteristics. These characteristics were caused by the synthetic zeolite pigments, which resulted in coated samples with a relatively closed surface structure. The research suggested that no single selected synthetic zeolite coating performed better than the conventional fumed silica based coatings. Experimental data also showed that there was no apparent relationship between synthetic zeolite pigment pore sizes and inkjet ink absorption. For future research, the above coated samples should be evaluated for pore size distribution using a mercury porosimeter, which quantifies the surface porosity of coated samples. 
The approach presented here can easily be used to investigate other microporous coating pigments when formulating inkjet receptive coatings. The research findings will help coating formulators, engineers and materials science students understand the absorption characteristics of the selected synthetic zeolite pigments, and encourage them to identify other alternative pigments for conventional inkjet receptive coatings.

  16. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

    Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined ΔELOD (≡ the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p̂ as that value of p that maximizes ΔELOD. We determined that, surprisingly, p̂ does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. 
(3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) into the maximum likelihood estimates of θ_female and θ_male, even though ELOD is reduced (see point 2). This fact is important because investigators often cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel

  17. Occupancy in continuous habitat

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2012-01-01

    The probability that a site has at least one individual of a species ('occupancy') has come to be widely used as a state variable for animal population monitoring. The available statistical theory for estimation when detection is imperfect applies particularly to habitat patches or islands, although it is also used for arbitrary plots in continuous habitat. The probability that such a plot is occupied depends on plot size and home-range characteristics (size, shape and dispersion) as well as population density. Plot size is critical to the definition of occupancy as a state variable, but clear advice on plot size is missing from the literature on the design of occupancy studies. We describe models for the effects of varying plot size and home-range size on expected occupancy. Temporal, spatial, and species variation in average home-range size is to be expected, but information on home ranges is difficult to retrieve from species presence/absence data collected in occupancy studies. The effect of variable home-range size is negligible when plots are very large (>100 x area of home range), but large plots pose practical problems. At the other extreme, sampling of 'point' plots with cameras or other passive detectors allows the true 'proportion of area occupied' to be estimated. However, this measure equally reflects home-range size and density, and is of doubtful value for population monitoring or cross-species comparisons. Plot size is ill-defined and variable in occupancy studies that detect animals at unknown distances, the commonest example being unlimited-radius point counts of song birds. We also find that plot size is ill-defined in recent treatments of "multi-scale" occupancy; the respective scales are better interpreted as temporal (instantaneous and asymptotic) rather than spatial. Occupancy is an inadequate metric for population monitoring when it is confounded with home-range size or detection distance.

  18. Comparing two books and establishing probably efficacious treatment for low sexual desire.

    PubMed

    Balzer, Alexandra M; Mintz, Laurie B

    2015-04-01

    Using a sample of 45 women, this study compared the effectiveness of a previously studied (Mintz, Balzer, Zhao, & Bush, 2012) bibliotherapy intervention (Mintz, 2009), a similar self-help book (Hall, 2004), and a wait-list control (WLC) group. To examine intervention effectiveness, between- and within-group standardized effect sizes (interpreted with Cohen's (1988) benchmarks: .20 = small, .50 = medium, .80+ = large) and their confidence limits are used. In comparison to the WLC group, both interventions yielded large between-group posttest effect sizes on a measure of sexual desire. Additionally, large between-group posttest effect sizes were found for sexual satisfaction and lubrication among those reading the Mintz book. When examining within-group pretest to posttest effect sizes, medium to large effects were found for desire, lubrication, and orgasm for both books and for satisfaction and arousal for those reading the Mintz book. When directly comparing the books, all between-group posttest effect sizes were likely obtained by chance. It is concluded that both books are equally effective in terms of the outcome of desire, but whether or not there is differential efficacy in terms of other domains of sexual functioning is equivocal. Tentative evidence is provided for the longer term effectiveness of both books in enhancing desire. Arguing for applying criteria for empirically supported treatments to self-help, results are purported to establish the Mintz book as probably efficacious and to comprise a first step in this designation for the Hall book. (c) 2015 APA, all rights reserved.
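    The between-group standardized effect sizes above are of the Cohen's d family: a mean difference divided by a pooled standard deviation, interpreted against Cohen's (1988) benchmarks. A minimal sketch using the common textbook pooled-SD form (the data in the example are invented, not from the study):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Between-group standardized effect size: mean difference over pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

def label(d, small=0.2, medium=0.5, large=0.8):
    """Interpret |d| against Cohen's (1988) benchmarks, as in the study above."""
    d = abs(d)
    if d >= large:
        return "large"
    if d >= medium:
        return "medium"
    if d >= small:
        return "small"
    return "negligible"
```

    For instance, `cohens_d([5, 6, 7, 8, 9], [1, 2, 3, 4, 5])` gives d ≈ 2.53, a "large" effect under these benchmarks.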

  19. Measuring β-diversity with species abundance data.

    PubMed

    Barwell, Louise J; Isaac, Nick J B; Kunin, William E

    2015-07-01

    In 2003, 24 presence-absence β-diversity metrics were reviewed and a number of trade-offs and redundancies identified. We present a parallel investigation into the performance of abundance-based metrics of β-diversity. β-diversity is a multi-faceted concept, central to spatial ecology. There are multiple metrics available to quantify it: the choice of metric is an important decision. We test 16 conceptual properties and two sampling properties of a β-diversity metric: metrics should be 1) independent of α-diversity and 2) cumulative along a gradient of species turnover. Similarity should be 3) probabilistic when assemblages are independently and identically distributed. Metrics should have 4) a minimum of zero and increase monotonically with the degree of 5) species turnover, 6) decoupling of species ranks and 7) evenness differences. However, complete species turnover should always generate greater values of β than extreme 8) rank shifts or 9) evenness differences. Metrics should 10) have a fixed upper limit, 11) symmetry (βA,B = βB,A), 12) double-zero asymmetry for double absences and double presences and 13) not decrease in a series of nested assemblages. Additionally, metrics should be independent of 14) species replication, 15) the units of abundance and 16) differences in total abundance between sampling units. When samples are used to infer β-diversity, metrics should be 1) independent of sample sizes and 2) independent of unequal sample sizes. We test 29 metrics for these properties and five 'personality' properties. Thirteen metrics were outperformed or equalled across all conceptual and sampling properties. Differences in sensitivity to species' abundance lead to a performance trade-off between sample size bias and the ability to detect turnover among rare species. In general, abundance-based metrics are substantially less biased in the face of undersampling, although the presence-absence metric, βsim, performed well overall. 
Only βBaselga R turn, βBaselga B-C turn and βsim measured purely species turnover and were independent of nestedness. Among the other metrics, sensitivity to nestedness varied >4-fold. Our results indicate large amounts of redundancy among existing β-diversity metrics, whilst the estimation of unseen shared and unshared species is lacking and should be addressed in the design of new abundance-based metrics. © 2015 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
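    Several of the listed properties (minimum of zero, symmetry, fixed upper limit under complete turnover) are easy to check mechanically for any candidate metric. As an illustration, the sketch below does so for Bray-Curtis dissimilarity, a common abundance-based metric chosen here for familiarity; whether it was among the 29 metrics tested is not stated in this record.

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (same species order)."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 - 2.0 * shared / total

# Property checks in the spirit of the review:
x = [10, 5, 0]
y = [0, 5, 10]
assert bray_curtis(x, x) == 0.0                  # minimum of zero
assert bray_curtis(x, y) == bray_curtis(y, x)    # symmetry (property 11)
assert bray_curtis([10, 0], [0, 10]) == 1.0      # complete turnover hits the upper limit
```

    Properties such as independence of sample size or of total abundance require simulated sampling rather than single assertions, which is essentially what the authors' comparison of 29 metrics does at scale.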

  20. 50 CFR 665.812 - Sea turtle take mitigation measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....812 Section 665.812 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... of hook sizes and styles used by the vessel. (B) Extended reach handle. The hook removal device must... hook sizes and styles used by the vessel. (B) Handle. The handle must have a length equal to or greater...

  1. Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.

    2005-01-01

    By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
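    The homogeneous Poisson baseline that the observed power laws are contrasted with is easy to state: for an unclustered process with rate λ, the probability of finding at least one drop in an interval of size l is 1 - exp(-λl) ≈ λl for small l, i.e. it scales as l^D with D = 1. Exponents D(r) < 1 for large drops therefore signal clustering beyond Poisson. A minimal sketch (the rate value is illustrative only):

```python
import math

def p_at_least_one(rate, scale):
    """Poisson baseline: P(at least one drop in an interval of size `scale`)."""
    return 1.0 - math.exp(-rate * scale)

# For small scales the probability is ~ rate * scale, so halving the scale
# halves the probability: the D = 1 scaling of an unclustered drop field.
```

    Clustered fields break this proportionality: shrinking the scale reduces the chance of finding a drop more slowly than linearly, giving the observed exponents below 1.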

  2. Staged fluidized bed

    DOEpatents

    Mallon, Richard G.

    1984-01-01

    Method and apparatus for narrowing the distribution of residence times of any size particle and equalizing the residence times of large and small particles in fluidized beds. Particles are moved up one fluidized column and down a second fluidized column with the relative heights selected to equalize residence times of large and small particles. Additional pairs of columns are staged to narrow the distribution of residence times and provide complete processing of the material.

  3. Highly luminescent material based on Alq3:Ag nanoparticles.

    PubMed

    Salah, Numan; Habib, Sami S; Khan, Zishan H

    2013-09-01

    Tris (8-hydroxyquinoline) aluminum (Alq3) is an organic semiconductor molecule, widely used as an electron transport layer, a light emitting layer in organic light-emitting diodes and a host for fluorescent and phosphorescent dyes. In this work, thin films of pure and silver (Ag), copper (Cu) and terbium (Tb) doped Alq3 nanoparticles were synthesized using the physical vapor condensation method. They were fabricated on glass substrates, characterized by X-ray diffraction, scanning electron microscope (SEM), energy dispersive spectroscopy, atomic force microscope (AFM) and UV-visible absorption spectra, and studied for their photoluminescence (PL) properties. SEM and AFM results show spherical nanoparticles with sizes around 70-80 nm. These nanoparticles have almost equal sizes and a homogeneous size distribution. The maximum absorption of Alq3 nanoparticles is observed at 300 nm, while the surface plasmon resonance band of the Ag doped sample appears at 450 nm. The PL emission spectra of Tb, Cu and Ag doped Alq3 nanoparticles show a single broad band at around 515 nm, which is similar to that of the pure one, but with enhanced PL intensity. The sample doped with Ag at a concentration ratio of Alq3:Ag = 1:0.8 is found to have the highest PL intensity, around 2 times stronger than that of the pure one. This enhancement could be attributed to the surface plasmon resonance of Ag ions, which might have increased the absorption and thus the quantum yield. These remarkable results suggest that Alq3 nanoparticles incorporated with Ag ions might be quite useful for future nano-optoelectronic devices.

  4. Thermal Microstructural Stability of AZ31 Magnesium after Severe Plastic Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, John P.; Askari, Hesam A.; Hovanski, Yuri

    2015-03-01

    Both equal channel angular pressing and friction stir processing have the ability to refine the grain size of twin roll cast AZ31 magnesium and potentially improve its superplastic properties. This work used isochronal and isothermal heat treatments to investigate the microstructural stability of twin roll cast, equal channel angular pressed and friction stir processed AZ31 magnesium. For both heat treatment conditions, it was found that the twin roll cast and equal channel angular pressed materials were more stable than the friction stir processed material. Calculations of the grain growth kinetics showed that severe plastic deformation processing decreased the activation energy for grain boundary motion, with the equal channel angular pressed material having the greatest Q value of the severely plastically deformed materials, and that increasing the tool travel speed of the friction stir processed material improved microstructural stability. The Hollomon-Jaffe parameter was found to be an accurate means of identifying the annealing conditions that will result in substantial grain growth and loss of potential superplastic properties in the severely plastically deformed materials. In addition, Humphreys's model of cellular microstructural stability accurately predicted the relative microstructural stability of the severely plastically deformed materials and, with some modification, closely predicted the maximum grain size ratio achieved by the severely plastically deformed materials.
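    The Hollomon-Jaffe parameter mentioned above collapses annealing temperature and time into a single value, HP = T(C + log10 t), so that heat treatments with equal HP are expected to produce comparable thermal exposure. The constant C is material-dependent; the value used below is a conventional illustrative choice, not one taken from this study.

```python
import math

def hollomon_jaffe(temp_k, time_h, c=20.0):
    """Hollomon-Jaffe parameter HP = T * (C + log10(t)); T in kelvin, t in hours.

    C is a material-dependent constant; 20 is used here only for illustration.
    """
    return temp_k * (c + math.log10(time_h))
```

    Under this parameterisation, a hotter, shorter anneal and a cooler, longer anneal with equal HP are expected to cause similar grain growth, which is what makes the parameter useful for mapping out conditions that destroy the refined microstructure.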

  5. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn x n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 x n.
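
    The record's constant-time result relies on the reconfigurable bus architecture itself. As a sequential point of reference only (not the paper's parallel method), the hull being computed can be obtained with a standard O(n log n) monotone-chain algorithm:

```python
def convex_hull(points):
    """Andrew's monotone chain: O(n log n) sequential convex hull of
    2-D points, returned in counter-clockwise order without repeats."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# [(0, 0), (1, 0), (1, 1), (0, 1)]
```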

  6. Field comparison of three inhalable samplers (IOM, PGP-GSP 3.5 and Button) for welding fumes.

    PubMed

    Zugasti, Agurtzane; Montes, Natividad; Rojo, José M; Quintana, M José

    2012-02-01

    Inhalable sampler efficiency depends on the aerodynamic size of the airborne particles to be sampled and on the wind speed. The aim of this study was to compare the behaviour of three personal inhalable samplers for welding fumes generated by Manual Metal Arc (MMA) and Metal Active Gas (MAG) processes. The selected samplers were the ones available in Spain when the study began: IOM, PGP-GSP 3.5 (GSP) and Button. Sampling was carried out in a welding training center that provided a homogeneous workplace environment. The static sampling assembly used allowed the placement of 12 samplers and 2 cascade impactors simultaneously. In total, 183 samples were collected throughout 2009 and 2010. Welding fume mass concentrations ranged from 2 mg m(-3) to 5 mg m(-3). The pooled variation coefficients for the three inhalable samplers were less than or equal to 3.0%. Welding particle size distribution was characterized by a bimodal log-normal distribution, with MMADs of 0.7 μm and 8.2 μm. For these welding aerosols, the Button and GSP samplers showed a similar performance (P = 0.598); the mean mass concentration ratio was 1.00 ± 0.01. The IOM sampler showed a different performance (P < 0.001); the mean mass concentration ratios were 0.90 ± 0.01 for Button/IOM and 0.92 ± 0.02 for GSP/IOM. This information makes it possible to consider measurements obtained with the IOM, GSP or Button samplers together when assessing workplace exposure over time or studying exposure levels in a specific industrial activity, such as welding operations.

  7. Time-dependence of ¹³⁷Cs activity concentration in wild game meat in Knyszyn Primeval Forest (Poland).

    PubMed

    Kapała, Jacek; Mnich, Krystian; Mnich, Stanisław; Karpińska, Maria; Bielawska, Agnieszka

    2015-03-01

    Wild game meat samples were analysed from the region of the Podlasie province (Knyszyn Primeval Forest). (137)Cs content in meat was determined by gamma spectrometry in 2003 (33 samples), 2009 (22 samples) and 2012 (26 samples). The samples were collected in the autumn of 2003, 2009 and 2012 and were compared with data from 1996. Mean concentrations of (137)Cs in the respective years were 42.2 Bq kg(-1), 33.7 Bq kg(-1) and 30.5 Bq kg(-1). On the basis of mean values of (137)Cs in the meat samples of red deer (Cervus elaphus), roe deer (Capreolus capreolus) and wild boar (Sus scrofa) between 1996 and 2012, the effective half-life of (137)Cs was determined for each species. For red deer it equaled 8.9 years and for roe deer 11.6 years, while for wild boar it exceeded the physical half-life and equaled 38.5 years. The mean concentration ratio (CR) obtained for all three species, based on 102 measurements in animal muscles, equaled 1.7 ± 1.5. Copyright © 2014 Elsevier Ltd. All rights reserved.
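
    The species-specific effective half-lives reported above follow from simple exponential-decay arithmetic: given mean activities A0 and A1 separated by t years, T_eff = t·ln 2 / ln(A0/A1). A minimal sketch with hypothetical activities (the abstract does not report the underlying 1996 values):

```python
import math

def effective_half_life(a_start, a_end, years):
    """Effective half-life (years) implied by two mean activity
    concentrations separated by `years`, assuming exponential decline."""
    return years * math.log(2) / math.log(a_start / a_end)

# Hypothetical numbers: activity falling from 100 to 25 Bq/kg over 16 years
# is two halvings, so the effective half-life is 8 years.
print(effective_half_life(100.0, 25.0, 16.0))  # 8.0
```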

  8. Physical and chemical characteristics including total and geochemical forms of phosphorus in sediment from the top 30 centimeters of cores collected in October 2006 at 26 sites in Upper Klamath Lake, Oregon

    USGS Publications Warehouse

    Simon, Nancy S.; Ingle, Sarah N.

    2011-01-01

    This study of phosphorus (P) cycling in eutrophic Upper Klamath Lake (UKL), Oregon, was conducted by the U.S. Geological Survey in cooperation with the U.S. Bureau of Reclamation. Lakebed sediments from the upper 30 centimeters (cm) of cores collected from 26 sites were characterized. Cores were sampled at 0.5, 1.5, 2.5, 3.5, 4.5, 10, 15, 20, 25, and 30 cm. Prior to freezing, water content and sediment pH were determined. After being freeze-dried, all samples were separated into greater than 63-micron (μm) particle-size (coarse) and less than 63-μm particle-size (fine) fractions. In the surface samples (0.5 to 4.5 cm below the sediment-water interface), approximately three-fourths of the particles were larger than 63 μm. The ratios of the coarse particle-size fraction (>63 μm) to the fine particle-size fraction (<63 μm) were approximately equal in samples at depths greater than 10 cm below the sediment-water interface. Chemical analyses were performed on both size fractions of the freeze-dried samples and included determination of total concentrations of aluminum (Al), calcium (Ca), carbon (C), iron (Fe), poorly crystalline Fe, nitrogen (N), P, and titanium (Ti). Total Fe concentrations were largest in sediment from the northern portion of UKL, Howard Bay, and the southern portion of the lake. Concentrations of total Al, Ca, and Ti were largest in sediment from the northern, central, and southernmost portions of the lake and in sediment from Howard Bay. Concentrations of total C and N were largest in sediment from the embayments and in sediment from the northern arm and southern portion of the lake in the general region of Buck Island. Concentrations of total C were larger in the greater than 63-μm particle-size fraction than in the less than 63-μm particle-size fraction.
Sediments were sequentially extracted to determine concentrations of inorganic forms of P, including loosely sorbed P, P associated with poorly crystalline Fe oxides, and P associated with mineral phases. The difference between the concentration of total P and sum of the concentrations of inorganic forms of P is referred to as residual P. Residual P was the largest fraction of P in all of the sediment samples. In UKL, the correlation between concentrations of total P and total Fe in sediment is poor (R2<0.1). The correlation between the concentrations of total P and P associated with poorly crystalline Fe oxides is good (R2=0.43) in surface sediment (0.5-4.5 cm below the sediment water interface) but poor (R2<0.1) in sediments at depths between 10 cm and 30 cm. Phosphorus associated with poorly crystalline Fe oxides is considered bioavailable because it is released when sediment conditions change from oxidizing to reducing, which causes dissolution of Fe oxides.

  9. Effect of Young's modulus on bubble formation and pressure waves during pulsed holmium ablation of tissue phantoms

    NASA Astrophysics Data System (ADS)

    Jansen, E. Duco; Asshauer, Thomas; Frenz, Martin; Delacretaz, Guy P.; Motamedi, Massoud; Welch, Ashley J.

    1995-05-01

    Mechanical injury during pulsed laser ablation of tissue is caused by rapid bubble expansion and collapse or by laser-induced pressure waves. In this study the effect of material elasticity on the ablation process was investigated. Polyacrylamide tissue phantoms with various water concentrations (75-95%) were made. The Young's moduli of the gels were determined by measuring the stress-strain relationship. An optical fiber (200 or 400 μm) was translated into the clear gel and one pulse of holmium:YAG laser radiation was delivered. The laser was operated in either the Q-switched mode (τp = 500 ns, Qp = 14 ± 1 mJ, 200 μm fiber, H0 = 446 mJ/mm²) or the free-running mode (τp = 100 μs, Qp = 200 ± 5 mJ, 400 μm fiber, H0 = 1592 mJ/mm²). Bubble formation inside the gels was recorded using a fast flash photography setup while simultaneously recording pressures with a PVDF needle hydrophone (40 ns risetime) positioned in the gel, approximately 2 mm away from the fiber tip. A thermo-elastic expansion wave was measured only during Q-switched pulse delivery. The amplitude of this wave (≈40 bar at 1 mm from the fiber) did not vary significantly in any of the phantoms investigated. Rapid bubble formation and collapse was observed inside the clear gels. Upon bubble collapse, a pressure transient was emitted; the amplitude of this transient depended strongly on bubble size and geometry. It was found that (1) the bubble was almost spherical for the Q-switched pulse and became more elongated for the free-running pulse, and (2) the maximum bubble size, and thus the collapse amplitude, decreased with an increase in Young's modulus (from 68 ± 11 bar at 1 mm in 95% water gel to 25 ± 10 bar at 1 mm in 75% water gel).

  10. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
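
    As an illustration of the kind of parameter estimation the report describes, a minimal method-of-moments fit of the widely used log-normal size distribution (the data here are synthetic; the report itself covers several analytic models and a graphical matching procedure):

```python
import math
import random

def fit_lognormal(diameters):
    """Method-of-moments fit on log-diameters: returns the count median
    diameter (CMD) and geometric standard deviation (GSD) of a log-normal."""
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))

random.seed(0)
# Synthetic sample: median diameter 0.5 (arbitrary units), GSD 2.0
sample = [random.lognormvariate(math.log(0.5), math.log(2.0))
          for _ in range(20000)]
cmd, gsd = fit_lognormal(sample)
print(round(cmd, 2), round(gsd, 2))  # close to 0.5 and 2.0
```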

  11. Factors associated with success of image-guided tumour biopsies: Results from a prospective molecular triage study (MOSCATO-01).

    PubMed

    Tacher, Vania; Le Deley, Marie-Cécile; Hollebecque, Antoine; Deschamps, Frederic; Vielh, Philippe; Hakime, Antoine; Ileana, Ecaterina; Abedi-Ardekani, Behnoush; Charpy, Cécile; Massard, Christophe; Rosellini, Silvia; Gajda, Dorota; Celebic, Aljosa; Ferté, Charles; Ngo-Camus, Maud; Gouissem, Siham; Koubi-Pick, Valérie; Andre, Fabrice; Vassal, Gilles; Deandreis, Désirée; Lacroix, Ludovic; Soria, Jean-Charles; De Baère, Thierry

    2016-05-01

    MOSCATO-01 is a molecular triage trial based on on-purpose tumour biopsies to perform molecular portraits. We aimed to identify factors associated with high tumour cellularity. Tumour cellularity (percentage of tumour cells in samples, defined at pathology) was evaluated according to patient characteristics, target lesion characteristics, operators' experience and biopsy approach. Among 460 patients enrolled between November 2011 and March 2014, 334 patients (73%) had an image-guided needle biopsy of the primary tumour (N = 38) or a metastatic lesion (N = 296). Biopsies were performed on liver (N = 127), lung (N = 72), lymph nodes (N = 71), bone (N = 11), or another tumour site (N = 53). Eighteen patients (5%) experienced a complication: pneumothorax in 10 patients, treated medically, and haemorrhage in 8, requiring embolisation in 3 cases. Median tumour cellularity was 50% (interquartile range, 30-70%). The molecular analysis was successful in 291/334 cases (87%). On-going chemotherapy, tumour origin (primary versus metastatic), lesion size, tumour growth rate, presence of necrosis on imaging, standardised uptake value, and needle size were not statistically associated with cellularity. Compared to liver or lung biopsies, cellularity was significantly lower in bone and higher in other sites (P < 0.0001). Cellularity significantly increased with the number of collected samples (P < 0.0001) and was higher in contrast-enhanced ultrasound-guided biopsies (P < 0.02). In paired samples, cellularity in central samples was lower than in peripheral samples in 85 cases, equal in 68 and higher in 89. Image-guided biopsy is feasible and safe in cancer patients for molecular screening. Imaging modality, multiple sampling of the lesion, and the organ chosen for biopsy were associated with higher tumour cellularity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Evaluation of the availability of bound analyte for passive sampling in the presence of mobile binding matrix.

    PubMed

    Xu, Jianqiao; Huang, Shuyao; Jiang, Ruifen; Cui, Shufen; Luan, Tiangang; Chen, Guosheng; Qiu, Junlang; Cao, Chenyang; Zhu, Fang; Ouyang, Gangfeng

    2016-04-21

    Elucidating the availability of bound analytes for mass transfer through the diffusion boundary layers (DBLs) adjacent to passive samplers is important for understanding passive sampling kinetics in complex samples, in which the lability factor of the bound analyte in the DBL is an important parameter. In this study, the mathematical expression of the lability factor was deduced by assuming a pseudo-steady state during passive sampling, and the resulting equation indicated that the lability factor is equal to the ratio of normalized concentration gradients between the bound and free analytes. Through the introduction of this expression, the modified effective average diffusion coefficient was shown to be more suitable for describing passive sampling kinetics in the presence of mobile binding matrixes. The lability factors of polycyclic aromatic hydrocarbons (PAHs) bound to sodium dodecylsulphate (SDS) micelles as the binding matrixes were then determined according to the improved theory. The lability factors were observed to decrease with larger binding ratios and smaller micelle sizes, and were successfully used to predict the mass transfer efficiencies of PAHs through DBLs. This study advances understanding of the availability of bound analytes for passive sampling through theoretical improvements and experimental assessments. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Facultative adjustment of the offspring sex ratio and male attractiveness: a systematic review and meta-analysis.

    PubMed

    Booksmythe, Isobel; Mautz, Brian; Davis, Jacqueline; Nakagawa, Shinichi; Jennions, Michael D

    2017-02-01

    Females can benefit from mate choice for male traits (e.g. sexual ornaments or body condition) that reliably signal the effect that mating will have on mean offspring fitness. These male-derived benefits can be due to material and/or genetic effects. The latter include an increase in the attractiveness, hence likely mating success, of sons. Females can potentially enhance any sex-biased benefits of mating with certain males by adjusting the offspring sex ratio depending on their mate's phenotype. One hypothesis is that females should produce mainly sons when mating with more attractive or higher quality males. Here we perform a meta-analysis of the empirical literature that has accumulated to test this hypothesis. The mean effect size was small (r = 0.064-0.095; i.e. explaining <1% of variation in offspring sex ratios) but statistically significant in the predicted direction. It was, however, not robust to correction for an apparent publication bias towards significantly positive results. We also examined the strength of the relationship using different indices of male attractiveness/quality that have been invoked by researchers (ornaments, behavioural displays, female preference scores, body condition, male age, body size, and whether a male is a within-pair or extra-pair mate). Only ornamentation and body size significantly predicted the proportion of sons produced. We obtained similar results regardless of whether we ran a standard random-effects meta-analysis, or a multi-level, Bayesian model that included a correction for phylogenetic non-independence. A moderate proportion of the variance in effect sizes (51.6-56.2%) was due to variation that was not attributable to sampling error (i.e. sample size). Much of this non-sampling error variance was not attributable to phylogenetic effects or high repeatability of effect sizes among species. 
It was approximately equally attributable to differences (occurring for unknown reasons) in effect sizes among and within studies (25.3, 22.9% of the total variance). There were no significant effects of year of publication or two aspects of study design (experimental/observational or field/laboratory) on reported effect sizes. We discuss various practical reasons and theoretical arguments as to why small effect sizes should be expected, and why there might be relatively high variation among studies. Currently, there are no species where replicated, experimental studies show that mothers adjust the offspring sex ratio in response to a generally preferred male phenotype. Ultimately, we need more experimental studies that test directly whether females produce more sons when mated to relatively more attractive males, and that provide the requisite evidence that their sons have higher mean fitness than their daughters. © 2015 Cambridge Philosophical Society.

  14. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications affected by the inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in a lake and the finite-element ray tracing (Bellhop) method, a shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver significantly improves performance and achieves better data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.
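
    A minimal sketch of the single-tap frequency-domain MMSE idea underlying such receivers, assuming a cyclic-prefixed block so the channel diagonalizes under the DFT. The channel taps, block size and SNR below are hypothetical, and the paper's DFE feedback and SCTCM decoding iterations are not modeled:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
h = np.array([1.0, 0.5, 0.25])           # hypothetical multipath channel
x = rng.choice([-1.0, 1.0], size=N)      # one BPSK block
# With a cyclic prefix the channel acts circularly, so the DFT diagonalizes it:
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))

snr = 100.0                               # assumed signal-to-noise ratio
H = np.fft.fft(h, N)
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # per-bin MMSE weights
x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))
assert np.all(np.sign(x_hat) == x)        # symbols recovered (noise-free demo)
```

The MMSE weighting avoids the noise amplification of pure zero-forcing (1/H) in bins where the channel response is weak.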

  15. Persistence of Clostridium botulinum type C toxin in blow fly (Calliphoridae) larvae as a possible cause of avian botulism in spring.

    PubMed

    Hubálek, Z; Halouzka, J

    1991-01-01

    Diverse samples were examined at a site of water-bird mortality caused by Clostridium botulinum type C toxin in southern Moravia (Czechoslovakia). The toxin was detected in high concentrations in mute swan (Cygnus olor) carcasses (≤1 × 10^6 LD50/g) as well as in necrophagous larvae and pupae of the blow flies Lucilia sericata and Calliphora vomitoria (≤1 × 10^5 LD50/g) collected from them. It was detected in lower concentrations (≤1 × 10^3 LD50/g) in other invertebrates (ptychopterid fly larvae, leeches, sow-bugs) associated with these carcasses, and occasionally in water samples (8 LD50/ml) close to the carrion. The toxin was not detected in samples of water, mud or invertebrates collected at a distance of ≥5 m from the carcasses. The toxin-bearing larvae of L. sericata and C. vomitoria, containing 80,000 LD50/g of type C toxin, were exposed in the mud at the study site for 131 days from November to March. Although the toxin activity decreased 25-fold and 40-fold in the two samples of maggots exposed during this period, it remained very high (≤3,200 LD50/g). Birds ingesting a relatively low number of these toxic larvae (or pupae) in the spring could receive a lethal dose of the toxin.

  16. Equalizer tap length requirement for mode group delay-compensated fiber link with weakly random mode coupling.

    PubMed

    Bai, Neng; Li, Guifang

    2014-02-24

    The equalizer tap length requirement is investigated analytically and numerically for a differential modal group delay (DMGD)-compensated fiber link with weakly random mode coupling. Each span of the DMGD-compensated link comprises multiple pairs of fibers with opposite signs of DMGD. The results reveal that under weak random mode coupling, the required tap length of the equalizer is proportional to the modal group delay of a single DMGD-compensated pair, instead of the total modal group delay (MGD) of the entire link. By using small DMGD compensation step sizes, the required tap length (RTL) can potentially be reduced by 2 orders of magnitude.

  17. Do Personality Problems Improve During Psychodynamic Supportive-Expressive Psychotherapy? Secondary Outcome Results From a Randomized Controlled Trial for Psychiatric Outpatients with Personality Disorders

    PubMed Central

    Vinnars, Bo; Thormählen, Barbro; Gallop, Robert; Norén, Kristina; Barber, Jacques P.

    2009-01-01

    Studies involving patients with personality disorders (PD) have not focused on improvement of core aspects of the PD. This paper examines changes in quality of object relations, interpersonal problems, psychological mindedness, and personality traits in a sample of 156 patients with DSM-IV PD diagnoses randomized to either manualized or non-manualized dynamic psychotherapy. Effect sizes adjusted for symptomatic change and reliable change indices were calculated. We found that both treatments were equally effective at reducing personality pathology. Only in neuroticism did the non-manualized group do better during the follow-up period. The largest improvement was found in quality of object relations. For the remaining variables, only small and clinically insignificant magnitudes of change were found. PMID:20161588

  18. Development of a Charge-Implicit ReaxFF Potential for Hydrocarbon Systems.

    PubMed

    Kański, Michał; Maciążek, Dawid; Postawa, Zbigniew; Ashraf, Chowdhury M; van Duin, Adri C T; Garrison, Barbara J

    2018-01-18

    Molecular dynamics (MD) simulations continue to make important contributions to understanding chemical and physical processes. Concomitant with the growth of MD simulations is the need to have interaction potentials that both represent the chemistry of the system and are computationally efficient. We propose a modification to the ReaxFF potential for carbon and hydrogen that eliminates the time-consuming charge equilibration, eliminates the acknowledged flaws of the electronegativity equalization method, includes an expanded training set for condensed phases, has a repulsive wall for simulations of energetic particle bombardment, and is compatible with the LAMMPS code. This charge-implicit ReaxFF potential is five times faster than the conventional ReaxFF potential for a simulation of keV particle bombardment with a sample size of over 800 000 atoms.

  19. Zonal wavefront estimation using an array of hexagonal grating patterns

    NASA Astrophysics Data System (ADS)

    Pathak, Biswajit; Boruah, Bosanta R.

    2014-10-01

    The accuracy of Shack-Hartmann type wavefront sensors depends on the shape and layout of the lenslet array that samples the incoming wavefront. It has been shown that an array of gratings followed by a focusing lens can serve as a substitute for the lenslet array. Taking advantage of the computer-generated holography technique, a diffraction grating of arbitrary aperture shape, size or pattern can be designed with little penalty in complexity. In the present work, such a holographic technique is implemented to design a regular hexagonal grating array with zero dead space between grating patterns, eliminating the possibility of wavefront leakage during wavefront estimation. Tessellation of a regular hexagonal shape, unlike other commonly used shapes, also reduces the estimation error by incorporating a larger number of neighboring slope values at equal separation.

  20. Environmental heterogeneity, dispersal mode, and co-occurrence in stream macroinvertebrates

    PubMed Central

    Heino, Jani

    2013-01-01

    Both environmental heterogeneity and mode of dispersal may affect species co-occurrence in metacommunities. Aquatic invertebrates were sampled in 20–30 streams in each of three drainage basins, differing considerably in environmental heterogeneity. Each drainage basin was further divided into two equally sized sets of sites, again differing profoundly in environmental heterogeneity. Benthic invertebrate data were divided into three groups of taxa based on overland dispersal modes: passive dispersers with aquatic adults, passive dispersers with terrestrial winged adults, and active dispersers with terrestrial winged adults. The co-occurrence of taxa in each dispersal mode group, drainage basin, and heterogeneity site subset was measured using the C-score and its standardized effect size. The probability of finding high levels of species segregation tended to increase with environmental heterogeneity across the drainage basins. These patterns were, however, contingent on both dispersal mode and drainage basin. It thus appears that environmental heterogeneity and dispersal mode interact in affecting co-occurrence in metacommunities, with passive dispersers with aquatic adults showing random patterns irrespective of environmental heterogeneity, and active dispersers with terrestrial winged adults showing increasing segregation with increasing environmental heterogeneity. PMID:23467653

  1. Local order and crystallization of dense polydisperse hard spheres

    NASA Astrophysics Data System (ADS)

    Coslovich, Daniele; Ozawa, Misaki; Berthier, Ludovic

    2018-04-01

    Computer simulations give precious insight into the microscopic behavior of supercooled liquids and glasses, but their typical time scales are orders of magnitude shorter than the experimentally relevant ones. We recently closed this gap for a class of models of size polydisperse fluids, which we successfully equilibrate beyond laboratory time scales by means of the swap Monte Carlo algorithm. In this contribution, we study the interplay between compositional and geometric local orders in a model of polydisperse hard spheres equilibrated with this algorithm. Local compositional order has a weak state dependence, while local geometric order associated to icosahedral arrangements grows more markedly but only at very high density. We quantify the correlation lengths and the degree of sphericity associated to icosahedral structures and compare these results to those for the Wahnström Lennard-Jones mixture. Finally, we analyze the structure of very dense samples that partially crystallized following a pattern incompatible with conventional fractionation scenarios. The crystal structure has the symmetry of aluminum diboride and involves a subset of small and large particles with size ratio approximately equal to 0.5.

  2. Mermin-Wagner theorem, flexural modes, and degraded carrier mobility in two-dimensional crystals with broken horizontal mirror symmetry

    NASA Astrophysics Data System (ADS)

    Fischetti, Massimo V.; Vandenberghe, William G.

    2016-04-01

    We show that the electron mobility in ideal, free-standing two-dimensional "buckled" crystals with broken horizontal mirror (σh) symmetry and Dirac-like dispersion (such as silicene and germanene) is dramatically affected by scattering with the acoustic flexural modes (ZA phonons). This is caused both by the broken σh symmetry and by the diverging number of long-wavelength ZA phonons, consistent with the Mermin-Wagner theorem. Non-σh-symmetric, "gapped" 2D crystals (such as semiconducting transition-metal dichalcogenides with a tetragonal crystal structure) are affected less severely by the broken σh symmetry, but equally seriously by the large population of the acoustic flexural modes. We speculate that reasonable long-wavelength cutoffs needed to stabilize the structure (finite sample size, grain size, wrinkles, defects) or the anharmonic coupling between flexural and in-plane acoustic modes (shown to be effective in mirror-symmetric crystals, like free-standing graphene) may not be sufficient to raise the electron mobility to satisfactory values. Additional effects (such as clamping and phonon stiffening by the substrate and/or gate insulator) may be required.

  3. Effects of spoil texture on growth of K-31 tall fescue

    Treesearch

    David H. Van Lear

    1971-01-01

    Growth of K-31 tall fescue (Festuca arundinacea) was significantly affected by the particle-size distribution, or texture, of four spoils from eastern Kentucky. Growth on spoils having no toxic chemical properties generally was greatest where texture consisted of about equal quantities of soil-size material and a coarser fraction (2 mm. to 6.4 mm.),...

  4. 46 CFR 34.15-5 - Quantity, pipe sizes, and discharge rates-T/ALL.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Carbon Dioxide Extinguishing Systems, Details § 34.15-5 Quantity, pipe sizes, and discharge rates—T/ALL. (a) General. (1) The amount of carbon dioxide required for each space shall be as determined by... carbon dioxide required for each space shall be equal to the gross volume of the space in cubic feet...

  5. 46 CFR 34.15-5 - Quantity, pipe sizes, and discharge rates-T/ALL.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Carbon Dioxide Extinguishing Systems, Details § 34.15-5 Quantity, pipe sizes, and discharge rates—T/ALL. (a) General. (1) The amount of carbon dioxide required for each space shall be as determined by... carbon dioxide required for each space shall be equal to the gross volume of the space in cubic feet...

  6. 46 CFR 34.15-5 - Quantity, pipe sizes, and discharge rates-T/ALL.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Carbon Dioxide Extinguishing Systems, Details § 34.15-5 Quantity, pipe sizes, and discharge rates—T/ALL. (a) General. (1) The amount of carbon dioxide required for each space shall be as determined by... carbon dioxide required for each space shall be equal to the gross volume of the space in cubic feet...

  7. 77 FR 10724 - Western Pacific Pelagic Fisheries; American Samoa Longline Limited Entry Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-23

    ... size class falls below the maximum allowed. Six permits are available, as follows: Four in Class A (vessels less than or equal to 40 ft in overall length); and Two in Class D (over 70 ft in overall length... the highest priority to the applicant (for any vessel size class) with the earliest documented...

  8. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
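
    The partitioning idea rests on the identity that a long filter split into equal blocks equals the sum of suitably delayed short convolutions, each computable with a short FFT. A minimal sketch of that identity only (not the FPBFXLMS update itself, which also blocks the input and adapts the weights):

```python
import numpy as np

def partitioned_fft_convolve(x, h, part_len):
    """Convolve x with a long filter h by splitting h into equal
    partitions, convolving each short partition via FFT, and summing
    the appropriately delayed partial results."""
    parts = [h[i:i + part_len] for i in range(0, len(h), part_len)]
    y = np.zeros(len(x) + len(h) - 1)
    for k, hk in enumerate(parts):
        m = len(x) + len(hk) - 1            # linear-convolution length
        n = 1
        while n < m:                         # next power-of-two FFT size
            n *= 2
        yk = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(hk, n), n)[:m]
        y[k * part_len : k * part_len + m] += yk   # delay by k partitions
    return y

rng = np.random.default_rng(1)
x, h = rng.standard_normal(64), rng.standard_normal(16)
assert np.allclose(partitioned_fft_convolve(x, h, 4), np.convolve(x, h))
```

Splitting the filter keeps each transform short, which is where the reduced block delay of partitioned schemes comes from.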

  9. Results of Characterization and Retrieval Testing on Tank 241-C-109 Heel Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callaway, William S.

Eight samples of heel solids from tank 241-C-109 were delivered to the 222-S Laboratory for characterization and dissolution testing. After being drained thoroughly, one-half to two-thirds of the solids were off-white to tan solids that, visually, were fairly evenly graded in size from coarse silt (30-60 μm) to medium pebbles (8-16 mm). The remaining solids were mostly strongly cemented aggregates ranging from coarse pebbles (16-32 mm) to fine cobbles (6-15 cm) in size. Solid phase characterization and chemical analysis indicated that the air-dry heel solids contained ≈58 wt% gibbsite [Al(OH)₃] and ≈37 wt% natrophosphate [Na₇F(PO₄)₂·19H₂O]. The strongly cemented aggregates were mostly fine-grained gibbsite cemented with additional gibbsite. Dissolution testing was performed on two test samples. One set of tests was performed on large pieces of aggregate solids removed from the heel solids samples. The other set of dissolution tests was performed on a composite sample prepared from well-drained, air-dry heel solids that were crushed to pass a 1/4-in. sieve. The bulk density of the composite sample was 2.04 g/mL. The dissolution tests included water dissolution followed by caustic dissolution testing. In each step of the three-step water dissolution tests, a volume of water approximately equal to 3 times the initial volume of the test solids was added. In each step, the test samples were gently but thoroughly mixed for approximately 2 days at an average ambient temperature of 25 °C. The caustic dissolution tests began with the addition of sufficient 49.6 wt% NaOH to the water dissolution residues to provide ≈3.1 moles of OH for each mole of Al estimated to have been present in the starting composite sample and ≈2.6 moles of OH for each mole of Al potentially present in the starting aggregate sample. 
Metathesis of gibbsite to sodium aluminate was then allowed to proceed over 10 days of gentle mixing of the test samples at temperatures ranging from 26-30 °C. The metathesized sodium aluminate was then dissolved by addition of volumes of water approximately equal to 1.3 times the volumes of caustic added to the test slurries. Aluminate dissolution was allowed to proceed for 2 days at ambient temperatures of ≈29 °C. Overall, the sequential water and caustic dissolution tests dissolved and removed 80.0 wt% of the tank 241-C-109 crushed heel solids composite test sample. The 20 wt% of solids remaining after the dissolution tests were 85-88 wt% gibbsite. If the density of the residual solids was approximately equal to that of gibbsite, they represented ≈17 vol% of the initial crushed solids composite test sample. In the water dissolution tests, addition of a volume of water ≈6.9 times the initial volume of the crushed solids composite was sufficient to dissolve and recover essentially all of the natrophosphate present. The ratio of the weight of water required to dissolve the natrophosphate solids to the estimated weight of natrophosphate present was 8.51. The Environmental Simulation Program (OLI Systems, Inc., Morris Plains, New Jersey) predicts that an 8.36 w/w ratio would be required to dissolve the estimated weight of natrophosphate present in the absence of other components of the heel solids. Only minor amounts of Al-bearing solids were removed from the composite solids in the water dissolution tests. The caustic metathesis/aluminate dissolution test sequence, executed at temperatures ranging from 27-30 °C, dissolved and recovered ≈69 wt% of the gibbsite estimated to have been present in the initial crushed heel solids composite. This level of gibbsite recovery is consistent with that measured in previous scoping tests on the dissolution of gibbsite in strong caustic solutions. 
Overall, the sequential water and caustic dissolution tests dissolved and removed 80.3 wt% of the tank 241-C-109 aggregate solids test sample. The residual solids were 92-95 wt% gibbsite. Only a minor portion (≈4.5 wt%) of the aggregate solids was dissolved and recovered in the water dissolution test. Other than some smoothing caused by continuous mixing, the aggregates were essentially unaffected by the water dissolution tests. During the caustic metathesis/aluminate dissolution test sequence, ≈81 wt% of the gibbsite estimated to have been present in the aggregate solids was dissolved and recovered. The pieces of aggregate were significantly reduced in size but persisted as distinct pieces of solids. The increased level of gibbsite recovery, as compared to that for the crushed heel solids composite, suggests that the way the gibbsite solids and caustic solution are mixed is a key determinant of the overall efficiency of gibbsite dissolution and recovery. The liquids recovered after the caustic dissolution tests on the crushed solids composite and the aggregate solids were observed for 170 days. No precipitation of gibbsite was observed. The distribution of particle sizes in the residual solids recovered following the dissolution tests on the crushed heel solids composite was characterized. Wet sieving indicated that 21.4 wt% of the residual solids were >710 μm in size, and laser light scattering indicated that the median equivalent spherical diameter in the <710-μm solids was 35 μm. The settling behavior of the residual solids following the large-scale dissolution tests was also studied. When dispersed at a concentration of ≈1 vol% in water, ≈24 wt% of the residual solids settled at a rate >0.43 in./s; ≈68 wt% settled at rates between 0.02 and 0.43 in./s; and ≈7 wt% settled slower than 0.02 in./s.

  10. Protein Crystals Grow Purer in Space: Physics of Phenomena

    NASA Technical Reports Server (NTRS)

    Chernov, Alex A.

    2000-01-01

This presentation will summarize the quantitative experimental and theoretical results obtained by B.R. Thomas, P.G. Vekilov, D.C. Carter, A.M. Holmes, W.K. Witherow and the author, a team with expertise in physics, biochemistry, crystallography and engineering. Impurities inhomogeneously trapped by a growing crystal - e.g., producing sectorial structure and/or striations - may induce macroscopic internal stress in it if an impurity molecule has a slightly (less than 10%) different shape or volume than the regular one(s) it replaces. We tested for the first time the plasticity, and measured the Young's modulus E, of triclinic, non-cross-linked lysozyme by the three-point bending technique. Triclinic lysozyme crystals are purely elastic, with E ≈ 1.5×10⁹ dyn/cm². The strength limit σc ≈ 10⁻³E ≈ Eεc, where σc and εc are the critical stress and strain, respectively. Scaling E and σc with the lattice spacing suggests similar binding stiffness in inorganic and biomolecular crystals. The inhomogeneous internal stress may be resolved in these brittle crystals either by cracking or by the creation of misoriented mosaic blocks during, not after, growth. If each impurity molecule induces an elementary lattice strain ε₀ ≈ 3×10⁻² (the maximal elementary strain that can arise at a supersaturation Δμ/kT ≈ 2) and the macroscopic molecular concentration difference between subsequent macrolayers or growth sectors is ∂C ≈ 5×10⁻³, the internal strain is ε ≈ ε₀∂C ≈ 10⁻⁴. The mosaic misorientation resolving such strain is approximately 30 arcsec. A tenfold increase in impurity concentration may cause cracking. 
Estimates of stress in an isometric sectorial crystal show that lysozyme crystals can tolerate the stress up to a size of 0.5 mm. Dissolving a mosaic lysozyme crystal shows that mosaicity is indeed absent below that size.
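The internal-strain estimate in the abstract is simple arithmetic and can be checked directly (values taken from the text above; the cracking threshold of order 10⁻³ is the critical strain quoted there):

```python
# Internal strain from inhomogeneous impurity trapping (values from the text).
eps0 = 3e-2   # elementary strain induced per impurity molecule
dC = 5e-3     # concentration difference between macrolayers / growth sectors
eps = eps0 * dC
print(eps)    # ≈ 1.5e-4, i.e. of order 10⁻⁴ as quoted

# A tenfold increase in impurity concentration pushes the strain toward the
# critical strain of order 10⁻³, consistent with the stated cracking risk.
print(10 * eps)
```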

  11. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
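The abstract gives no formulas, but the flavour of an inequality-constrained test on variances can be illustrated with the encompassing-prior approach, in which the Bayes factor of H1: σ₁² < σ₂² against the unconstrained model is the ratio of posterior to prior mass satisfying the constraint. This is a generic Monte Carlo sketch (Jeffreys-type posterior, symmetric prior), not the authors' fractional Bayes factor:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(0, 1.0, 50)   # group 1: smaller true standard deviation
x2 = rng.normal(0, 2.0, 50)   # group 2: larger true standard deviation

def posterior_var_draws(x, ndraw, rng):
    # Jeffreys prior p(sigma^2) ~ 1/sigma^2 gives an inverse-gamma posterior:
    # sigma^2 | data ~ InvGamma((n-1)/2, SS/2), with SS = sum((x - mean)^2).
    n, ss = len(x), np.sum((x - x.mean()) ** 2)
    return ss / (2.0 * rng.gamma((n - 1) / 2.0, 1.0, size=ndraw))

s1 = posterior_var_draws(x1, 100_000, rng)
s2 = posterior_var_draws(x2, 100_000, rng)

post_mass = np.mean(s1 < s2)   # posterior probability of the constraint
prior_mass = 0.5               # any prior symmetric in the two variances
bf = post_mass / prior_mass    # BF of H1: var1 < var2 vs the unconstrained model
print(bf)                      # approaches 2 when the data strongly support H1
```

The upper bound of 2 for this Bayes factor is exactly the "parsimony" issue the adjusted fractional Bayes factor in the abstract is designed to handle properly.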

  12. Parallel-Processing Equalizers for Multi-Gbps Communications

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Ghuman, Parminder; Hoy, Scott; Satorius, Edgar H.

    2004-01-01

Architectures have been proposed for the design of frequency-domain least-mean-square complex equalizers that would be integral parts of parallel-processing digital receivers of multi-gigahertz radio signals and other quadrature-phase-shift-keying (QPSK) or 16-quadrature-amplitude-modulation (16-QAM) data signals at rates of multiple gigabits per second. "Equalizers" as used here denotes receiver subsystems that compensate for distortions in the phase and frequency responses of the broad-band radio-frequency channels typically used to convey such signals. The proposed architectures are suitable for realization in very-large-scale integrated (VLSI) circuitry and, in particular, complementary metal oxide semiconductor (CMOS) application-specific integrated circuits (ASICs) operating at frequencies lower than the modulation symbol rates. A digital receiver of the type to which the proposed architecture applies (see Figure 1) would include an analog-to-digital converter (A/D) operating at a rate, fs, of 4 samples per symbol period. To obtain the high speed necessary for sampling, the A/D and a 1:16 demultiplexer immediately following it would be constructed as GaAs integrated circuits. The parallel-processing circuitry downstream of the demultiplexer, including a demodulator followed by an equalizer, would operate at a rate of only fs/16 (in other words, at 1/4 of the symbol rate). The output from the equalizer would be four parallel streams of in-phase (I) and quadrature (Q) samples.
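The adaptive element such an architecture parallelizes, a frequency-domain LMS equalizer with one complex weight per bin, all bins updated independently and in parallel, can be sketched as follows (hypothetical parameters, noiseless per-bin channel for clarity; not the proposed VLSI design):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # FFT size (number of frequency bins)
# Per-bin channel response: mild amplitude and phase distortion.
H = rng.normal(1, 0.2, N) * np.exp(1j * rng.uniform(-0.5, 0.5, N))
W = np.ones(N, dtype=complex)            # equalizer: one complex tap per bin
mu = 0.1                                 # LMS step size

for _ in range(2000):                    # training blocks
    bits = rng.integers(0, 2, (N, 2))
    X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK block
    Y = H * X                            # received block after the channel
    E = X - W * Y                        # error vs. the known training symbols
    W += mu * np.conj(Y) * E             # per-bin LMS update, all bins in parallel

# After convergence each weight inverts its own bin of the channel: W ≈ 1/H.
print(np.max(np.abs(W - 1 / H)))
```

Because each bin's update touches only that bin, the N updates map naturally onto parallel hardware running far below the symbol rate, which is the point of the architecture described above.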

  13. Recent Developments in Transition-Edge Strip Detectors for Solar X-Rays

    NASA Technical Reports Server (NTRS)

    Rausch, Adam J.; Deiker, Steven W.; Hilton, Gene; Irwin, Kent D.; Martinez-Galarce, Dennis S.; Shing, Lawrence; Stern, Robert A.; Ullom, Joel N.; Vale, Leila R.

    2008-01-01

LMSAL and NIST are developing position-sensitive x-ray strip detectors based on Transition Edge Sensor (TES) microcalorimeters optimized for solar physics. By combining high spectral (E/ΔE ≈ 1600) and temporal (single-photon Δt ≈ 10 μs) resolutions with imaging capabilities, these devices will be able to study high-temperature (>10 MK) x-ray lines as never before. Diagnostics from these lines should provide significant new insight into the physics of both microflares and the early stages of flares. Previously, the large size of traditional TESs, along with the heat loads associated with wiring large arrays, presented obstacles to using these cryogenic detectors for solar missions. Implementing strip detector technology at small scales, however, addresses both issues: here, a line of substantially smaller effective pixels requires only two TESs, decreasing both the total array size and the wiring requirements for the same spatial resolution. Early results show energy resolutions of ΔE(FWHM) ≈ 30 eV and spatial resolutions of approximately 10-15 μm, suggesting the strip-detector concept is viable.

  14. Measurement of carbon nanotube microstructure relative density by optical attenuation and observation of size-dependent variations.

    PubMed

    Park, Sei Jin; Schmidt, Aaron J; Bedewy, Mostafa; Hart, A John

    2013-07-21

    Engineering the density of carbon nanotube (CNT) forest microstructures is vital to applications such as electrical interconnects, micro-contact probes, and thermal interface materials. For CNT forests on centimeter-scale substrates, weight and volume can be used to calculate density. However, this is not suitable for smaller samples, including individual microstructures, and moreover does not enable mapping of spatial density variations within the forest. We demonstrate that the relative mass density of individual CNT microstructures can be measured by optical attenuation, with spatial resolution equaling the size of the focused spot. For this, a custom optical setup was built to measure the transmission of a focused laser beam through CNT microstructures. The transmittance was correlated with the thickness of the CNT microstructures by Beer-Lambert-Bouguer law to calculate the attenuation coefficient. We reveal that the density of CNT microstructures grown by CVD can depend on their size, and that the overall density of arrays of microstructures is affected significantly by run-to-run process variations. Further, we use the technique to quantify the change in CNT microstructure density due to capillary densification. This is a useful and accessible metrology technique for CNTs in future microfabrication processes, and will enable direct correlation of density to important properties such as stiffness and electrical conductivity.
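The Beer-Lambert-Bouguer relation underlying the measurement is simple enough to state as code. This is a minimal sketch with made-up numbers, not the paper's calibration:

```python
import math

def attenuation_coefficient(transmittance, thickness_um):
    """Beer-Lambert-Bouguer: I/I0 = exp(-alpha * t)  =>  alpha = -ln(I/I0) / t."""
    return -math.log(transmittance) / thickness_um

# Hypothetical reading: 2% of the laser power transmitted through a
# 40-um-thick CNT micropillar.
alpha = attenuation_coefficient(0.02, 40.0)          # per micrometer
print(alpha)

# For equal thickness, alpha scales with absorber density, so the ratio of
# attenuation coefficients gives the relative density of two structures.
alpha_densified = attenuation_coefficient(0.02 ** 2, 40.0)  # hypothetical denser sample
print(alpha_densified / alpha)                       # twice the relative density
```

Mapping alpha across the focused laser spot is what gives the spatial density resolution described in the abstract.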

  15. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    USGS Publications Warehouse

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.
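The central point, that a peak count misses individuals absent on the survey date while a superpopulation estimate targets everyone ever present, can be shown with a toy simulation (hypothetical numbers; the real method fits a capture-recapture superpopulation model, e.g. the Schwarz-Arnason formulation, to resighting histories):

```python
import numpy as np

rng = np.random.default_rng(3)
n_nests = 1000                                # true superpopulation size
starts = rng.integers(0, 80, n_nests)         # asynchronous nest initiation days
durations = rng.integers(30, 50, n_nests)     # days each nest stays active

# Count active nests on each of 5 evenly spaced survey dates in the season.
survey_days = np.linspace(10, 110, 5).astype(int)
counts = [int(np.sum((starts <= d) & (d < starts + durations))) for d in survey_days]

peak_count = max(counts)
print(peak_count, n_nests)   # the peak single-day count misses many nests
```

With this degree of asynchrony the peak count captures only a fraction of the nests that were ever active, which is why the superpopulation estimates above exceed peak aerial counts by 47-382%.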

  16. Particle interaction of lubricated or unlubricated binary mixtures according to their particle size and densification mechanism.

    PubMed

    Di Martino, Piera; Joiris, Etienne; Martelli, Sante

    2004-09-01

The aim of this study is to assess an experimental approach to the technological development of a direct-compression formulation. A simple formula was considered, composed of an active ingredient, a diluent and a lubricant. The active ingredient and diluent were selected as examples according to their typical densification mechanisms: nitrofurantoin, a fragmenting material, and microcrystalline cellulose (Vivapur), a typical visco-elastic material displaying equally good binding and disintegrant properties. For each ingredient, samples of different particle size distributions were selected. Initially, the tabletability of the pure materials was studied on a rotary press without magnesium stearate. Vivapur tabletability decreases with increasing particle size. The addition of magnesium stearate as a lubricant decreases the tabletability of Vivapur of larger particle size, while leaving that of Vivapur of smaller particle size unmodified. Differences in tabletability can be related to differences in particle-particle interactions: for Vivapur of larger particle size (Vivapur 200, 102 and 101), the lower surface area provides less surface available for bonds, while for Vivapur of smaller particle size (99 and 105), the greater surface area allows high particle proximity, favouring particle cohesivity. Nitrofurantoin shows great differences in compression behaviour according to its particle size distribution. Large crystals show poorer tabletability than fine crystals, further decreased by lubricant addition. The poor tabletability of the large crystals is due to their poor compactibility, in spite of their high compressibility and intrinsic plastic deformability; in fact, despite the high densification tendency, the bonds involved are very weak. Nitrofurantoin samples were then mixed with Vivapurs in different proportions. The compression behaviour of the binary mixes (tabletability and compressibility) was then evaluated according to the diluent proportion in the mixes. 
The mix of nitrofurantoin large crystals or fine crystals with microcrystalline cellulose showed a negative interaction in all proportions, whatever the particle sizes. Lubricant addition induced a positive interaction with Vivapur of larger particle size distribution (200, 102 and 101), favouring higher particle adhesivity, while leaving that of Vivapurs of smaller particle size (105 and 99) unaltered. In short, when cohesive forces are predominant (Vivapur 105 and 99), the establishment of adhesive bonds between nitrofurantoin and Vivapur remains unnoticed; on the contrary, when cohesion bonds between microcrystalline cellulose particles are weakened by the presence of magnesium stearate, adhesion bonds between particles of different nature become evident, leading to a positive interaction.

  17. Is Managed Care Leading to Consolidation in Health-care Markets?

    PubMed Central

Dranove, David; Simon, Carol J; White, William D

    2002-01-01

Objective To determine the extent to which managed care has led to consolidation among hospitals and physicians. Data Sources We use data from the American Hospital Association, American Medical Association, and government censuses. Study Design Two-stage least squares regression analysis examines how cross-section variation in managed care penetration affects provider consolidation, while controlling for the endogeneity of managed care penetration. Specifically, we examine inpatient hospital markets and physician practice size in large metropolitan areas. Data Collection Methods All data are from secondary sources, merged at the level of the Primary Metropolitan Statistical Area. Principal Findings We find that higher levels of local managed care penetration are associated with substantial increases in consolidation in hospital and physician markets. In the average market (managed care penetration equaled 34 percent in 1994), managed care was associated with an increase in the Herfindahl index of .054 between 1981 and 1994, moving from .096 in 1981 to .154. This is equivalent to moving from 10.4 equal-sized hospitals to 6.5 equal-sized hospitals. In the physician marketplace, we estimate that at the mean, managed care resulted in a 14-percentage-point decrease in the share of physicians in solo practice between 1986 and 1995. This implies a decrease in the percentage of doctors in solo practice from 38 percent in 1986 to 24 percent by 1995. PMID:12132596
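The "equal-sized hospitals" figures above are the numbers-equivalent of the Herfindahl index: a market of n equal-sized firms has H = 1/n, so the equivalent number of firms is 1/H. The abstract's values check out:

```python
# Numbers-equivalent of a Herfindahl index: H = 1/n for n equal-sized firms.
def equivalent_firms(hhi):
    return 1.0 / hhi

print(round(equivalent_firms(0.096), 1))  # 1981 hospital market -> 10.4
print(round(equivalent_firms(0.154), 1))  # 1994 hospital market -> 6.5
```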

  18. Moments of catchment storm area

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Wang, Q.

    1985-01-01

    The portion of a catchment covered by a stationary rainstorm is modeled by the common area of two overlapping circles. Given that rain occurs within the catchment and conditioned by fixed storm and catchment sizes, the first two moments of the distribution of the common area are derived from purely geometrical considerations. The variance of the wetted fraction is shown to peak when the catchment size is equal to the size of the predominant storm. The conditioning on storm size is removed by assuming a probability distribution based upon the observed fractal behavior of cloud and rainstorm areas.
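The "common area of two overlapping circles" used as the model is the standard lens-area computation; a sketch with the degenerate cases handled is:

```python
import math

def overlap_area(r, R, d):
    """Area common to two circles of radii r and R whose centers are d apart."""
    if d >= r + R:                     # disjoint: storm entirely off the catchment
        return 0.0
    if d <= abs(R - r):                # one circle entirely inside the other
        return math.pi * min(r, R) ** 2
    a = r * r * math.acos((d * d + r * r - R * R) / (2 * d * r))
    b = R * R * math.acos((d * d + R * R - r * r) / (2 * d * R))
    c = 0.5 * math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))
    return a + b - c

# Wetted fraction of a catchment (radius R) covered by a storm (radius r):
R_catch, r_storm, d = 10.0, 10.0, 5.0
print(overlap_area(r_storm, R_catch, d) / (math.pi * R_catch ** 2))
```

Averaging this wetted fraction over a random center separation d (and, as in the paper, over a storm-size distribution) yields the moments discussed above; the variance peak at equal storm and catchment sizes can be reproduced by Monte Carlo over d.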

  19. Effects of Shapes of Solute Molecules on Diffusion: A Study of Dependences on Solute Size, Solvent, and Temperature.

    PubMed

    Chan, T C; Li, H T; Li, K Y

    2015-12-24

    Diffusivities of basically linear, planar, and spherical solutes at infinite dilution in various solvents are studied to unravel the effects of solute shapes on diffusion. On the basis of the relationship between the reciprocal of diffusivity and the molecular volume of solute molecules with similar shape in a given solvent at constant temperature, the diffusivities of solutes of equal molecular volume but different shapes are evaluated and the effects due to different shapes of two equal-sized solute molecules on diffusion are determined. It is found that the effects are dependent on the size of the solute pairs studied. Evidence of the dependence of the solute-shape effects on solvent properties is also demonstrated and discussed. Here, some new diffusion data of aromatic compounds in methanol at different temperatures are reported. The result for methanol in this study indicates that the effects of solute shape on diffusivity are only weakly dependent on temperature.

  20. Surface composition of alloys

    NASA Astrophysics Data System (ADS)

    Sachtler, W. M. H.

    1984-11-01

In equilibrium, the composition of the surface of an alloy will, in general, differ from that of the bulk. The broken-bond model is applicable to alloys with atoms of virtually equal size. If the heat of alloy formation is zero, the component with the lower heat of atomization is found enriched in the surface. If both partners have equal heats of sublimation, the surface of a diluted alloy is enriched with the minority component. Size effects can enhance or weaken the electronic effects. In general, lattice strain can be relaxed by precipitating atoms of deviating size on the surface. Two-phase alloys are described by the "cherry model": one alloy phase, the "kernel", is surrounded by another, the "flesh", and the surface of the outer phase, the "skin", displays a deviating surface composition as in monophasic alloys. In the presence of molecules capable of forming chemical bonds with individual metal atoms, "chemisorption-induced surface segregation" can be observed at low temperatures, i.e. the surface becomes enriched with the metal forming the stronger chemisorption bonds.

  1. Light scattering by lunar-like particle size distributions

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1991-01-01

A fundamental input to models of light scattering from planetary regoliths is the mean phase function of the regolith particles. Using the known size distribution for typical lunar soils, the mean phase function and mean linear polarization for a regolith volume element of spherical particles of any composition were calculated from Mie theory. The two contour plots given here summarize the changes in the mean phase function and linear polarization with changes in the real part of the complex index of refraction, n - ik, for k = 0.01, the visible wavelength 0.55 μm, and the particle size distribution of the typical mature lunar soil 72141. A second figure is a similar index-phase surface, except with k = 0.1. The index-phase surfaces from this survey are a first-order description of scattering by lunar-like regoliths of spherical particles of arbitrary composition. They form the basis of functions that span a large range of parameter-space.

  2. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    NASA Astrophysics Data System (ADS)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures constituted by bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), spherical particles numerically simulated (20 values), round natural particles (125 values) and crushed particles (335 values) with correlation coefficients equal to respectively 99.0%, 98.7%, 97.8%, 96.4% and mean deviations equal to respectively 0.007, 0.006, 0.007, 0.010.

  3. [Effective size of the early-run sockeye salmon Oncorhynchus nerka population of Lake Azabach'e, Kamchatka Peninsula: evaluation of the effect of interaction between subpopulations within a subdivided population].

    PubMed

    Efremov, V V

    2005-05-01

The effect of subdivision on the effective size (Ne) of the early-run sockeye salmon Oncorhynchus nerka population of Lake Azabach'e (Kamchatka Peninsula) has been studied. The mode of this effect is determined by the relative productivity of the subpopulations, and its magnitude by the rate of individual migration among subpopulations and by genetic differentiation. If the contributions of the subpopulations (offspring numbers) differ, genetic differentiation can reduce the Ne of the subdivided population. At equal subpopulation contributions, genetic differentiation always increases the Ne of the subdivided population in comparison with a panmictic population. We have found that all sockeye salmon subpopulations of Lake Azabach'e produce equal offspring numbers contributing to the next generation. The genetic differentiation between sockeye salmon subpopulations is low, and the subdivision increases the Ne of the early-run race, with reference to the sum of the effective sizes of the subpopulations, by as little as 2%.
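Under equal subpopulation contributions, the textbook island-model approximation (due to Wright) is that subdivision inflates effective size as Ne ≈ N_total / (1 − F_ST), so a 2% inflation corresponds to very weak differentiation. A back-of-the-envelope check using that standard formula (an assumption here, not taken from this paper) with hypothetical numbers:

```python
# Wright's approximation for a subdivided population with equal
# subpopulation contributions: Ne_subdivided ≈ Ne_total / (1 - Fst).
def subdivided_ne(ne_sum, fst):
    return ne_sum / (1.0 - fst)

ne_sum = 10_000    # hypothetical sum of subpopulation effective sizes
fst = 0.02         # weak differentiation, of the order reported for this system
inflation = subdivided_ne(ne_sum, fst) / ne_sum - 1.0
print(round(100 * inflation, 1))   # ~2% increase in Ne
```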

  4. Cytotoxicity of ZnO Nanoparticles Can Be Tailored by Modifying Their Surface Structure: A Green Chemistry Approach for Safer Nanomaterials.

    PubMed

    Punnoose, Alex; Dodge, Kelsey; Rasmussen, John W; Chess, Jordan; Wingett, Denise; Anders, Catherine

    2014-07-07

    ZnO nanoparticles (NP) are extensively used in numerous nanotechnology applications; however, they also happen to be one of the most toxic nanomaterials. This raises significant environmental and health concerns and calls for the need to develop new synthetic approaches to produce safer ZnO NP, while preserving their attractive optical, electronic, and structural properties. In this work, we demonstrate that the cytotoxicity of ZnO NP can be tailored by modifying their surface-bound chemical groups, while maintaining the core ZnO structure and related properties. Two equally sized (9.26 ± 0.11 nm) ZnO NP samples were synthesized from the same zinc acetate precursor using a forced hydrolysis process, and their surface chemical structures were modified by using different reaction solvents. X-ray diffraction and optical studies showed that the lattice parameters, optical properties, and band gap (3.44 eV) of the two ZnO NP samples were similar. However, FTIR spectroscopy showed significant differences in the surface structures and surface-bound chemical groups. This led to major differences in the zeta potential, hydrodynamic size, photocatalytic rate constant, and more importantly, their cytotoxic effects on Hut-78 cancer cells. The ZnO NP sample with the higher zeta potential and catalytic activity displayed a 1.5-fold stronger cytotoxic effect on cancer cells. These results suggest that by modifying the synthesis parameters/conditions and the surface chemical structures of the nanocrystals, their surface charge density, catalytic activity, and cytotoxicity can be tailored. This provides a green chemistry approach to produce safer ZnO NP.

  5. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BCa bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However, for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to the choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
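A minimal illustration of parametric modelling of skewed cost data, comparing the nonparametric sample mean with the log-normal maximum-likelihood estimate of the population mean (a generic sketch with simulated data, not the authors' data sets):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 6.0, 1.0
costs = rng.lognormal(mu, sigma, 100)        # right-skewed "cost" data

sample_mean = costs.mean()                   # nonparametric estimate

# Log-normal MLE of the population mean: exp(mu_hat + sigma_hat^2 / 2),
# with mu_hat and sigma_hat fitted to log(costs).
logs = np.log(costs)
mu_hat, sigma_hat = logs.mean(), logs.std()  # MLE uses the biased (1/n) std
lognormal_mean = np.exp(mu_hat + sigma_hat ** 2 / 2)

true_mean = np.exp(mu + sigma ** 2 / 2)
print(sample_mean, lognormal_mean, true_mean)
```

When the log-normal model is correct, its MLE of the mean is more efficient than the sample mean; when the model is wrong, the two can diverge, which is exactly the sensitivity to model choice the paper warns about.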

  6. Major Geomagnetic Storms (Dst less than or equal to -100 nT) Generated by Corotating Interaction Regions

    NASA Technical Reports Server (NTRS)

    Richardson, I. G.; Webb, D. F.; Zhang, J.; Berdichevsky, B. D.; Biesecker, D. A.; Kasper, J. C.; Kataoka, R.; Steinberg, J. T.; Thompson, B. J.; Wu, C.-C.

    2006-01-01

    Seventy-nine major geomagnetic storms (minimum Dst less than or equal to -100 nT) observed in 1996 to 2004 were the focus of a Living with a Star Coordinated Data-Analysis Workshop (CDAW) in March, 2005. In 9 cases, the storm driver appears to have been purely a corotating interaction region (CIR) without any contribution from coronal mass ejection-related material (interplanetary coronal mass ejections, ICMEs). These storms were generated by structures within CIRs located before and/or after the stream interface that included persistently southward magnetic fields for intervals of several hours. We compare their geomagnetic effects with those of 159 CIRs observed during 1996 - 2005. The major storms form the extreme tail of a continuous distribution of CIR geoeffectiveness which peaks at Dst approx. -40 nT but is subject to a prominent seasonal variation of approximately 40 nT, ordered by the spring and fall equinoxes and the solar wind magnetic field direction towards or away from the Sun. The O'Brien and McPherron [2000] equations, which estimate Dst by integrating the incident solar wind electric field and incorporating a ring current loss term, largely account for the variation in storm size. They tend to underestimate the size of the larger CIR-associated storms by approximately 20 nT, which suggests that injection into the ring current may be more efficient than expected in such storms. Four of the nine major storms in 1996 - 2004 occurred during a period of less than three solar rotations in September - November, 2002, also the time of maximum mean IMF and solar magnetic field intensity during the current solar cycle. The maximum CIR-storm strength found in our sample of events, plus an additional 23 probable CIR-associated Dst less than or equal to -100 nT storms in 1972 - 1995, is Dst = -161 nT. This is consistent with the maximum storm strength (Dst approx. -180 nT) expected from the O'Brien and McPherron equations for the typical range of solar wind electric fields associated with CIRs. This suggests that CIRs alone are unlikely to generate geomagnetic storms that exceed these levels.
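
    The injection-decay scheme referred to above drives a pressure-corrected Dst* with the solar wind electric field and relaxes it through a ring-current loss term. A minimal sketch follows, using the commonly quoted O'Brien-McPherron coefficient values as assumptions; it is not a reproduction of the workshop's analysis.

```python
# Illustrative injection-decay Dst model (assumed coefficients of the
# commonly quoted O'Brien-McPherron form; units: nT, mV/m, hours).
import math

def dst_step(dst, vbs, dt=1.0):
    """Advance Dst* by dt hours given solar wind electric field vbs (mV/m)."""
    Ec = 0.49                                     # injection cutoff (mV/m)
    q = -4.4 * (vbs - Ec) if vbs > Ec else 0.0    # injection rate (nT/h)
    tau = 2.40 * math.exp(9.74 / (4.69 + vbs))    # decay time (h)
    return dst + dt * (q - dst / tau)

dst = 0.0
for _ in range(12):          # 12 h of steady 4 mV/m southward driving
    dst = dst_step(dst, 4.0)
dst_min = dst                # storm peak (most negative Dst) ends the driving
for _ in range(24):          # 24 h of recovery with no driving
    dst = dst_step(dst, 0.0)
```

    With this kind of driver, Dst saturates toward a driving-dependent floor, which is the sense in which CIR-class electric fields bound the achievable storm strength.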

  7. Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach

    PubMed Central

    Fredenberg, Erik; Danielsson, Mats; Stayman, J. Webster; Siewerdsen, Jeffrey H.; Åslund, Magnus

    2012-01-01

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. 
Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption contrast. Talbot interferometry with a plane wave increased detectability for 0.1-mm tumor and glandular structures by a factor of 3–4 at equal dose, whereas absorption contrast was the preferred method for structures larger than ∼0.5 mm. Microcalcifications are small, but differ from soft tissue in atomic number more than density, which favors absorption contrast, and Talbot interferometry was barely beneficial at all within the resolution limit of the system. Further, Talbot interferometry favored detection of “sharp” as opposed to “smooth” structures, and improved performance in discrimination tasks by about 50% relative to detection tasks. The technique was relatively insensitive to spectrum bandwidth, whereas the projected source size was more important. If equal photon economy was added as a restriction, phase-contrast efficiency was reduced so that the benefit for detection tasks almost vanished compared to absorption contrast, but discrimination tasks were still improved by close to a factor of 2 at the resolution limit. Conclusions: Cascaded-systems analysis enables comprehensive and intuitive evaluation of phase-contrast efficiency in relation to absorption contrast under requirements of equal dose, equal geometry, and equal photon economy. The benefit of Talbot interferometry was highly dependent on task, in particular detection versus discrimination tasks, and target size, shape, and material. Requiring equal photon economy weakened the benefit of Talbot interferometry in mammography. PMID:22957600

  8. Professional hazards? The impact of models' body size on advertising effectiveness and women's body-focused anxiety in professions that do and do not emphasize the cultural ideal of thinness.

    PubMed

    Dittmar, Helga; Howard, Sarah

    2004-12-01

    Previous experimental research indicates that the use of average-size women models in advertising prevents the well-documented negative effect of thin models on women's body image, while such adverts are perceived as equally effective (Halliwell & Dittmar, 2004). The current study extends this work by: (a) seeking to replicate the finding of no difference in advertising effectiveness between average-size and thin models; (b) examining level of ideal-body internalization as an individual, internal factor that moderates women's vulnerability to thin media models, in the context of (c) comparing women in professions that differ radically in their focus on, and promotion of, the sociocultural ideal of thinness for women: employees in fashion advertising (n = 75) and teachers in secondary schools (n = 75). Adverts showing thin, average-size and no models were perceived as equally effective. High internalizers in both groups of women felt worse about their body image after exposure to thin models compared to other images. Profession affected responses to average-size models. Teachers reported significantly less body-focused anxiety after seeing average-size models compared to no models, while there was no difference for fashion advertisers. This suggests that women in professional environments with less focus on appearance-related ideals can experience increased body-esteem when exposed to average-size models, whereas women in appearance-focused professions report no such relief.

  9. One session treatment for specific phobias in children: Comorbid anxiety disorders and treatment outcome.

    PubMed

    Ryan, Sarah M; Strege, Marlene V; Oar, Ella L; Ollendick, Thomas H

    2017-03-01

    One-Session Treatment (OST) for specific phobias has been shown to be effective in reducing phobia severity; however, the effect of different types of co-occurring anxiety disorders on OST outcomes is unknown. The present study examined (1) the effects of co-occurring generalized anxiety disorder (GAD), social anxiety disorder (SAD), or another non-targeted specific phobia (OSP) on the efficacy of OST for specific phobias, and (2) the effects of OST on these co-occurring disorders following treatment. Three groups of 18 youth (7-15 years) with a specific phobia and comorbid GAD, SAD, or OSP were matched on age, gender, and phobia type. Outcome measures included diagnostic status and severity, and clinician rated improvement. All groups demonstrated an improvement in their specific phobia following treatment. Treatment was equally effective regardless of co-occurring anxiety disorder. In addition, comorbid anxiety disorders improved following OST; however, this effect was not equal across groups. The SAD group showed poorer improvement in their comorbid disorder than the GAD group post-treatment. However, the SAD group continued to improve and this differential effect was not evident six-months following treatment. The current study sample was small, with insufficient power to detect small and medium effect sizes. Further, the sample only included a portion of individuals with primary GAD or SAD, which may have attenuated the findings. The current study demonstrated that co-occurring anxiety disorders did not interfere with phobia treatment. OST, despite targeting a single specific phobia type, significantly reduced comorbid symptomatology across multiple anxiety disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Application of a TiO2 nanocomposite in earplugs, a case study of noise reduction.

    PubMed

    Ibrahimi Ghavamabadi, Leila; Fouladi Dehaghi, Behzad; Hesampour, Morteza; Ahmadi Angali, Kambiz

    2018-03-13

    Use of hearing protection devices (HPDs) has become necessary when other control measures cannot reduce noise to a safe and standard level. In most countries, more effective hearing protection devices are in demand. The aim of this study was to examine the effects of titanium dioxide (TiO2) nanoparticles on noise reduction efficiency in a polyvinyl chloride (PVC) earplug. An S-60 type PVC polymer as the main matrix and TiO2 nanoparticles of 30 nm size were used. The PVC/TiO2 nanocomposite was mixed at a temperature of 160 °C and 40 revolutions per minute (rpm), and the samples were prepared with 0, 0.2 and 0.5 wt% TiO2 nanoparticle concentrations. Earplug samples with PVC/TiO2 (0.2, 0.5 wt%) nanoparticles, when compared with raw earplugs, showed almost equal noise attenuation at low frequencies (125-500 Hz). However, at high frequencies (2-8 kHz), the noise reduction of earplugs containing TiO2 nanoparticles was significantly increased. The results of the present study showed that samples containing TiO2 nanoparticles had more noticeable noise reduction abilities at higher frequencies in comparison with samples without the nanoparticles.

  11. First passage properties of a generalized Pólya urn

    NASA Astrophysics Data System (ADS)

    Kearney, Michael J.; Martin, Richard J.

    2016-12-01

    A generalized two-component Pólya urn process, parameterized by a variable α, is studied in terms of the likelihood that due to fluctuations the initially smaller population in a scenario of competing population growth eventually becomes the larger, or is the larger after a certain passage of time. By casting the problem as an inhomogeneous directed random walk we quantify this role-reversal phenomenon through the first passage probability that equality in size is first reached at a given time, and the related exit probability that equality in size is reached no later than a given time. Using an embedding technique, exact results are obtained which complement existing results and provide new insights into behavioural changes (akin to phase transitions) which occur at defined values of α.
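
    The role-reversal probability described above can also be estimated by direct simulation. The reinforcement rule below (selecting urn i with probability proportional to its size raised to α) is one standard α-parameterized urn and is an assumption for illustration; the paper's exact parameterization may differ.

```python
# Monte-Carlo estimate of the chance that an initially smaller population
# is ahead after T draws of an alpha-parameterized Polya urn (assumed rule).
import random

def reversal_prob(x1=1, x2=2, alpha=1.0, T=200, trials=5000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = x1, x2                 # population a starts strictly smaller
        for _ in range(T):
            pa = a**alpha / (a**alpha + b**alpha)
            if rng.random() < pa:
                a += 1
            else:
                b += 1
        hits += a > b                 # role reversal at time T
    return hits / trials

p = reversal_prob()
print(p)
```

    For the linear case α = 1 starting from (1, 2), the limiting fraction of population a is Beta(1, 2)-distributed, so the reversal probability is near 0.25; varying α probes the regime changes the paper analyzes exactly.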

  12. A tour of inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2018-02-01

    This paper presents a concise and up-to-date tour to the realm of inequality indices. Originally devised for socioeconomic applications, inequality indices gauge the divergence of wealth distributions in human societies from the socioeconomic 'ground state' of perfect equality, i.e. pure communism. Inequality indices are quantitative scores that take values in the unit interval, with the zero score characterizing perfect equality. In effect, inequality indices are applicable in the context of general distributions of sizes - non-negative quantities such as count, length, area, volume, mass, energy, and duration. For general size distributions, which are omnipresent in science and engineering, inequality indices provide multi-dimensional and infinite-dimensional quantifications of the inherent inequality - i.e., the statistical heterogeneity, the non-determinism, the randomness. This paper compactly describes the insights and the practical implementation of inequality indices.
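
    As a concrete instance of the indices surveyed above, a minimal sketch of the most common one, the Gini coefficient, which scores a distribution of non-negative sizes with 0 for perfect equality:

```python
# Gini coefficient from the ordered-sample formula:
# G = sum_i (2i - n - 1) * x_(i) / (n * sum_i x_i), with x sorted ascending.
import numpy as np

def gini(sizes):
    x = np.sort(np.asarray(sizes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * x).sum() / (n * x.sum())

print(gini([1, 1, 1, 1]))   # perfect equality -> 0.0
print(gini([0, 0, 0, 4]))   # all mass in one unit -> 0.75, i.e. (n - 1) / n
```

    The maximum for a finite sample is (n - 1)/n rather than 1, one of the finite-size subtleties such indices carry over from the socioeconomic setting to general size distributions.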

  13. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage, except the nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies when the proportional hazards assumption appears appropriate.
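
    The "simpler method" mentioned above, combining baseline exposure prevalence with the Cox hazard ratio, has the familiar Levin form. A sketch with illustrative numbers (assumptions, not the E3N estimates):

```python
# Levin-type attributable risk from exposure prevalence p and hazard ratio HR:
# AR = p * (HR - 1) / (1 + p * (HR - 1)).
def attributable_risk(prevalence, hazard_ratio):
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

print(attributable_risk(0.3, 2.0))  # 30% exposed, HR = 2 -> AR ~ 0.231
```

    Unlike the survival-function-based estimators compared in the study, this formula yields a single number rather than AR as a function of time.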

  14. Use of Lead Isotopes to Identify Sources of Metal and Metalloid Contaminants in Atmospheric Aerosol from Mining Operations

    PubMed Central

    Félix, Omar I.; Csavina, Janae; Field, Jason; Rine, Kyle P.; Sáez, A. Eduardo; Betterton, Eric A.

    2014-01-01

    Mining operations are a potential source of metal and metalloid contamination by atmospheric particulate generated from smelting activities, as well as from erosion of mine tailings. In this work, we show how lead isotopes can be used for source apportionment of metal and metalloid contaminants from the site of an active copper mine. Analysis of atmospheric aerosol shows two distinct isotopic signatures: one prevalent in fine particles (< 1 μm aerodynamic diameter) while the other corresponds to coarse particles as well as particles in all size ranges from a nearby urban environment. The lead isotopic ratios found in the fine particles are equal to those of the mine that provides the ore to the smelter. Topsoil samples at the mining site show concentrations of Pb and As decreasing with distance from the smelter. Isotopic ratios for the sample closest to the smelter (650 m) and from topsoil at all sample locations, extending to more than 1 km from the smelter, were similar to those found in fine particles in atmospheric dust. The results validate the use of lead isotope signatures for source apportionment of metal and metalloid contaminants transported by atmospheric particulate. PMID:25496740

  15. Optimal background matching camouflage.

    PubMed

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.

  16. Effect of equal-channel angular pressing and annealing conditions on the texture, microstructure, and deformability of an MA2-1 alloy

    NASA Astrophysics Data System (ADS)

    Serebryany, V. N.; Ivanova, T. M.; Kopylov, V. I.; Dobatkin, S. V.; Pozdnyakova, N. N.; Pimenov, V. A.; Savelova, T. I.

    2010-07-01

    Equal-channel angular pressing (ECAP) of an MA2-1 alloy according to routes A and Bc is used to study the possibility of increasing the low-temperature deformability of the alloy due to grain refinement and a change in its texture. To separate the grain refinement effect from the effect of texture on the deformability of the alloy, samples after ECAP are subjected to recrystallization annealing that provides grain growth to the grain size characteristic of the initial state (IS) of the alloy. Upon ECAP, the average grain size is found to decrease to 2-2.4 μm and the initial sharp axial texture changes substantially (it decomposes into several scattered orientations). The type of orientations and the degree of their scattering depend on the type of ECAP routes. The detected change in the texture is accompanied by an increase in the deformability parameters (normal plastic anisotropy coefficient R, strain-hardening exponent n, relative uniform elongation δu) determined upon tensile tests at 20°C for the states of the alloy formed in the IS-4A-4Bc and IS-4Ao-4BcO sequences. The experimental values of R agree with the values calculated in terms of the Taylor model of plastic deformation in the Bishop-Hill approximation using quantitative texture data in the form of orientation distribution function coefficients with allowance for the activation of prismatic slip, especially for ECAP routes 4Bc and 4BcO. When the simulation results, the Hall-Petch relation, and the generalized Schmid factors are taken into account, a correlation is detected between the deformability parameter, the Hall-Petch coefficient, and the ratio of the critical shear stresses on prismatic and basal planes.

  17. New coding technique for computer generated holograms.

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  18. The Tully-Fisher relation for 25 000 Sloan Digital Sky Survey galaxies as a function of environment

    NASA Astrophysics Data System (ADS)

    Mocz, P.; Green, A.; Malacari, M.; Glazebrook, K.

    2012-09-01

    We construct Tully-Fisher relationships (TFRs) in the u, g, r, i and z bands and stellar mass TFRs for a sample of 25 698 late spiral-type galaxies (with 0.045 < z < 0.085) from the Sloan Digital Sky Survey (SDSS) and study the effects of environment on the relation. We use SDSS-measured Balmer emission line widths, vFWHM, as a proxy for disc circular velocity, vcirc. A priori, it is not clear whether we can construct accurate TFRs given the small 3 arcsec diameter of the fibres used for SDSS spectroscopic measurements. However, we show by modelling the Hα emission profile as observed through a 3 arcsec aperture that for galaxies at appropriate redshifts (z > 0.045) the fibres sample enough of the disc to obtain a linear relationship between vFWHM and vcirc, allowing us to obtain a TFR and to investigate dependence on other variables. We also develop a methodology for distinguishing between astrophysical and sample bias in the fibre TFR trends. We observe the well-known steepening of the TFR in redder bands in our sample. We divide the sample of galaxies into four equal groups using projected neighbour density (Σ) quartiles and find no significant dependence on environment, extending previous work to a wider range of environments and a much larger sample. The demonstration that accurate TFRs can be constructed from SDSS data is valuable for future TFR studies because of the large sample sizes the SDSS makes available.
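
    A TFR of the kind constructed above is, at its core, a straight-line fit of magnitude against the logarithm of a velocity proxy. The sketch below uses synthetic stand-in data with an assumed slope and zero-point, not SDSS measurements.

```python
# Least-squares Tully-Fisher fit: absolute magnitude vs log10(velocity).
import numpy as np

rng = np.random.default_rng(42)
logv = rng.uniform(1.9, 2.5, size=500)            # log10(velocity / km s^-1)
M = -4.0 - 7.5 * logv + rng.normal(0, 0.3, 500)   # assumed slope and zero-point

slope, intercept = np.polyfit(logv, M, 1)
print(slope, intercept)
```

    Fitting separate subsamples split by an environment measure (such as the Σ quartiles above) and comparing the recovered slopes and zero-points is how an environmental dependence would be tested.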

  19. Thalassornectes (Alcidectes) aukletae, new subgenus and species (Acari: Hypoderatidae) from the crested and parakeet auklets (Aves: Charadriiformes; Alcidae).

    PubMed

    Pence, D B; Hoberg, E P

    1991-03-01

    In the genus Thalassornectes, a new subgenus, Alcidectes, and a new species, T. (Alcidectes) aukletae, are described from deutonymphs in the subcutaneous adipose tissue of the crested auklet, Aethia cristatella (Pallas), and the parakeet auklet, Cyclorrhynchus psittacula (Pallas), from the eastern Pacific USSR. The new subgenus and species differ from one or both of the single species in each of the other two subgenera, Thalassornectes and Rallidectes, by (1) the normal size, position, and parallel arrangement of the genital papillae; (2) the larger size of seta sce; (3) the greater length and stronger development of setae sci, d1, l1, h, and sh; (4) the equal size of tarsi III and IV or their size subequal, with tarsus IV slightly longer than tarsus III; (5) both epimera I and sternum well developed and nearly equal in length; and (6) the free sclerotized posteriad extension from epimerite II on the ventral cuticular surface. This is the first hypoderatid reported from the host order Charadriiformes. The distribution of T. (Alcidectes) aukletae among auklets may be attributed to either cospeciation or may have an ecological basis; data are insufficient at present to sustain either hypothesis.

  20. Antibodies against canine parvovirus of wolves of Minnesota: A serologic study from 1975 through 1985

    USGS Publications Warehouse

    Goyal, S.M.; Mech, L.D.; Rademacher, R.A.; Khan, M.A.; Seal, U.S.

    1986-01-01

    Serum samples (n = 137) from 47 wild wolves (Canis lupus; 21 pups and 26 adults) were evaluated from 1975 to 1985 for antibodies against canine parvovirus, using the hemagglutination inhibition (HI) test. In addition, several blood samples (n = 35) from 14 of these wolves (6 pups and 8 adults) were evaluated simultaneously for erythrocyte and leukocyte counts, and for hemoglobin and blood urea nitrogen concentrations. Sixty-nine (50%) of the serum samples (35 wolves) had HI titers of greater than or equal to 256, whereas 68 (50%) of the samples (16 wolves) had HI titers of less than or equal to 128. Significant differences in the geometric mean titers were not found between pups and adults or between males and females. Of the 47 wolves evaluated, 12 (25%) developed a greater than or equal to fourfold increase in antibody titers during the 11-year period, with 2 wolves developing serologic conversions in 1976. The data indicate that canine parvovirus may have begun infecting wolves before or at the same time that it began infecting the dog population in the United States.

  1. Americans misperceive racial economic equality.

    PubMed

    Kraus, Michael W; Rucker, Julian M; Richeson, Jennifer A

    2017-09-26

    The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black-White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites' estimates of Black-White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality, a misperception that is likely to have any number of important policy implications.

  2. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
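
    The two strategies compared above can be sketched for a simple one-way chi-square test of fit: recompute the statistic on an actual random subsample, versus rescale the full-sample statistic by the sample-size ratio. The plain ratio rescaling used here is an assumed, simple form of adjustment, not necessarily the paper's exact adjustment function.

```python
# One-way test of fit against equal category probabilities, with the
# full-sample statistic either rescaled (adjustment strategy) or
# recomputed on a random subsample (random sample strategy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
full = rng.choice(4, size=21000, p=[0.26, 0.25, 0.25, 0.24])  # slight misfit
expected_p = np.array([0.25, 0.25, 0.25, 0.25])

def chi2_stat(sample):
    observed = np.bincount(sample, minlength=4)
    return stats.chisquare(observed, expected_p * sample.size).statistic

n_target = 5000
chi2_full = chi2_stat(full)
chi2_adjusted = chi2_full * n_target / full.size                       # rescale
chi2_subsample = chi2_stat(rng.choice(full, n_target, replace=False))  # resample
print(chi2_full, chi2_adjusted, chi2_subsample)
```

    The rescaled value is deterministic given the full sample, whereas the subsample statistic varies from draw to draw, which is part of what the simulation comparison above has to average over.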

  3. Effect of different pH solvents on micro-hardness and surface topography of dental nano-composite: An in vitro analysis

    PubMed Central

    Khan, Aftab Ahmed; Siddiqui, Adel Zia; Al-Kheraif, Abdulaziz A; Zahid, Ambreen; Divakar, Darshan Devang

    2015-01-01

    Objective: Erosion of tooth surface is attributed to recent shift in diet pattern and frequent use of beverages. The aim of this research was to evaluate the effects of different beverages on surface topography and hardness of nano-filled composite material. Methods: Sixty flat disc shaped resin composite samples were fabricated and placed in distilled water for 24 hours. After 24 hours test samples were dried and divided into 4 groups. Group A (n=15) specimens were placed in a tight amber bottle comprising 25 ml of artificial saliva. Similarly, Groups B, C and D were stored in equal amounts of orange juice, milk and coca cola drink respectively. Samples were checked for hardness and surface changes were evaluated with scanning electron microscopy. Results: A strongly significant difference was observed between samples immersed in orange juice and those in artificial saliva. A strong significant difference was seen between Group D and Group A. Group A and Group C showed no significant difference. The micro-hardness test showed reduced values among all samples. Conclusion: Beverages consumed daily have a negative influence on hardness and surface degradation of nano-filled dental composite. Comparatively, the higher surface-area-to-volume ratio of the filler particles in nano-filled composites may lead to higher surface roughness than in other resin-based dental biomaterials. PMID:26430417

  4. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

    We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 X 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m2 gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m2 sampling units. Systematic sampling with two or more random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
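
    Morisita's Index, which the study above found most reliable, is computed from counts of individuals in equal-size sampling units. A minimal sketch (the quadrat counts here are illustrative, not the Cacapon River data):

```python
# Morisita's Index of dispersion: I = n * sum(x_i * (x_i - 1)) / (N * (N - 1)),
# where n is the number of quadrats and N the total count. Values near 1
# indicate spatial randomness; values above 1 indicate clumping.
def morisita(counts):
    n = len(counts)      # number of quadrats
    N = sum(counts)      # total individuals
    return n * sum(x * (x - 1) for x in counts) / (N * (N - 1))

print(morisita([5, 5, 5, 5]))    # perfectly even counts -> below 1
print(morisita([20, 0, 0, 0]))   # maximal clumping -> 4.0 (equals n here)
```

    Applied to counts from an SRS or SYS design, the index quantifies the spatial clustering that the abundance estimators above must cope with.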

  5. Re-use of pilot data and interim analysis of pivotal data in MRMC studies: a simulation study

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Samuelson, Frank; Sahiner, Berkman; Petrick, Nicholas

    2017-03-01

    Novel medical imaging devices are often evaluated with multi-reader multi-case (MRMC) studies in which radiologists read images of patient cases for a specified clinical task (e.g., cancer detection). A pilot study is often used to measure the effect size and variance parameters that are necessary for sizing a pivotal study (including the numbers of readers, non-diseased cases, and diseased cases). Due to the practical difficulty of collecting patient cases or recruiting clinical readers, some investigators attempt to include the pilot data as part of their pivotal study. In other situations, some investigators attempt to perform an interim analysis of their pivotal study data, based upon which the sample sizes may be re-estimated. Re-use of the pilot data or interim analyses of the pivotal data may inflate the type I error of the pivotal study. In this work, we use the Roe and Metz model to simulate MRMC data under the null hypothesis (i.e., two devices have equal diagnostic performance) and investigate the type I error rate for several practical designs involving re-use of pilot data or interim analysis of pivotal data. Our preliminary simulation results indicate that, under the simulation conditions we investigated, there is no or only marginal inflation of the type I error for some design strategies (e.g., re-use of patient data without re-using readers, and size re-estimation without using the effect size estimated in the interim analysis). Upon further verification, these are potentially useful design methods in that they may help make a study less burdensome and have a better chance to succeed without substantial loss of statistical rigor.

  6. Size effects and electron microscopy of thin metal films. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hernandez, J. D.

    1978-01-01

    All films were deposited by resistively heated evaporation in an oil-diffusion-pumped vacuum system (ultimate pressure approximately 10^-7 torr). The growth from nuclei to a continuous film is highly dependent on the deposition parameters: evaporation rate, substrate material, and substrate temperature. The growth stages of a film and the dependence of grain size on various deposition and annealing parameters are shown. Resistivity measurements were taken on thin films to observe size effects.

  7. Updated Neuronal Scaling Rules for the Brains of Glires (Rodents/Lagomorphs)

    PubMed Central

    Herculano-Houzel, Suzana; Ribeiro, Pedro; Campos, Leandro; Valotta da Silva, Alexandre; Torres, Laila B.; Catania, Kenneth C.; Kaas, Jon H.

    2011-01-01

    Brain size scales as different functions of its number of neurons across mammalian orders such as rodents, primates, and insectivores. In rodents, we have previously shown that, across a sample of 6 species, from mouse to capybara, the cerebral cortex, cerebellum and the remaining brain structures increase in size faster than they gain neurons, with an accompanying decrease in neuronal density in these structures [Herculano-Houzel et al.: Proc Natl Acad Sci USA 2006;103:12138–12143]. Important remaining questions are whether such neuronal scaling rules within an order apply equally to all pertaining species, and whether they extend to closely related taxa. Here, we examine whether 4 other species of Rodentia, as well as the closely related rabbit (Lagomorpha), conform to the scaling rules identified previously for rodents. We report the updated neuronal scaling rules obtained for the average values of each species in a way that is directly comparable to the scaling rules that apply to primates [Gabi et al.: Brain Behav Evol 2010;76:32–44], and examine whether the scaling relationships are affected when phylogenetic relatedness in the dataset is accounted for. We have found that the brains of the spiny rat, squirrel, prairie dog and rabbit conform to the neuronal scaling rules that apply to the previous sample of rodents. The conformity to the previous rules of the new set of species, which includes the rabbit, suggests that the cellular scaling rules we have identified apply to rodents in general, and probably to Glires as a whole (rodents/lagomorphs), with one notable exception: the naked mole-rat brain is apparently an outlier, with only about half of the neurons expected from its brain size in its cerebral cortex and cerebellum. PMID:21985803
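    The scaling rules referred to above are power laws of the form (structure mass) = a * N^b, with b > 1 when a structure grows faster than it gains neurons; they are fit by ordinary least squares on log-transformed data. A minimal sketch with hypothetical (mass, neuron-count) pairs, not the published Glires data:

```python
import math

# Hypothetical (brain structure mass in g, neurons in millions) pairs --
# illustrative values only, chosen to show a superlinear trend.
species = [(0.4, 71), (1.8, 200), (7.8, 512), (18.3, 857), (76.0, 1600)]

# Ordinary least-squares fit of log10(mass) = log10(a) + b * log10(N):
xs = [math.log10(n) for _, n in species]
ys = [math.log10(m) for m, _ in species]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = 10 ** (my - b * mx)
print(f"mass ~ {a:.2e} * N^{b:.2f}")  # b > 1: mass grows faster than neuron count
```

    An outlier such as the naked mole-rat would show up as a point lying far below the fitted line, with roughly half the neurons predicted for its mass.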

  8. Expediting Tax Deposits Can Increase the Government’s Interest Earnings.

    DTIC Science & Technology

    1983-11-21

    the FTD system. These deposits included such tax receipts as withheld personal income tax, corporate income tax, and social security, excise...Greater than or 159,500 2,733 4.2 571,000 222,000 equal to $25,000 Total $1,279,000 Corporate Income Tax Payments Projected Sampling Average Number...than or 130,600 2,215 2.7 292,000 80,000 equal to $25,000 Total $779,000 Corporate Income Tax Payments Projected Sampling Average Number Average

  9. Perovskite-Type Oxides. I. Structural, Magnetic, and Morphological Properties of LaMn 1- xCu xO 3 and LaCo 1- xCu xO 3 Solid Solutions with Large Surface Area

    NASA Astrophysics Data System (ADS)

    Porta, Piero; De Rossi, Sergio; Faticanti, Marco; Minelli, Giuliano; Pettiti, Ida; Lisi, Luciana; Turco, Maria

    1999-09-01

    Perovskite-type compounds of general formula LaMn1-xCuxO3 and LaCo1-xCuxO3 (x=0.0, 0.2, 0.4, 0.6, 0.8, 1.0) were prepared by calcining the citrate gel precursors at 823, 923, and 1073 K. The decomposition of the precursors was followed by thermal analysis and the oxides were investigated by means of elemental analysis (atomic absorption and redox titration), X-ray powder diffraction, BET surface area, X-ray absorption (EXAFS and XANES), electron microscopy (SEM and TEM), and magnetic susceptibility. LaMn1-xCuxO3 samples are perovskite-like single phases up to x=0.6. At x=0.8 CuO and La2CuO4 phases are present in addition to perovskite. For x=1.0 the material is formed by CuO and La2CuO4. Mn(IV) was found by redox titration in all Mn-based perovskite samples, its fraction increasing with the increase in copper content. EXAFS and XANES analyses confirmed the presence of Mn(IV). Cation vacancies in equal amounts in the 12-coordinated A and octahedral B sites are suggested in the samples with x=0.0 and x=0.2, while for x=0.6 anionic vacancies are present. Materials with sufficiently high surface area (22-36 m2 g-1 for samples fired at 923 K and 14-22 m2 g-1 for those fired at 1073 K) were obtained. Crystallite sizes in the ranges 390-500 and 590-940 Å for samples calcined at 923 and 1073 K, respectively, were determined from the FWHM of the (102) X-ray diffraction peak. TEM patterns of LaMnO3 showed almost regular hexagonal prismatic crystals with sizes of the same order of magnitude (800 Å) of those drawn from X-ray diffraction, while no evidence of defect clustering was drawn out from TEM and electron diffraction images. For the sample with x=0.6, TEM and electron diffraction patterns revealed perturbation of the structure. Magnetic susceptibility studies show a ferromagnetic behavior that decreases with increase in x. LaCo1-xCuxO3 samples are perovskite-like single phases up to x=0.2. For x=0.4 a small amount of La2CuO4, in addition to perovskite, is detected. 
For x≥0.6 massive formation of La2CuO4 and CuO is observed. Only trivalent cobalt is found by redox titration. Magnetic susceptibility studies have shown that trivalent cobalt is present in all samples as a mixture of paramagnetic Co3+ and diamagnetic CoIII ions, the Co3+ fraction being, at least up to x=0.4, equal to ≈0.34. Antiferromagnetic behavior, which increases with increase in x, is observed in all LaCo1-xCuxO3 samples. LaCoO3 is a stoichiometric perovskite. The substitution of cobalt by Cu2+ leads to a positive charge defectivity which is compensated by oxygen vacancies. EXAFS and XANES analyses confirmed the presence of trivalent cobalt. Materials with fairly high surface area (in the ranges 19-27 and 13-21 m2 g-1 for samples calcined at 923 and 1073 K, respectively) were obtained. Crystallite sizes of ≈400 and ≈1000 Å for samples calcined at 923 and 1073 K, respectively, were determined from the FWHM of the (102) X-ray diffraction peak. Materials with not very clear morphology and crystals with definite structure are distinguishable by SEM for samples calcined at 1073 and at 1273 K, respectively. TEM patterns, for samples calcined at 1073 K, evidence almost regular hexagonal prismatic crystals connected to form "linked structures" and some "spotty crystals," suggesting short-range ordered local defects. For copper-containing samples, calcined at 1273 K, a higher degree of defectivity (probably associated with the interaction of anion vacancies) and the occurrence of "planar faults" are shown by TEM.

  10. A meta-analytic review of overgeneral memory: The role of trauma history, mood, and the presence of posttraumatic stress disorder.

    PubMed

    Ono, Miyuki; Devilly, Grant J; Shum, David H K

    2016-03-01

    A number of studies suggest that a history of trauma, depression, and posttraumatic stress disorder (PTSD) are associated with autobiographical memory deficits, notably overgeneral memory (OGM). However, whether there are any group differences in the nature and magnitude of OGM has not been evaluated. Thus, a meta-analysis was conducted to quantify group differences in OGM. Effect sizes were pooled from studies examining the effects on OGM of a history of trauma (e.g., childhood sexual abuse), the presence of PTSD, and current depression (e.g., major depressive disorder). Using multiple search engines, 13 trauma studies and 12 depression studies were included in this review. A depression effect was observed on OGM with a large effect size, and was most evident in the lack of specific memories, especially to positive cues. An effect of trauma history on OGM was observed with a medium effect size, and this was most evident in the presence of overgeneral responses to negative cues. The results also suggested an amplified memory deficit in the presence of PTSD. That is, the effect sizes of OGM among individuals with PTSD were very large and relatively equal across different types of OGM. Future studies that directly compare the differences in OGM among 4 samples (i.e., controls, current depression without trauma history, trauma history without depression, and trauma history and depression) would be warranted to verify the current findings. (c) 2016 APA, all rights reserved.
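    Pooling effect sizes across studies, as described above, is conventionally done with inverse-variance weighting. A minimal fixed-effect sketch with hypothetical (effect size, standard error) pairs, not the paper's data:

```python
import math

# Hypothetical (standardized mean difference d, standard error) pairs --
# illustrative values only.
studies = [(0.45, 0.15), (0.62, 0.20), (0.38, 0.12), (0.71, 0.25)]

weights = [1 / se ** 2 for _, se in studies]   # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))        # SE of the pooled estimate
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    Precise studies (small SE) dominate the weighted average; a random-effects model would additionally add a between-study variance term to each weight.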

  11. TWO-PHASE FORMATION IN SOLUTIONS OF TOBACCO MOSAIC VIRUS AND THE PROBLEM OF LONG-RANGE FORCES

    PubMed Central

    Oster, Gerald

    1950-01-01

    In a nearly salt-free medium, a dilute tobacco mosaic virus solution of rod-shaped virus particles of uniform length forms two phases; the bottom optically anisotropic phase has a greater virus concentration than has the top optically isotropic phase. For a sample containing particles of various lengths, the bottom phase contains longer particles than does the top and the concentrations top and bottom are nearly equal. The longer the particles the less the minimum concentration necessary for two-phase formation. Increasing the salt concentration increases the minimum concentration. The formation of two phases is explained in terms of geometrical considerations without recourse to the concept of long-range attractive forces. The minimum concentration for two-phase formation is that concentration at which correlation in orientation between the rod-shaped particles begins to take place. This concentration is determined by the thermodynamically effective size and shape of the particles as obtained from the concentration dependence of the osmotic pressure of the solutions measured by light scattering. The effective volume of the particles is introduced into the theory of Onsager for correlation of orientation of uniform size rods and good agreement with experiment is obtained. The theory is extended to a mixture of non-uniform size rods and to the case in which the salt concentration is varied, and agreement with experiment is obtained. The thermodynamically effective volume of the particles and its dependence on salt concentration are explained in terms of the shape of the particles and the electrostatic repulsion between them. Current theories of the hydration of proteins and of long-range forces are critically discussed. The bottom layer of freshly purified tobacco mosaic virus samples shows Bragg diffraction of visible light. 
The diffraction data indicate that the virus particles in solution form three-dimensional crystals approximately the size of crystalline inclusion bodies found in the cells of plants suffering from the disease. PMID:15422102

  12. Mapping Students' Spoken Conceptions of Equality

    ERIC Educational Resources Information Center

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing number problem. Students' responses were conceptualised as acts of communication and…

  13. Design and maintenance of a network for collecting high-resolution suspended-sediment data at remote locations on rivers, with examples from the Colorado River

    USGS Publications Warehouse

    Griffiths, Ronald E.; Topping, David J.; Andrews, Timothy; Bennett, Glenn E.; Sabol, Thomas A.; Melis, Theodore S.

    2012-01-01

    Management of sand and finer sediment in fluvial settings has become increasingly important for reasons ranging from endangered-species habitat to transport of sediment-associated contaminants. In all rivers, some fraction of the suspended load is transported as washload, and some as suspended bed material. Typically, the washload is composed of silt-and-clay-size sediment, and the suspended bed material is composed of sand-size sediment. In most rivers, as a result of changes in the upstream supply of silt and clay, large, systematic changes in the concentration of the washload occur over time, independent of changes in water discharge. Recent work has shown that large, systematic, discharge-independent changes in the concentration of the suspended bed material are also present in many rivers. In bedrock canyon rivers, such as the Colorado River in Grand Canyon National Park, changes in the upstream tributary supply of sand may cause large changes in the grain-size distribution of the bed sand, resulting in changes in both the concentration and grain-size distribution of the sand in suspension. Large discharge-independent changes in suspended-sand concentration coupled to discharge-independent changes in the grain-size distribution of the suspended sand are not unique to bedrock canyon rivers, but also occur in large alluvial rivers, such as the Mississippi River. These systematic changes in either suspended-silt-and-clay concentration or suspended-sand concentration may not be detectable by using conventional equal-discharge- or equal-width-increment measurements, which may be too infrequently collected relative to the time scale over which these changes in the sediment load are occurring. 
Furthermore, because large discharge-independent changes in both suspended-silt-and-clay and suspended-sand concentration are possible in many rivers, methods using water discharge as a proxy for suspended-sediment concentration (such as sediment rating curves) may not produce sufficiently accurate estimates of sediment loads. Finally, conventional suspended-sediment measurements are both labor and cost intensive and may not be possible at the resolution required to resolve discharge-independent changes in suspended-sediment concentration, especially in more remote locations. For these reasons, the U.S. Geological Survey has pursued the use of surrogate technologies (such as acoustic and laser diffraction) for providing higher-resolution measurements of suspended-sediment concentration and grain size than are possible by using conventional suspended-sediment measurements alone. These factors prompted the U.S. Geological Survey's Grand Canyon Monitoring and Research Center to design and construct a network to automatically measure suspended-sediment transport at 15-minute intervals by using acoustic and laser-diffraction surrogate technologies at remote locations along the Colorado River within Marble and Grand Canyons in Grand Canyon National Park. Because of the remoteness of the Colorado River in this reach, this network also included the design of a broadband satellite-telemetry system to communicate with the instruments deployed at each station in this network. Although the sediment-transport monitoring network described in this report was developed for the Colorado River in Grand Canyon National Park, the design of this network can easily be adapted for use on other rivers, no matter how remote. In the Colorado River case-study example described in this report, suspended-sediment concentration and grain size are measured at five remote stations. 
At each of these stations, surrogate measurements of suspended-sediment concentration and grain size are made at 15-minute intervals using an array of different single-frequency acoustic-Doppler side-looking profilers. Laser-diffraction instruments are also used at two of these stations to measure both suspended-sediment concentrations and grain-size distributions. Cross-section calibrations of these instruments have been constructed and verified by using either equal-discharge-increment (EDI) or equal-width-increment (EWI) measurements of the velocity-weighted suspended-sediment concentration and grain-size distribution. The suspended-silt-and-clay concentration parts of these calibration relations have also included information from EDI- or EWI-calibrated samples collected by automatic pump samplers. Three of the monitoring stations are equipped with two-way satellite broadband telemetry systems that operate once a day to remotely monitor and program the instruments and download data. Data from these stations are typically downloaded twice per month; data from stations without satellite-telemetry systems are downloaded during site visits, which occur every 2 months or semiannually, depending on the remoteness of the site. Upon downloading and processing, suspended-silt-and-clay concentration, suspended-sand concentration, and suspended-sand median grain size are posted on the World Wide Web. Satellite telemetry in combination with the high-resolution sediment surrogate measurements can generate near-real-time suspended-sediment-concentration and grain-size data (limited only by the time required to download the instruments and process the data). The approach for measuring suspended-sediment concentration and grain size using this monitoring network is more practical, and can be done at a much lower cost and with higher temporal resolution, than any other method.
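    The EDI and EWI measurements mentioned above both reduce to a discharge-weighted (velocity-weighted) mean concentration for the cross section. A minimal sketch with hypothetical per-increment values, not data from the Colorado River network:

```python
# Hypothetical values for one cross section divided into five increments --
# illustrative numbers only.
q = [12.0, 30.5, 41.0, 28.0, 9.5]        # water discharge per increment (m3/s)
c = [150.0, 220.0, 310.0, 240.0, 130.0]  # suspended-sediment concentration (mg/L)

# A composite EDI/EWI sample yields the velocity-weighted mean concentration:
c_bar = sum(qi * ci for qi, ci in zip(q, c)) / sum(q)
print(f"velocity-weighted mean concentration: {c_bar:.0f} mg/L")
```

    The surrogate (acoustic or laser-diffraction) calibrations described in the report are built against exactly this kind of cross-section mean.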

  14. Digital second-order phase-locked loop

    NASA Technical Reports Server (NTRS)

    Holes, J. K.; Carl, C.; Tegnelia, C. R. (Inventor)

    1973-01-01

    A digital second-order phase-locked loop is disclosed in which a counter driven by a stable clock pulse source is used to generate a reference waveform of the same frequency as an incoming waveform, and to sample the incoming waveform at zero-crossover points. The samples are converted to digital form and accumulated over M cycles, reversing the sign of every second sample. After every M cycles, the accumulated value of samples is hard limited to a value SGN = ±1 and multiplied by a value Δ1 equal to a number n1 of fractions of a cycle. An error signal is used to advance or retard the counter, according to the sign of the sum, by an amount equal to the sum.
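    The loop described above can be sketched behaviourally. The version below is a simplified first-order, bang-bang variant of the idea (the patented loop is second order and operates on digitized hardware samples); M, the step size, and the waveforms are illustrative assumptions:

```python
import math

def sgn(x):
    return 1 if x >= 0 else -1

M = 8                 # cycles accumulated between corrections
DELTA = 0.02          # correction step in fractions of a cycle (the patent's delta-1)
true_phase = 0.20     # unknown phase of the incoming waveform (cycles)
ref_phase = 0.0       # phase of the counter-generated reference (cycles)

for _ in range(40):
    acc = 0.0
    for k in range(2 * M):                 # two zero-crossover samples per cycle
        t = ref_phase + k / 2.0            # reference zero-crossing times (in cycles)
        sample = math.sin(2 * math.pi * (t - true_phase))
        acc += sample * (-1) ** k          # reverse the sign of every second sample
    ref_phase -= DELTA * sgn(acc)          # advance or retard the counter

print(f"residual phase error: {abs(ref_phase - true_phase):.3f} cycles")
```

    After lock, the loop dithers within one step of the incoming phase; sampling at the reference zero crossings makes the accumulated sum proportional to sin of the phase error, so its sign alone steers the counter.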

  15. Zonal wavefront estimation using an array of hexagonal grating patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathak, Biswajit, E-mail: b.pathak@iitg.ernet.in; Boruah, Bosanta R., E-mail: brboruah@iitg.ernet.in

    2014-10-15

    Accuracy of Shack-Hartmann type wavefront sensors depends on the shape and layout of the lenslet array that samples the incoming wavefront. It has been shown that an array of gratings followed by a focusing lens provides a substitute for the lenslet array. Taking advantage of the computer generated holography technique, any arbitrary diffraction grating aperture shape, size or pattern can be designed with little penalty for complexity. In the present work, such a holographic technique is implemented to design a regular hexagonal grating array with zero dead space between grating patterns, eliminating the possibility of leakage of the wavefront during its estimation. Tessellation of a regular hexagonal shape, unlike other commonly used shapes, also reduces the estimation error by incorporating a larger number of neighboring slope values at equal separation.

  16. Who Is Doing the Housework in Multicultural Britain?

    PubMed Central

    Kan, Man-Yee; Laurie, Heather

    2016-01-01

    There is an extensive literature on the domestic division of labour within married and cohabiting couples and its relationship to gender equality within the household and the labour market. Most UK research focuses on the white majority population or is ethnicity ‘blind’, effectively ignoring potentially significant intersections between gender, ethnicity, socio-economic position and domestic labour. Quantitative empirical research on the domestic division of labour across ethnic groups has not been possible due to a lack of data that enables disaggregation by ethnic group. We address this gap using data from a nationally representative panel survey, Understanding Society, the UK Household Longitudinal Study containing sufficient sample sizes of ethnic minority groups for meaningful comparisons. We find significant variations in patterns of domestic labour by ethnic group, gender, education and employment status after controlling for individual and household characteristics. PMID:29416186

  17. Development of kenaf mat for slope stabilization

    NASA Astrophysics Data System (ADS)

    Ahmad, M. M.; Manaf, M. B. H. Ab; Zainol, N. Z.

    2017-09-01

    This study focuses on the ability of kenaf mat, compared with conventional geosynthetic, to act as reinforcement for laterite in stabilizing a slope. The kenaf mat specimens studied in this paper are made from natural kenaf fiber, 3 mm thick, 150 mm long, and 20 mm wide. Geosynthetic specimens of the same size, obtained from industry, were subjected to the same direct shear and tensile tests. The plasticity index of the soil sample used is equal to 13, which indicates that the soil is slightly plastic. Results show that the friction angle of the kenaf mat is higher than the friction angle between the soil particles themselves. In terms of resistance to tensile load, the tensile strength of the kenaf mat is 0.033 N/mm2, which is lower than that of the geosynthetic.
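    The plasticity index quoted above is the difference between the Atterberg limits, PI = LL − PL. A trivial check with hypothetical limits chosen only to reproduce the reported PI of 13 (the actual LL and PL are not given in the abstract):

```python
# Hypothetical Atterberg limits -- illustrative values, not measured data.
liquid_limit = 38.0   # LL: water content (%) at which the soil starts to flow
plastic_limit = 25.0  # PL: water content (%) at which the soil stops being moldable

pi = liquid_limit - plastic_limit  # plasticity index, PI = LL - PL
print(f"plasticity index = {pi:.0f}")
```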

  18. Study of iron-borate materials systems processed in space

    NASA Technical Reports Server (NTRS)

    Neilson, G. F.

    1978-01-01

    It was calculated that an FeBO3-B2O3 glass-ceramic containing only 1 mole% FeBO3 would be equivalent for magnetooptic application to a YIG crystal of equal thickness. An Fe2O3-B2O3 composition containing 2 mole% FeBO3 equivalent (98B) could be converted largely to a dense green, though opaque, FeBO3 glass-ceramic through suitable heat treatments. However, phase separation (and segregation) and Fe3+ reduction could not be entirely avoided with the various procedures that were employed. From light scattering calculations, it was estimated that crystallite sizes of about 100 A would be required to allow 90% light transmission through a 1 cm thick sample. However, the actual FeBO3 crystallite sizes obtained in 98B were of the order of 1 micron or greater.

  19. Radiation damage in single-particle cryo-electron microscopy: effects of dose and dose rate.

    PubMed

    Karuppasamy, Manikandan; Karimi Nejadasl, Fatemeh; Vulovic, Milos; Koster, Abraham J; Ravelli, Raimond B G

    2011-05-01

    Radiation damage is an important resolution limiting factor both in macromolecular X-ray crystallography and cryo-electron microscopy. Systematic studies in macromolecular X-ray crystallography greatly benefited from the use of dose, expressed as energy deposited per mass unit, which is derived from parameters including incident flux, beam energy, beam size, sample composition and sample size. Here, the use of dose is reintroduced for electron microscopy, accounting for the electron energy, incident flux and measured sample thickness and composition. Knowledge of the amount of energy deposited allowed us to compare doses with experimental limits in macromolecular X-ray crystallography, to obtain an upper estimate of radical concentrations that build up in the vitreous sample, and to translate heat-transfer simulations carried out for macromolecular X-ray crystallography to cryo-electron microscopy. Stroboscopic exposure series of 50-250 images were collected for different incident flux densities and integration times from Lumbricus terrestris extracellular hemoglobin. The images within each series were computationally aligned and analyzed with similarity metrics such as Fourier ring correlation, Fourier ring phase residual and figure of merit. Prior to gas bubble formation, the images become linearly brighter with dose, at a rate of approximately 0.1% per 10 MGy. The gradual decomposition of a vitrified hemoglobin sample could be visualized at a series of doses up to 5500 MGy, by which dose the sample was sublimed. Comparison of equal-dose series collected with different incident flux densities showed a dose-rate effect favoring lower flux densities. Heat simulations predict that sample heating will only become an issue for very large dose rates (50 e−Å−2 s−1 or higher) combined with poor thermal contact between the grid and cryo-holder. Secondary radiolytic effects are likely to play a role in dose-rate effects. 
Stroboscopic data collection combined with an improved understanding of the effects of dose and dose rate will aid single-particle cryo-electron microscopists to have better control of the outcome of their experiments.

  20. Radiation damage in single-particle cryo-electron microscopy: effects of dose and dose rate

    PubMed Central

    Karuppasamy, Manikandan; Karimi Nejadasl, Fatemeh; Vulovic, Milos; Koster, Abraham J.; Ravelli, Raimond B. G.

    2011-01-01

    Radiation damage is an important resolution limiting factor both in macromolecular X-ray crystallography and cryo-electron microscopy. Systematic studies in macromolecular X-ray crystallography greatly benefited from the use of dose, expressed as energy deposited per mass unit, which is derived from parameters including incident flux, beam energy, beam size, sample composition and sample size. Here, the use of dose is reintroduced for electron microscopy, accounting for the electron energy, incident flux and measured sample thickness and composition. Knowledge of the amount of energy deposited allowed us to compare doses with experimental limits in macromolecular X-ray crystallography, to obtain an upper estimate of radical concentrations that build up in the vitreous sample, and to translate heat-transfer simulations carried out for macromolecular X-ray crystallography to cryo-electron microscopy. Stroboscopic exposure series of 50–250 images were collected for different incident flux densities and integration times from Lumbricus terrestris extracellular hemoglobin. The images within each series were computationally aligned and analyzed with similarity metrics such as Fourier ring correlation, Fourier ring phase residual and figure of merit. Prior to gas bubble formation, the images become linearly brighter with dose, at a rate of approximately 0.1% per 10 MGy. The gradual decomposition of a vitrified hemoglobin sample could be visualized at a series of doses up to 5500 MGy, by which dose the sample was sublimed. Comparison of equal-dose series collected with different incident flux densities showed a dose-rate effect favoring lower flux densities. Heat simulations predict that sample heating will only become an issue for very large dose rates (50 e−Å−2 s−1 or higher) combined with poor thermal contact between the grid and cryo-holder. Secondary radiolytic effects are likely to play a role in dose-rate effects. 
Stroboscopic data collection combined with an improved understanding of the effects of dose and dose rate will aid single-particle cryo-electron microscopists to have better control of the outcome of their experiments. PMID:21525648

  1. Feedback Augmented Sub-Ranging (FASR) Quantizer

    NASA Technical Reports Server (NTRS)

    Guilligan, Gerard

    2012-01-01

    This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. 
    The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage, giving a higher closed-loop bandwidth than the prior art, and (3) a reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.
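    The mismatch sensitivity described above can be made concrete: a conventional gain-of-two residue stage has closed-loop gain G = 1 + Cs/Cf, so with nominally equal capacitors a relative mismatch eps shows up directly as a gain error of eps. The sketch below quantifies this and the factor-of-16 improvement claimed for the innovation; the mismatch value is illustrative:

```python
# Conventional charge-redistribution residue stage: ideal closed-loop gain
# G = 1 + Cs/Cf with nominally equal capacitors Cs = Cf.
eps = 0.001                        # 0.1% relative capacitor mismatch (illustrative)
gain_ideal = 2.0
gain_actual = 1.0 + (1.0 + eps)    # Cs = Cf * (1 + eps)
gain_error = gain_actual - gain_ideal   # the gain error tracks the mismatch directly

# The text claims the FASR stage reduces this error by at least 16x (~4 bits):
gain_error_fasr = gain_error / 16.0
print(f"traditional gain error: {gain_error:.2e}")
print(f"claimed FASR bound:     {gain_error_fasr:.2e}")
```

    In an N-stage pipeline these per-stage gain errors accumulate into differential non-linearity, which is why reducing them by ~4 bits relaxes both capacitor sizing and calibration requirements.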

  2. Adequacy of different experimental designs for eucalyptus spacing trials in Portuguese environmental conditions

    Treesearch

    Paula Soares; Margarida Tome

    2000-01-01

    In Portugal, several eucalyptus spacing trials cover a relatively broad range of experimental designs: trials with a non-randomized block design with plots of different size and number of trees per plot; trials based on a non-systematic design in which spacings were randomized resulting in a factorial arrangement with plots of different size and shape and equal number...

  3. Size and weight graded multi-ply laminar electrodes

    DOEpatents

    Liu, Chia-Tsun; Demczyk, Brian G.; Rittko, Irvin R.

    1984-01-01

    An electrode is made comprising a porous backing sheet, and attached thereto a catalytically active layer having an electrolyte permeable side and a backing layer contacting side, where the active layer comprises a homogeneous mixture of active hydrophobic and hydrophilic agglomerates with catalyst disposed equally throughout the active layer, and where the agglomerate size increases from the electrolyte permeable side to the backing sheet contacting side.

  4. Americans misperceive racial economic equality

    PubMed Central

    Kraus, Michael W.; Rucker, Julian M.; Richeson, Jennifer A.

    2017-01-01

    The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black–White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites’ estimates of Black–White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality—a misperception that is likely to have any number of important policy implications. PMID:28923915

  5. Removal of haloacetic acids from swimming pool water by reverse osmosis and nanofiltration.

    PubMed

    Yang, Linyan; She, Qianhong; Wan, Man Pun; Wang, Rong; Chang, Victor W-C; Tang, Chuyang Y

    2017-06-01

    Recent studies report high concentrations of haloacetic acids (HAAs), a prevalent class of toxic disinfection by-products, in swimming pool water (SPW). We investigated the removal of 9 HAAs by four commercial reverse osmosis (RO) and nanofiltration (NF) membranes. Under typical SPW conditions (pH 7.5 and 50 mM ionic strength), HAA rejections were >60% for NF270 with molecular weight cut-off (MWCO) equal to 266 Da and equal or higher than 90% for XLE, NF90 and SB50 with MWCOs of 96, 118 and 152 Da, respectively, as a result of the combined effects of size exclusion and charge repulsion. We further included 7 neutral hydrophilic surrogates as molecular probes to resolve the rejection mechanisms. In the absence of strong electrostatic interaction (e.g., pH 3.5), the rejection data of HAAs and surrogates by various membranes fall onto an identical size-exclusion (SE) curve when plotted against the relative-size parameter, i.e., the ratio of molecular radius over membrane pore radius. The independence of this SE curve on molecular structures and membrane properties reveals that the relative-size parameter is a more fundamental SE descriptor compared to molecular weight. An effective molecular size with the Stokes radius accounting for size exclusion and the Debye length accounting for electrostatic interaction was further used to evaluate the rejection. The current study provides valuable insights on the rejection of trace contaminants by RO/NF membranes. Copyright © 2017. Published by Elsevier Ltd.
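
The two descriptors discussed above can be sketched in code. This uses textbook approximations rather than the paper's exact formulas: the relative-size parameter is the ratio of Stokes radius to pore radius, and the Debye length in water at 25 °C is commonly approximated as 0.304/sqrt(I) nm; the example radii are hypothetical.

```python
import math

# Sketch of the two rejection descriptors (textbook approximations, not the
# paper's calibration): relative-size parameter and Debye screening length.

def relative_size(stokes_radius_nm, pore_radius_nm):
    """Ratio of molecular (Stokes) radius to membrane pore radius."""
    return stokes_radius_nm / pore_radius_nm

def debye_length_nm(ionic_strength_molar):
    """Debye length in water at 25 C (common 0.304/sqrt(I) approximation)."""
    return 0.304 / math.sqrt(ionic_strength_molar)

# Typical swimming-pool ionic strength from the abstract: 50 mM.
ld = debye_length_nm(0.050)       # roughly 1.36 nm
lam = relative_size(0.30, 0.34)   # hypothetical radii for illustration
print(round(ld, 2), round(lam, 2))
```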

  6. Insights into the role of protein molecule size and structure on interfacial properties using designed sequences

    PubMed Central

    Dwyer, Mirjana Dimitrijev; He, Lizhong; James, Michael; Nelson, Andrew; Middelberg, Anton P. J.

    2013-01-01

    Mixtures of a large, structured protein with a smaller, unstructured component are inherently complex and hard to characterize at interfaces, leading to difficulties in understanding their interfacial behaviours and, therefore, formulation optimization. Here, we investigated interfacial properties of such a mixed system. Simplicity was achieved using designed sequences in which chemical differences had been eliminated to isolate the effect of molecular size and structure, namely a short unstructured peptide (DAMP1) and its longer structured protein concatamer (DAMP4). Interfacial tension measurements suggested that the size and bulk structuring of the larger molecule led to much slower adsorption kinetics. Neutron reflectometry at equilibrium revealed that both molecules adsorbed as a monolayer to the air–water interface (indicating unfolding of DAMP4 to give a chain of four connected DAMP1 molecules), with a concentration ratio equal to that in the bulk. This suggests the overall free energy of adsorption is equal despite differences in size and bulk structure. At small interfacial extensional strains, only molecule packing influenced the stress response. At larger strains, the effect of size became apparent, with DAMP4 registering a higher stress response and interfacial elasticity. When both components were present at the interface, most stress-dissipating movement was achieved by DAMP1. This work thus provides insights into the role of proteins' molecular size and structure on their interfacial properties, and the designed sequences introduced here can serve as effective tools for interfacial studies of proteins and polymers. PMID:23303222

  7. The effect of ultrafast laser wavelength on ablation properties and implications on sample introduction in inductively coupled plasma mass spectrometry

    PubMed Central

    LaHaye, N. L.; Harilal, S. S.; Diwakar, P. K.; Hassanein, A.; Kulkarni, P.

    2015-01-01

    We investigated the role of femtosecond (fs) laser wavelength on laser ablation (LA) and its relation to laser generated aerosol counts and particle distribution, inductively coupled plasma-mass spectrometry (ICP-MS) signal intensity, detection limits, and elemental fractionation. Four different NIST standard reference materials (610, 613, 615, and 616) were ablated using 400 nm and 800 nm fs laser pulses to study the effect of wavelength on laser ablation rate, accuracy, precision, and fractionation. Our results show that the detection limits are lower for 400 nm laser excitation than 800 nm laser excitation at lower laser energies but approximately equal at higher energies. Ablation threshold was also found to be lower for 400 nm than 800 nm laser excitation. Particle size distributions are very similar for 400 nm and 800 nm wavelengths; however, they differ significantly in counts at similar laser fluence levels. This study concludes that 400 nm LA is more beneficial for sample introduction in ICP-MS, particularly when lower laser energies are to be used for ablation. PMID:26640294

  8. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
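
The power thresholds discussed above follow from the standard two-group sample-size formula. A minimal sketch using the normal approximation (a t-based calculation adds a few participants per group):

```python
from statistics import NormalDist
import math

# Sketch of the standard per-group sample size to detect a standardized mean
# difference d with power 1-beta at two-sided alpha (normal approximation).

def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# Detecting d = 0.5 needs ~63 per group; d = 0.3 needs ~175 per group,
# already beyond the review's average total sample of 153.
print(n_per_group(0.5), n_per_group(0.3))
```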

  9. Statistical Inference for Data Adaptive Target Parameters.

    PubMed

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample in V equal size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
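
The V-fold splitting scheme described above can be sketched as follows. The "algorithm" here is a trivial placeholder (it just returns the mean of its held-out input), not the paper's estimator; the point is the partition into V equal-size subsamples and the averaging of the V split-specific target parameters.

```python
import random
from statistics import mean

# Sketch: partition the sample into V equal-size folds; for each split,
# an algorithm maps the parameter-generating sample (the other V-1 folds)
# to a target parameter, which is evaluated on the held-out estimation
# sample; the V split-specific values are averaged.

def split_data_adaptive_parameter(data, V, algorithm):
    data = list(data)                                  # work on a copy
    random.Random(0).shuffle(data)
    folds = [data[i::V] for i in range(V)]             # V equal-size subsamples
    estimates = []
    for v in range(V):
        parameter_gen = [x for i, f in enumerate(folds) if i != v for x in f]
        target = algorithm(parameter_gen)              # data-adaptive target
        estimates.append(target(folds[v]))             # evaluate on held-out fold
    return mean(estimates)                             # average over the V splits

# Placeholder algorithm: "learns" nothing, target is simply the sample mean.
algorithm = lambda train: (lambda estimation_sample: mean(estimation_sample))
rng = random.Random(1)
data = [rng.gauss(10, 2) for _ in range(100)]
estimate = split_data_adaptive_parameter(data, V=5, algorithm=algorithm)
print(round(estimate, 3))
```

With equal-size folds and this placeholder algorithm, the average of the fold-specific estimates equals the overall sample mean, which makes the mechanics easy to verify.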

  10. Quality and loudness judgments for music subjected to compression limiting.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2012-08-01

    Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
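
The kind of static gain curve underlying DRC can be sketched simply. This is an illustrative hard-knee compressor, not the study's actual processing chain; threshold and ratio values are invented for the example.

```python
# Illustrative hard-knee compressor gain curve (hypothetical parameters):
# levels above the threshold are reduced by the ratio, which raises average
# loudness once the output is peak-normalized again.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve: output level in dBFS for an input level in dBFS."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A -8 dBFS peak with a -20 dBFS threshold and 4:1 ratio comes out at
# -17 dBFS (9 dB of gain reduction); passages below threshold are untouched.
print(compress_db(-8.0), compress_db(-30.0))
```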

  11. The Foundation Supernova Survey: motivation, design, implementation, and first data release

    NASA Astrophysics Data System (ADS)

    Foley, Ryan J.; Scolnic, Daniel; Rest, Armin; Jha, S. W.; Pan, Y.-C.; Riess, A. G.; Challis, P.; Chambers, K. C.; Coulter, D. A.; Dettman, K. G.; Foley, M. M.; Fox, O. D.; Huber, M. E.; Jones, D. O.; Kilpatrick, C. D.; Kirshner, R. P.; Schultz, A. S. B.; Siebert, M. R.; Flewelling, H. A.; Gibson, B.; Magnier, E. A.; Miller, J. A.; Primak, N.; Smartt, S. J.; Smith, K. W.; Wainscoat, R. J.; Waters, C.; Willman, M.

    2018-03-01

    The Foundation Supernova Survey aims to provide a large, high-fidelity, homogeneous, and precisely calibrated low-redshift Type Ia supernova (SN Ia) sample for cosmology. The calibration of the current low-redshift SN sample is the largest component of systematic uncertainties for SN cosmology, and new data are necessary to make progress. We present the motivation, survey design, observation strategy, implementation, and first results for the Foundation Supernova Survey. We are using the Pan-STARRS telescope to obtain photometry for up to 800 SNe Ia at z ≲ 0.1. This strategy has several unique advantages: (1) the Pan-STARRS system is a superbly calibrated telescopic system, (2) Pan-STARRS has observed 3/4 of the sky in grizyP1 making future template observations unnecessary, (3) we have a well-tested data-reduction pipeline, and (4) we have observed ˜3000 high-redshift SNe Ia on this system. Here, we present our initial sample of 225 SN Ia grizP1 light curves, of which 180 pass all criteria for inclusion in a cosmological sample. The Foundation Supernova Survey already contains more cosmologically useful SNe Ia than all other published low-redshift SN Ia samples combined. We expect that the systematic uncertainties for the Foundation Supernova Sample will be two to three times smaller than other low-redshift samples. We find that our cosmologically useful sample has an intrinsic scatter of 0.111 mag, smaller than other low-redshift samples. We perform detailed simulations showing that simply replacing the current low-redshift SN Ia sample with an equally sized Foundation sample will improve the precision on the dark energy equation-of-state parameter by 35 per cent, and the dark energy figure of merit by 72 per cent.

  12. Research on the performance of sand-based environmental-friendly water permeable bricks

    NASA Astrophysics Data System (ADS)

    Cai, Runze; Mandula; Chai, Jinyi

    2018-02-01

This paper examines the effects of the amount of admixture, the water-cement ratio, the aggregate grading, and the cement-aggregate ratio on the mechanical and service properties of porous concrete pavement bricks, including strength, water permeability, frost resistance, and wear resistance. The admixture can enhance the performance of water permeable brick and optimize the design mix. Experiments are conducted to determine the optimal mixing ratios, which are given as: (1) the admixture (self-developed) within the content of 5% of the cement mass; (2) water-cement ratio equal to 0.34; (3) cement-aggregate ratio equal to 0.25; (4) fine aggregate of 70% (particle size 0.6-2.36 mm) and coarse aggregate of 30% (particle size 2.36-4.75 mm). The experimental results show that the sand-based permeable concrete pavement brick has a strength of 35.6 MPa and that the water permeability coefficient is equal to 3.5×10⁻² cm/s. In addition, it was found that the concrete water permeable brick has good frost resistance and surface wear resistance, and that its production costs are much lower than those of similar sand-based water permeable bricks in China.
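
The optimal ratios above translate directly into batch masses. The sketch below interprets all ratios as mass ratios, which is an assumption of this example rather than something stated in the abstract.

```python
# Batch-mass sketch from the optimal ratios reported above. Treating the
# ratios as mass ratios is an assumption of this illustration.

def batch_masses(aggregate_kg, water_cement=0.34, cement_aggregate=0.25,
                 fine_fraction=0.70, admixture_fraction=0.05):
    cement = cement_aggregate * aggregate_kg
    water = water_cement * cement
    admixture = admixture_fraction * cement      # 5% of cement mass
    fine = fine_fraction * aggregate_kg          # 0.6-2.36 mm fraction
    coarse = (1 - fine_fraction) * aggregate_kg  # 2.36-4.75 mm fraction
    return dict(cement=cement, water=water, admixture=admixture,
                fine=fine, coarse=coarse)

# For 100 kg of aggregate: 25 kg cement, 8.5 kg water, 1.25 kg admixture.
print(batch_masses(100.0))
```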

  13. A Luminosity Function of Lyα-Emitting Galaxies at z ≈ 4.5

    NASA Technical Reports Server (NTRS)

    Dawson, Steve; Rhoads, James E.; Malhotra, Sangeeta; Stern, Daniel; Wang, JunXian; Dey, Arjun; Spinrad, Hyron; Jannuzi, Buell T.

    2007-01-01

We present a catalog of 59 z ≈ 4.5 Lyα-emitting galaxies spectroscopically confirmed in a campaign of Keck DEIMOS follow-up observations of candidates selected in the Large Area Lyman Alpha (LALA) narrowband imaging survey. We targeted 97 candidates for spectroscopic follow-up; by accounting for the variety of conditions under which we performed spectroscopy, we estimate a selection reliability of ≈76%. Together with our previous sample of Keck LRIS confirmations, the 59 sources confirmed herein bring the total catalog to 73 spectroscopically confirmed z ≈ 4.5 Lyα-emitting galaxies in the ≈0.7 deg² covered by the LALA imaging. As with the Keck LRIS sample, we find that a nonnegligible fraction of the confirmed galaxies have rest-frame equivalent widths (W_λ^rest) that exceed the maximum predicted for normal stellar populations: 17%-31% (93% confidence) of the detected galaxies exceed this maximum, and 12%-27% (90% confidence) show W_λ^rest > 240 Å. We construct a luminosity function of z ≈ 4.5 Lyα emission lines for comparison to Lyα luminosity functions at redshifts out to z < 6.6. We find no significant evidence for Lyα luminosity function evolution from z ≈ 3 to z ≈ 6. This result supports the conclusion that the intergalactic medium was largely reionized from the local universe out to z ≈ 6.5. It is somewhat at odds with the pronounced drop in the cosmic star formation rate density recently measured between z ≈ 3 and z ≈ 6 in continuum-selected Lyman-break galaxies, and therefore potentially sheds light on the relationship between the two populations.

  14. Effects of meal size, clutch, and metabolism on the energy efficiencies of juvenile Burmese pythons, Python molurus.

    PubMed

    Cox, Christian L; Secor, Stephen M

    2007-12-01

    We explored meal size and clutch (i.e., genetic) effects on the relative proportion of ingested energy that is absorbed by the gut (apparent digestive efficiency), becomes available for metabolism and growth (apparent assimilation efficiency), and is used for growth (production efficiency) for juvenile Burmese pythons (Python molurus). Sibling pythons were fed rodent meals equaling 15%, 25%, and 35% of their body mass and individuals from five different clutches were fed rodent meals equaling 25% of their body mass. For each of 11-12 consecutive feeding trials, python body mass was recorded and feces and urate of each snake was collected, dried, and weighed. Energy contents of meals (mice and rats), feces, urate, and pythons were determined using bomb calorimetry. For siblings fed three different meal sizes, growth rate increased with larger meals, but there was no significant variation among the meal sizes for any of the calculated energy efficiencies. Among the three meal sizes, apparent digestive efficiency, apparent assimilation efficiency, and production efficiency averaged 91.0%, 84.7%, and 40.7%, respectively. In contrast, each of these energy efficiencies varied significantly among the five different clutches. Among these clutches production efficiency was negatively correlated with standard metabolic rate (SMR). Clutches containing individuals with low SMR were therefore able to allocate more of ingested energy into growth.
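
The three efficiencies measured above have standard bioenergetic definitions (I = ingested energy, F = fecal energy, U = urate energy, P = energy deposited as growth). The sketch below uses a hypothetical energy budget chosen so the results land near the averages quoted in the abstract; the numbers are not the study's data.

```python
# Standard energy-budget efficiencies (textbook definitions, not copied
# from the paper): I = ingested, F = feces, U = urate, P = growth.

def efficiencies(I, F, U, P):
    return {
        "apparent_digestive": (I - F) / I,
        "apparent_assimilation": (I - F - U) / I,
        "production": P / I,
    }

# Hypothetical budget (kJ) matching the quoted averages of 91.0%, 84.7%,
# and 40.7%.
e = efficiencies(I=1000.0, F=90.0, U=63.0, P=407.0)
print({k: round(v, 3) for k, v in e.items()})
```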

  15. Proteomic Retrieval from Nucleic Acid Depleted Space-Flown Human Cells

    NASA Technical Reports Server (NTRS)

    Hammond, D. K.; Elliott, T. F.; Holubec, K.; Baker, T. L.; Allen, P. L.; Hammond, T. G.; Love, J. E.

    2006-01-01

    Compared to experiments utilizing humans in microgravity, cell-based approaches to questions about subsystems of the human system afford multiple advantages, such as crew safety and the ability to achieve statistical significance. To maximize the science return from flight samples, an optimized method was developed to recover protein from samples depleted of nucleic acid. This technique allows multiple analyses on a single cellular sample and, when applied to future cellular investigations, could accelerate solutions to significant biomedical barriers to human space exploration. Cell cultures grown in American Fluoroseal bags were treated with an RNA stabilizing agent (RNAlater - Ambion), which enabled both RNA and immunoreactive protein analyses. RNA was purified using an RNAqueous® kit (Ambion) and the remaining RNA-free supernatant was precipitated with 5% trichloroacetic acid. The precipitate was dissolved in SDS running buffer and tested for protein content using a bicinchoninic acid assay (1) (Sigma). Equal loads of protein were placed on SDS-PAGE gels and either stained with CyproOrange (Amersham) or transferred using Western blotting techniques (2,3,4). Protein recovered from RNAlater-treated cells and stained with protein stain was measured using ImageQuant volume measurements for rectangles of equal size. BSA treated in this way gave quantitative data over the protein range used (Fig 1). Human renal cortical epithelial (HRCE) cells (5,6,7) grown onboard the International Space Station (ISS) during Increment 3 and in ground control cultures exhibited similar immunoreactivity profiles for antibodies to the Vitamin D receptor (VDR) (Fig 2), the beta isoform of protein kinase C (PKCβ) (Fig 3), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Fig 4). Parallel immunohistochemical studies on formalin-fixed flight and ground control cultures also showed positive immunostaining for VDR and other biomarkers (Fig 5).
These results are consistent with data from additional antigenic recovery experiments performed on human Mullerian tumor cells cultured in microgravity (8).

  16. Mutagenicity and Lung Toxicity of Smoldering vs. Flaming Emissions from Various Biomass Fuels: Implications for Health Effects from Wildland Fires.

    PubMed

    Kim, Yong Ho; Warren, Sarah H; Krantz, Q Todd; King, Charly; Jaskot, Richard; Preston, William T; George, Barbara J; Hays, Michael D; Landis, Matthew S; Higuchi, Mark; DeMarini, David M; Gilmour, M Ian

    2018-01-24

    The increasing size and frequency of wildland fires are leading to greater potential for cardiopulmonary disease and cancer in exposed populations; however, little is known about how the types of fuel and combustion phases affect these adverse outcomes. We evaluated the mutagenicity and lung toxicity of particulate matter (PM) from flaming vs. smoldering phases of five biomass fuels, and compared results by equal mass or emission factors (EFs) derived from amount of fuel consumed. A quartz-tube furnace coupled to a multistage cryotrap was employed to collect smoke condensate from flaming and smoldering combustion of red oak, peat, pine needles, pine, and eucalyptus. Samples were analyzed chemically and assessed for acute lung toxicity in mice and mutagenicity in Salmonella . The average combustion efficiency was 73 and 98% for the smoldering and flaming phases, respectively. On an equal mass basis, PM from eucalyptus and peat burned under flaming conditions induced significant lung toxicity potencies (neutrophil/mass of PM) compared to smoldering PM, whereas high levels of mutagenicity potencies were observed for flaming pine and peat PM compared to smoldering PM. When effects were adjusted for EF, the smoldering eucalyptus PM had the highest lung toxicity EF (neutrophil/mass of fuel burned), whereas smoldering pine and pine needles had the highest mutagenicity EF. These latter values were approximately 5, 10, and 30 times greater than those reported for open burning of agricultural plastic, woodburning cookstoves, and some municipal waste combustors, respectively. PM from different fuels and combustion phases have appreciable differences in lung toxic and mutagenic potency, and on a mass basis, flaming samples are more active, whereas smoldering samples have greater effect when EFs are taken into account. Knowledge of the differential toxicity of biomass emissions will contribute to more accurate hazard assessment of biomass smoke exposures. 
https://doi.org/10.1289/EHP2200.
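
The equal-mass vs emission-factor comparison above amounts to a unit conversion: potency per mass of PM times the PM emission factor (mass of PM emitted per mass of fuel burned) gives potency per mass of fuel. The sketch below uses invented numbers, not the study's measurements, to show why the ranking can flip.

```python
# Sketch of the EF adjustment (hypothetical numbers, not the study's data):
# potency per mg PM x mg PM emitted per g fuel = potency per g fuel.

def potency_per_fuel(potency_per_mg_pm, pm_ef_mg_per_g_fuel):
    return potency_per_mg_pm * pm_ef_mg_per_g_fuel

# Flaming: more potent PM but far less PM per gram of fuel (high combustion
# efficiency); smoldering can dominate once the EF is applied.
flaming = potency_per_fuel(potency_per_mg_pm=10.0, pm_ef_mg_per_g_fuel=2.0)
smoldering = potency_per_fuel(potency_per_mg_pm=3.0, pm_ef_mg_per_g_fuel=40.0)
print(flaming, smoldering)
```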

  17. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  18. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
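
The mechanism this paper diagnoses can be reproduced in a toy simulation: if only significant results are published, small studies need large observed effects to cross the threshold, inducing a negative effect-size/sample-size correlation even when the true effect is constant. All parameters below are invented for illustration.

```python
import math
import random
from statistics import mean

# Toy simulation of selection-for-significance (invented parameters).
rng = random.Random(42)
true_d = 0.2
published = []
for _ in range(20000):
    n = rng.randint(10, 200)            # per-group sample size
    se = math.sqrt(2.0 / n)             # approximate SE of Cohen's d
    d_obs = rng.gauss(true_d, se)       # observed effect size
    if d_obs > 1.96 * se:               # "significant" -> published
        published.append((d_obs, n))

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r = pearson([d for d, n in published], [n for d, n in published])
print(round(r, 2))   # negative, mirroring the empirical finding
```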

  19. 40 CFR 86.1309-90 - Exhaust gas sampling system; Otto-cycle and non-petroleum-fueled engines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC...

  20. 40 CFR 86.1309-90 - Exhaust gas sampling system; Otto-cycle and non-petroleum-fueled engines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC...

  1. Cantico Delle Creature: A microtonal original composition for soprano and string quartet to a text by St. Francis of Assisi, including analytical commentary

    NASA Astrophysics Data System (ADS)

    Sabol, Jason A.

    Cantico delle Creature is an original piece of music for soprano and string quartet composed in 72 tone per octave equal temperament, dividing each semitone into six equal parts called twelfth-tones. This system of tuning makes it possible to combine just intonation and spectral principles based on the harmonic series with real imitation, modulation, and polyphony. Supplemental text discusses several aspects of microtonal structure and pedagogy, including the representation of the first 64 partials of the harmonic series in 72 tone equal temperament, performance of natural string harmonics, the relationship between interval size and vibration ratio, pitch to frequency conversion, and analysis of several passages in the musical score.
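
The pitch-to-frequency conversion mentioned in the commentary is direct to compute: in 72-tone equal temperament each step is 1/72 of an octave, so a pitch k twelfth-tone steps above a reference has frequency ref · 2^(k/72). A minimal sketch (the A4 = 440 Hz reference is an assumption of this example):

```python
# 72-TET pitch-to-frequency conversion: one step = 1/72 octave (a twelfth-tone).

def freq_72tet(steps_from_ref, ref_hz=440.0):
    return ref_hz * 2.0 ** (steps_from_ref / 72.0)

semitone = freq_72tet(6)     # 6 twelfth-tone steps = one equal-tempered semitone
octave = freq_72tet(72)      # 72 steps = one octave (880 Hz from A4)
step_cents = 1200 / 72       # each twelfth-tone step is 16.67 cents
print(round(semitone, 2), octave, round(step_cents, 2))
```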

  2. Types and origin of dolostones in the Lower Palaeozoic of the North China Platform

    NASA Astrophysics Data System (ADS)

    Zengzhao, Feng; Zhenkui, Jin

    1994-11-01

Dolostones are very common in the Lower Palaeozoic of the North China Platform. They can be divided into two large groups: mud-silt-sized crystalline dolostones and saccharoid dolostones. The former can be further divided into gypsiferous and nongypsiferous mud-silt-sized crystalline dolostones and the latter into equal-sized and unequal-sized saccharoid dolostones. Gypsiferous, mud-silt-sized, crystalline dolostones are well laminated, show bird's-eyes and mudcracks, and contain gypsum crystals or nodules. Their δ13C is +0.42‰ to +2.21‰, and δ18O is -6.01‰ to -4.77‰ (PDB standard). These dolostones are similar sedimentologically to the sabkha penecontemporaneous dolostones in the Persian Gulf and were formed in supratidal flats by hypersaline sea water in arid conditions. Nongypsiferous, mud-silt-sized, crystalline dolostones are similar to the gypsiferous ones in texture and structure but do not contain gypsum. Their δ13C is -3.69‰ to +3.41‰, and δ18O is -8.17‰ to -4.04‰. They are similar to the supratidal penecontemporaneous dolostones on the Bahamian Platform and were formed in supratidal flats by hypersaline sea water in humid conditions. Equal-sized saccharoid dolostones are composed of dolomites of approximately the same size. Their δ13C is -2.11‰ to +2.10‰, and δ18O is -9.33‰ to -4.09‰. These dolostones mainly resulted from dorag dolomitization. Unequal-sized saccharoid dolostones are composed of dolomites of greatly different sizes. Their δ13C is -4.72‰ to -1.08‰, and δ18O is -9.27‰ to -7.32‰. These dolostones resulted from the recrystallization of earlier dolostones. The reservoir characteristics of dolostones are affected by many factors. Production practice shows that non-clayey silt-sized crystalline dolostones are the best dolostone reservoir rocks.

  3. 16 CFR 300.10 - Disclosure of information on labels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... type or lettering of equal size and conspicuousness. (b) Subject to the provisions of § 300.8, any non-required information or representations placed on the product shall not minimize, detract from, or conflict...

  4. 24 CFR 3282.413 - Replacement or repurchase of manufactured home from purchaser.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... home be replaced by the manufacturer with a manufactured home substantially equal in size, equipment... to the attention of the Secretary. (c) In those situations where, under contract or other applicable...

  5. Comparison of tire and road wear particle concentrations in sediment for watersheds in France, Japan, and the United States by quantitative pyrolysis GC/MS analysis.

    PubMed

    Unice, Ken M; Kreider, Marisa L; Panko, Julie M

    2013-08-06

    Impacts of surface runoff to aquatic species are an ongoing area of concern. Tire and road wear particles (TRWP) are a constituent of runoff, and determining accurate TRWP concentrations in sediment is necessary in order to evaluate the likelihood that these particles present a risk to the aquatic environment. TRWP consist of approximately equal mass fractions of tire tread rubber and road surface mineral encrustations. Sampling was completed in the Seine (France), Chesapeake (U.S.), and Yodo-Lake Biwa (Japan) watersheds to quantify TRWP in the surficial sediment of watersheds characterized by a wide diversity of population densities and land uses. By using a novel quantitative pyrolysis-GC/MS analysis for rubber polymer, we detected TRWP in 97% of the 149 sediment samples collected. The mean concentrations of TRWP were 4500 (n = 49; range = 62-11 600), 910 (n = 50; range = 50-4400) and 770 (n = 50; range = 26-4600) μg/g d.w. for the characterized portions of the Seine, Chesapeake and Yodo-Lake Biwa watersheds, respectively. A subset of samples from the watersheds (n = 45) was pooled to evaluate TRWP metals, grain size and organic carbon correlations by principal components analysis (PCA), which indicated that four components explain 90% of the variance. The PCA components appeared to correspond to (1) metal alloys possibly from brake wear (primarily Cu, Pb, Zn), (2) crustal minerals (primarily Al, V, Fe), (3) metals mediated by microbial immobilization (primarily Co, Mn, Fe with TOC), and (4) TRWP and other particulate deposition (primarily TRWP with grain size and TOC). This study should provide useful information for assessing potential aquatic effects related to tire service life.
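The pooled-sample PCA step can be illustrated with a self-contained sketch. The data here are synthetic stand-ins for the study's sediment metal/grain-size/TOC table, and the PCA is a plain numpy implementation via SVD of the centered data matrix:

```python
import numpy as np

def pca_explained_variance(X):
    """PCA via SVD of the centered data matrix; returns the
    fraction of total variance explained by each component."""
    Xc = X - X.mean(axis=0)                 # center each variable
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s**2                              # variance along each PC (up to an n-1 factor)
    return var / var.sum()

# Synthetic stand-in for the pooled table (n = 45 samples, columns such
# as Cu, Pb, Zn, TOC, grain size ...): here 4 correlated columns driven
# by two latent "sources", analogous to the components found in the study.
rng = np.random.default_rng(0)
latent = rng.normal(size=(45, 2))
X = latent @ np.array([[1.0, 0.8, 0.1, 0.0],
                       [0.0, 0.1, 1.0, 0.9]]) + 0.05 * rng.normal(size=(45, 4))

ratios = pca_explained_variance(X)
print(ratios.round(3), "cumulative:", ratios.cumsum().round(3))
```

With two latent sources, the first two components account for nearly all the variance; in the study, four components were needed to reach 90%.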

  6. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    PubMed

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where the number of features, represented by probesets, is typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, allowing clinicians to make more informed decisions regarding patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of multiclass predictive classification accuracy over different cancer subtypes, speedup in execution, and scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance compared with features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
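FSR's actual algorithm is described in the article and implemented in MATLAB. As a generic, hypothetical illustration of the idea of feature set reduction (cheaply discarding uninformative features before a more expensive selection step), a variance filter followed by a univariate class-separation ranking might look like:

```python
import numpy as np

def reduce_feature_set(X, y, var_quantile=0.5, k=5):
    """Two-stage reduction sketch (illustrative only, not FSR itself):
    drop low-variance features first, then rank survivors by an
    ANOVA-style between/within class variance ratio and keep the top k."""
    var = X.var(axis=0)
    idx = np.flatnonzero(var >= np.quantile(var, var_quantile))  # stage 1
    scores = []
    for j in idx:                                                # stage 2
        groups = [X[y == c, j] for c in np.unique(y)]
        grand = X[:, j].mean()
        between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
        within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        scores.append(between / (within + 1e-12))
    return np.sort(idx[np.argsort(scores)[::-1][:k]])

# Toy data: 200 features, only features 0 and 1 differ between classes.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 200))
X[y == 1, 0] += 3.0
X[y == 1, 1] += 3.0
selected = reduce_feature_set(X, y, k=5)
print(selected)
```

The two informative features survive both stages; the point of the first stage, as with FSR, is that the expensive per-feature scoring only runs on the reduced set.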

  7. Cytotoxicity of ZnO Nanoparticles Can Be Tailored by Modifying Their Surface Structure: A Green Chemistry Approach for Safer Nanomaterials

    PubMed Central

    2015-01-01

    ZnO nanoparticles (NP) are extensively used in numerous nanotechnology applications; however, they also happen to be one of the most toxic nanomaterials. This raises significant environmental and health concerns and calls for the need to develop new synthetic approaches to produce safer ZnO NP, while preserving their attractive optical, electronic, and structural properties. In this work, we demonstrate that the cytotoxicity of ZnO NP can be tailored by modifying their surface-bound chemical groups, while maintaining the core ZnO structure and related properties. Two equally sized (9.26 ± 0.11 nm) ZnO NP samples were synthesized from the same zinc acetate precursor using a forced hydrolysis process, and their surface chemical structures were modified by using different reaction solvents. X-ray diffraction and optical studies showed that the lattice parameters, optical properties, and band gap (3.44 eV) of the two ZnO NP samples were similar. However, FTIR spectroscopy showed significant differences in the surface structures and surface-bound chemical groups. This led to major differences in the zeta potential, hydrodynamic size, photocatalytic rate constant, and more importantly, their cytotoxic effects on Hut-78 cancer cells. The ZnO NP sample with the higher zeta potential and catalytic activity displayed a 1.5-fold stronger cytotoxic effect on cancer cells. These results suggest that by modifying the synthesis parameters/conditions and the surface chemical structures of the nanocrystals, their surface charge density, catalytic activity, and cytotoxicity can be tailored. This provides a green chemistry approach to produce safer ZnO NP. PMID:25068096

  8. THE ZURICH ENVIRONMENTAL STUDY (ZENS) OF GALAXIES IN GROUPS ALONG THE COSMIC WEB. V. PROPERTIES AND FREQUENCY OF MERGING SATELLITES AND CENTRALS IN DIFFERENT ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipino, A.; Cibinel, A.; Tacchella, S.

    2014-12-20

    We use the Zurich Environmental Study database to investigate the environmental dependence of the merger fraction Γ and merging galaxy properties in a sample of ∼1300 group galaxies with M > 10^9.2 M☉ and 0.05 < z < 0.0585. In all galaxy mass bins investigated in our study, we find that Γ decreases by a factor of ∼2-3 in groups with halo masses M_HALO > 10^13.5 M☉ relative to less massive systems, indicating a suppression of merger activity in large potential wells. In the fiducial case of relaxed groups only, we measure a variation of ΔΓ/Δlog(M_HALO) ∼ -0.07 dex^-1, which is almost independent of galaxy mass and merger stage. At galaxy masses >10^10.2 M☉, most mergers are dry accretions of quenched satellites onto quenched centrals, leading to a strong increase of Γ with decreasing group-centric distance at these mass scales. Both satellite and central galaxies in these high-mass mergers do not differ in color and structural properties from a control sample of nonmerging galaxies of equal mass and rank. At galaxy masses <10^10.2 M☉, where we mostly probe satellite-satellite pairs and mergers between star-forming systems, close pairs (projected distance <10-20 kpc) show instead ∼2× enhanced (specific) star formation rates and ∼1.5× larger sizes than similar-mass, nonmerging satellites. The increase in both size and star formation rate leads to similar surface star formation densities in the merging and control-sample satellite populations.

  9. A primer on trace metal-sediment chemistry

    USGS Publications Warehouse

    Horowitz, Arthur J.

    1985-01-01

    In most aquatic systems, concentrations of trace metals in suspended sediment and the top few centimeters of bottom sediment are far greater than concentrations of trace metals dissolved in the water column. Consequently, the distribution, transport, and availability of these constituents cannot be intelligently evaluated, nor can their environmental impact be determined or predicted, solely through the sampling and analysis of dissolved phases. This Primer is designed to acquaint the reader with the basic principles that govern the concentration and distribution of trace metals associated with bottom and suspended sediments. The sampling and analysis of suspended and bottom sediments are very important for monitoring studies, not only because trace metal concentrations associated with them are orders of magnitude higher than in the dissolved phase, but also because of several other factors. Riverine transport of trace metals is dominated by sediment. In addition, bottom sediments serve as a source for suspended sediment and can provide a historical record of chemical conditions. This record will help establish area baseline metal levels against which existing conditions can be compared. Many physical and chemical factors affect a sediment's capacity to collect and concentrate trace metals. The physical factors include grain size, surface area, surface charge, cation exchange capacity, composition, and so forth. Increases in metal concentrations are strongly correlated with decreasing grain size and increasing surface area, surface charge, cation exchange capacity, and increasing concentrations of iron and manganese oxides, organic matter, and clay minerals. Chemical factors are equally important, especially for differentiating between samples having similar bulk chemistries and for inferring or predicting environmental availability. Chemical factors entail phase associations (with such sedimentary components as interstitial water, sulfides, carbonates, and organic matter) and the ways in which the metals are entrained by the sediments (such as adsorption, complexation, and incorporation within mineral lattices).

  10. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
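The paper's formulas apply to Yuen's trimmed-mean test; the underlying cost-minimization idea can be sketched with the classical untrimmed, known-variance analogue, where the allocation minimizing total cost at fixed power is n2/n1 = (σ2/σ1)·√(c1/c2):

```python
from math import ceil, sqrt
from scipy.stats import norm

def optimal_allocation(delta, sd1, sd2, cost1, cost2, alpha=0.05, power=0.80):
    """Sample sizes for a two-sided, two-sample z-test on means reaching
    the target power at minimum total cost (classical normal-theory
    analogue of the paper's trimmed-mean formulas, not Yuen's test)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    r = (sd2 / sd1) * sqrt(cost1 / cost2)        # cost-optimal n2 / n1
    n1 = (sd1**2 + sd2**2 / r) * (z / delta) ** 2
    return ceil(n1), ceil(r * n1)

# Group 2 is 4x as costly to sample and 2x as variable: the two effects
# cancel here and the optimal allocation is equal group sizes.
n1, n2 = optimal_allocation(delta=0.5, sd1=1.0, sd2=2.0, cost1=1.0, cost2=4.0)
print(n1, n2)
```

Changing the cost or variance ratio shifts the allocation away from 1:1, which is exactly the manipulation of the allocation ratio the abstract describes.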

  11. Proof of concept demonstration of optimal composite MRI endpoints for clinical trials.

    PubMed

    Edland, Steven D; Ard, M Colin; Sridhar, Jaiashre; Cobia, Derin; Martersteck, Adam; Mesulam, M Marsel; Rogalski, Emily J

    2016-09-01

    Atrophy measures derived from structural MRI are promising outcome measures for early phase clinical trials, especially for rare diseases such as primary progressive aphasia (PPA), where the small available subject pool limits our ability to perform meaningfully powered trials with traditional cognitive and functional outcome measures. We investigated a composite atrophy index in 26 PPA participants with longitudinal MRIs separated by two years. Rogalski et al. [Neurology 2014;83:1184-1191] previously demonstrated that atrophy of the left perisylvian temporal cortex (PSTC) is a highly sensitive measure of disease progression in this population and a promising endpoint for clinical trials. Using methods described by Ard et al. [Pharmaceutical Statistics 2015;14:418-426], we constructed a composite atrophy index composed of a weighted sum of volumetric measures of 10 regions of interest within the left perisylvian cortex using weights that maximize signal-to-noise and minimize sample size required of trials using the resulting score. Sample size required to detect a fixed percentage slowing in atrophy in a two-year clinical trial with equal allocation of subjects across arms and 90% power was calculated for the PSTC and optimal composite surrogate biomarker endpoints. The optimal composite endpoint required 38% fewer subjects to detect the same percent slowing in atrophy than required by the left PSTC endpoint. Optimal composites can increase the power of clinical trials and increase the probability that smaller trials are informative, an observation especially relevant for PPA, but also for related neurodegenerative disorders including Alzheimer's disease.
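For a weighted-sum composite of changes with mean vector μ and covariance Σ, the signal-to-noise-maximizing weights are proportional to Σ⁻¹μ, which is the core of the approach the study applies. A minimal sketch on hypothetical numbers (three regions, not the study's 10 left-perisylvian ROIs):

```python
import numpy as np

def optimal_composite_weights(mean_change, cov_change):
    """Weights maximizing the mean-to-SD ratio of a weighted-sum
    composite: w proportional to Sigma^{-1} mu (scale is irrelevant)."""
    w = np.linalg.solve(cov_change, mean_change)
    return w / np.abs(w).sum()               # normalize for readability

def snr(w, mu, cov):
    """Mean-to-SD ratio of the composite w @ x."""
    return (w @ mu) / np.sqrt(w @ cov @ w)

# Hypothetical 2-year mean atrophy per ROI and covariance of the changes:
mu = np.array([1.0, 0.5, 0.2])
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])

w_opt = optimal_composite_weights(mu, cov)
equal = np.ones(3) / 3
print(snr(w_opt, mu, cov), ">=", snr(equal, mu, cov))
```

Since the required sample size scales as 1/SNR², any gain in composite SNR over a single-region or equal-weight endpoint translates directly into fewer subjects, which is the 38% reduction reported above.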

  12. Lattice dynamics and electron-phonon coupling calculations using nondiagonal supercells

    NASA Astrophysics Data System (ADS)

    Lloyd-Williams, Jonathan; Monserrat, Bartomeu

    Quantities derived from electron-phonon coupling matrix elements require a fine sampling of the vibrational Brillouin zone. Converged results are typically not obtainable using the direct method, in which a perturbation is frozen into the system and the total energy derivatives are calculated using a finite difference approach, because the size of simulation cell needed is prohibitively large. We show that it is possible to determine the response of a periodic system to a perturbation characterized by a wave vector with reduced fractional coordinates (m1/n1, m2/n2, m3/n3) using a supercell containing a number of primitive cells equal to the least common multiple of n1, n2, and n3. This is accomplished by utilizing supercell matrices containing nonzero off-diagonal elements. We present the results of electron-phonon coupling calculations using the direct method to sample the vibrational Brillouin zone with grids of unprecedented size for a range of systems, including the canonical example of diamond. We also demonstrate that the use of nondiagonal supercells reduces by over an order of magnitude the computational cost of obtaining converged vibrational densities of states and phonon dispersion curves. J.L.-W. is supported by the Engineering and Physical Sciences Research Council (EPSRC). B.M. is supported by Robinson College, Cambridge, and the Cambridge Philosophical Society. This work was supported by EPSRC Grants EP/J017639/1 and EP/K013564/1.
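The supercell-size rule quoted above is easy to state in code. For example, the q-point (1/2, 1/3, 1/4) needs only lcm(2, 3, 4) = 12 primitive cells in a nondiagonal supercell, versus 2 × 3 × 4 = 24 in a diagonal one:

```python
from math import gcd

def nondiagonal_supercell_size(n1, n2, n3):
    """Number of primitive cells needed to accommodate a perturbation
    with wave vector (m1/n1, m2/n2, m3/n3): lcm(n1, n2, n3)."""
    lcm2 = lambda a, b: a * b // gcd(a, b)
    return lcm2(lcm2(n1, n2), n3)

print(nondiagonal_supercell_size(2, 3, 4), "vs diagonal", 2 * 3 * 4)
```

The saving grows quickly for fine q-grids, since a diagonal supercell needs the product n1·n2·n3 while the nondiagonal one needs only the least common multiple.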

  13. Wrinkling crystallography on spherical surfaces

    PubMed Central

    Brojan, Miha; Terwagne, Denis; Lagrange, Romain; Reis, Pedro M.

    2015-01-01

    We present the results of an experimental investigation on the crystallography of the dimpled patterns obtained through wrinkling of a curved elastic system. Our macroscopic samples comprise a thin hemispherical shell bound to an equally curved compliant substrate. Under compression, a crystalline pattern of dimples self-organizes on the surface of the shell. Stresses are relaxed by both out-of-surface buckling and the emergence of defects in the quasi-hexagonal pattern. Three-dimensional scanning is used to digitize the topography. Regarding the dimples as point-like packing units produces spherical Voronoi tessellations with cells that are polydisperse and distorted, away from their regular shapes. We analyze the structure of crystalline defects, as a function of system size. Disclinations are observed and, above a threshold value, dislocations proliferate rapidly with system size. Our samples exhibit striking similarities with other curved crystals of charged particles and colloids. Differences are also found and attributed to the far-from-equilibrium nature of our patterns due to the random and initially frozen material imperfections which act as nucleation points, the presence of a physical boundary which represents an additional source of stress, and the inability of dimples to rearrange during crystallization. Even if we do not have access to the exact form of the interdimple interaction, our experiments suggest a broader generality of previous results of curved crystallography and their robustness on the details of the interaction potential. Furthermore, our findings open the door to future studies on curved crystals far from equilibrium. PMID:25535355

  14. The population of faint Jupiter family comets near the Earth

    NASA Astrophysics Data System (ADS)

    Fernández, Julio A.; Morbidelli, Alessandro

    2006-11-01

    We study the population of faint Jupiter family comets (JFCs) that approach the Earth (perihelion distances q < 1.3 AU) by applying a debiasing technique to the observed sample. We found for the debiased cumulative luminosity function (CLF) of absolute total magnitudes H a bimodal distribution in which brighter comets (H ≲ 9) follow a linear relation with a steep slope α = 0.65±0.14, while fainter comets follow a much shallower slope α = 0.25±0.06 down to H ∼ 18. The slope can be pushed up to α = 0.35±0.09 if a second break in the H distribution to a much shallower slope is introduced at H ∼ 16. We estimate a population of about 10³ faint JFCs with q < 1.3 AU and 10

  15. Evolution of efficient methods to sample lead sources, such as house dust and hand dust, in the homes of children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Que Hee, S.S.; Peace, B.; Clark, C.S.

    Efficient sampling methods to recover lead-containing house dust and hand dust have been evolved so that sufficient lead is collected for analysis and to ensure that correlational analyses linking these two parameters to blood lead are not dependent on the efficiency of sampling. Precise collection of loose house dust from a 1-unit area (484 cm²) with a Tygon or stainless steel sampling tube connected to a portable sampling pump (1.2 to 2.5 liters/min) required repetitive sampling (three times). The Tygon tube sampling technique for loose house dust <177 μm in diameter was around 72% efficient with respect to dust weight and lead collection. A representative house dust contained 81% of its total weight in this fraction. A single handwipe for applied loose hand dust was not acceptably efficient or precise, and at least three wipes were necessary to achieve recoveries of >80% of the lead applied. House dusts of different particle sizes <246 μm adhered equally well to hands. Analysis of lead-containing material usually required at least three digestions/decantations using hot plate or microwave techniques to allow at least 90% of the lead to be recovered. It was recommended that other investigators validate their handwiping, house dust sampling, and digestion techniques to facilitate comparison of results across studies. The final methodology for the Cincinnati longitudinal study was three sampling passes for surface dust using a stainless steel sampling tube; three microwave digestions/decantations for analysis of dust and paint; and three wipes with handwipes, with one digestion/decantation for the analysis of six handwipes together.

  16. Phase diagram of heteronuclear Janus dumbbells

    NASA Astrophysics Data System (ADS)

    O'Toole, Patrick; Giacometti, Achille; Hudson, Toby

    Using Aggregation-Volume-Bias Monte Carlo simulations along with Successive Umbrella Sampling and Histogram Re-weighting, we study the phase diagram of a system of dumbbells formed by two touching spheres having variable sizes, as well as different interaction properties. The first sphere ($h$) interacts with all other spheres belonging to different dumbbells with a hard-sphere potential. The second sphere ($s$) interacts via a square-well interaction with other $s$ spheres belonging to different dumbbells and with a hard-sphere potential with all remaining $h$ spheres. We focus on the region where the $s$ sphere is larger than the $h$ sphere, as measured by a parameter $1 \le \alpha \le 2$ controlling the relative size of the two spheres. As $\alpha \to 2$ a simple fluid of square-well spheres is recovered, whereas $\alpha \to 1$ corresponds to the Janus dumbbell limit, where the $h$ and $s$ spheres have equal sizes. Many phase diagrams, falling into three classes, are observed, depending on the value of $\alpha$. The region $1.8 \le \alpha \le 2$ is dominated by a gas-liquid phase separation very similar to that of a pure square-well fluid with varied critical temperature and density. When $1.3 \le \alpha \le 1.8$ we find a progressive destabilization of the gas-liquid phase diagram by the onset of self-assembled structures, which eventually leads to metastability of the gas-liquid transition below $\alpha = 1.2$.

  17. Application of field dependent polynomial model

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan

    2016-09-01

    Extremely wide-field imaging systems have many advantages for imaging large scenes, whether in microscopy, all-sky cameras, or security technologies. The price of the large viewing angle is the amount of aberration these imaging systems introduce. Modeling wavefront aberrations with Zernike polynomials has been known for a long time and is widely used. Our method does not model the system aberrations by modeling the wavefront; instead, it directly models the aberrated point spread function of the imaging system. This is a complicated task, and with conventional methods it was difficult to achieve the desired accuracy. Our optimization technique, which searches for the coefficients of space-variant Zernike polynomials, can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that it describes the whole space-variant system, unlike the majority of models, which are at least partly invariant. A remaining issue is equalizing the size of the modeled point spread function, which is comparable to the pixel size; issues associated with sampling, pixel size, and the pixel sensitivity profile must be taken into account in the design. The model was verified on a series of laboratory test patterns, test images of laboratory light sources, and, consequently, real images obtained with the extremely wide-field imaging system WILLIAM. Results of modeling this system are presented in this article.
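As background for the Zernike-based modeling, the standard radial polynomial underlying the basis can be written directly from its closed-form sum (this is the textbook definition, not the paper's space-variant PSF model):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Standard Zernike radial polynomial R_n^m(rho), with |m| <= n and
    n - |m| even; the building block of wavefront aberration models."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

print(zernike_radial(2, 0, 1.0))   # R_2^0(rho) = 2 rho^2 - 1, so 1.0 at rho = 1
```

In a space-variant model such as the one described above, the coefficients multiplying these basis terms become functions of field position rather than constants.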

  18. Ultrafast axial scanning for two-photon microscopy via a digital micromirror device and binary holography.

    PubMed

    Cheng, Jiyi; Gu, Chenglin; Zhang, Dapeng; Wang, Dien; Chen, Shih-Chi

    2016-04-01

    In this Letter, we present an ultrafast nonmechanical axial scanning method for two-photon excitation (TPE) microscopy based on binary holography using a digital micromirror device (DMD), achieving a scanning rate of 4.2 kHz, a scanning range of ∼180 μm, and a scanning resolution (minimum step size) of ∼270 nm. Axial scanning is achieved by projecting the femtosecond laser to a DMD programmed with binary holograms of spherical wavefronts of increasing/decreasing radii. To guide the scanner design, we have derived the parametric relationships between the DMD parameters (i.e., aperture and pixel size) and the axial scanning characteristics, including (1) maximum optical power, (2) minimum step size, and (3) scan range. To verify the results, the DMD scanner is integrated with a custom-built TPE microscope that operates at 60 frames per second. In the experiment, we scanned a pollen sample via both the DMD scanner and a precision z-stage. The results show the DMD scanner generates images of equal quality throughout the scanning range. The overall efficiency of the TPE system was measured to be ∼3%. With the high scanning rate, the DMD scanner may find important applications in random-access imaging or high-speed volumetric imaging that enables visualization of highly dynamic biological processes in 3D with submillisecond temporal resolution.

  19. Gold Nanoparticle Labels Amplify Ellipsometric Signals

    NASA Technical Reports Server (NTRS)

    Venkatasubbarao, Srivatsa

    2008-01-01

    The ellipsometric method reported in the immediately preceding article was developed in conjunction with a method of using gold nanoparticles as labels on biomolecules that one seeks to detect. The purpose of the labeling is to exploit the optical properties of the gold nanoparticles in order to amplify the measurable ellipsometric effects and thereby to enable ultrasensitive detection of the labeled biomolecules without need to develop more-complex ellipsometric instrumentation. The colorimetric, polarization, light-scattering, and other optical properties of nanoparticles depend on their sizes and shapes. In the present method, these size-and-shape-dependent properties are used to magnify the polarization of scattered light and the diattenuation and retardance of signals derived from ellipsometry. The size-and-shape-dependent optical properties of the nanoparticles make it possible to interrogate the nanoparticles by use of light of various wavelengths, as appropriate, to optimally detect particles of a specific type at high sensitivity. Hence, by incorporating gold nanoparticles bound to biomolecules as primary or secondary labels, the performance of ellipsometry as a means of detecting the biomolecules can be improved. The use of gold nanoparticles as labels in ellipsometry has been found to afford sensitivity that equals or exceeds the sensitivity achieved by use of fluorescence-based methods. Potential applications for ellipsometric detection of gold nanoparticle-labeled biomolecules include monitoring molecules of interest in biological samples, in-vitro diagnostics, process monitoring, general environmental monitoring, and detection of biohazards.

  20. Binary Mixtures of Particles with Different Diffusivities Demix.

    PubMed

    Weber, Simon N; Weber, Christoph A; Frey, Erwin

    2016-02-05

    The influence of size differences, shape, mass, and persistent motion on phase separation in binary mixtures has been intensively studied. Here we focus on the exclusive role of diffusivity differences in binary mixtures of equal-sized particles. We find an effective attraction between the less diffusive particles, which are essentially caged in the surrounding species with the higher diffusion constant. This effect leads to phase separation for systems above a critical size: A single close-packed cluster made up of the less diffusive species emerges. Experiments for testing our predictions are outlined.

  1. Factors affecting plant growth in membrane nutrient delivery

    NASA Technical Reports Server (NTRS)

    Dreschel, T. W.; Wheeler, R. M.; Sager, J. C.; Knott, W. M.

    1990-01-01

    The development of the tubular membrane plant growth unit for the delivery of water and nutrients to roots in microgravity has recently focused on measuring the effects of changes in physical variables controlling solution availability to the plants. Significant effects of membrane pore size and the negative pressure used to contain the solution were demonstrated. Generally, wheat grew better in units with a larger pore size but equal negative pressure and in units with the same pore size but less negative pressure. Lettuce also exhibited better plant growth at less negative pressure.

  2. Computer simulation of formation and decomposition of Au13 nanoparticles

    NASA Astrophysics Data System (ADS)

    Stishenko, P.; Svalova, A.

    2017-08-01

    To study the Ostwald ripening process of Au13 nanoparticles, a two-scale model is constructed: an analytical approximation of the average nanoparticle energy as a function of nanoparticle size and structural motif, and a Monte Carlo model of a 1000-particle ensemble. Simulation results show different behavior for particles of different structural motifs. A change in the distributions of atom coordination numbers during the Ostwald ripening process was observed. Nanoparticles of equal size and shape with the face-centered cubic structure and the largest sizes appeared to be the most stable.

  3. "Size Matters": Women in High Tech Start-Ups

    NASA Astrophysics Data System (ADS)

    Lackritz, Hilary

    2001-03-01

    For those who want constant excitement, change, and rapid opportunities to have an impact in the technical world, start-up companies offer wonderful challenges. This talk will focus realistically on rewards and risks in the start-up world. An outline of the differences between the high tech start-ups and the academic and consulting worlds from a personal viewpoint will be presented. Size usually does matter, and in this case, small size can equal independence, entrepreneurship, and other advantages that are hard to come by in Dilbert’s corporate world.

  4. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    PubMed

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
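The mechanics of the chi-square-difference test the article critiques can be sketched as follows. The fit statistics here are hypothetical, and this shows only the conventional procedure, not the proposed equivalence-testing replacement:

```python
from scipy.stats import chi2

def chisq_difference_test(chisq_restricted, df_restricted,
                          chisq_base, df_base, alpha=0.05):
    """Conventional nested-model comparison: the difference of the two
    chi-square statistics is referred to a chi-square distribution with
    the difference in degrees of freedom. (The article argues this is
    easily misused when the base model is itself misspecified.)"""
    d_stat = chisq_restricted - chisq_base
    d_df = df_restricted - df_base
    p = chi2.sf(d_stat, d_df)
    return d_stat, d_df, p, p < alpha

# Hypothetical fits: configural model chi2 = 180.0 on 120 df; the more
# restricted equal-loadings (metric invariance) model chi2 = 200.0 on 128 df.
d_stat, d_df, p, reject = chisq_difference_test(200.0, 128, 180.0, 120)
print(f"diff = {d_stat:.1f} on {d_df} df, p = {p:.4f}")
```

Under the established rule, a significant result here would stop the invariance sequence at the configural model; the article's point is that this decision rule controls neither error rate well when the base model only approximates the population.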

  5. Dynamic fractals in spatial evolutionary games

    NASA Astrophysics Data System (ADS)

    Kolotev, Sergei; Malyutin, Aleksandr; Burovski, Evgeni; Krashakov, Sergei; Shchur, Lev

    2018-06-01

    We investigate critical properties of a spatial evolutionary game based on the Prisoner's Dilemma. Simulations demonstrate a jump in the component densities accompanied by drastic changes in average sizes of the component clusters. We argue that the cluster boundary is a random fractal. Our simulations are consistent with the fractal dimension of the boundary being equal to 2, and the cluster boundaries are hence asymptotically space filling as the system size increases.

  6. Equal Opportunities and Vocational Training. Qualifications and Educational Needs of Co-Working Spouses of Owners of Small and Medium-Sized Enterprises.

    ERIC Educational Resources Information Center

    Riis-Jorgensen, Karin

    A study examined the training needs of women working in moderate-sized enterprises owned by their husbands. Information collected from interviews with spouses of business owners in Belgium, Denmark, the Federal Republic of Germany, France, and Italy confirmed the original hypothesis that in the kind of enterprise studied it is the man who owns the…

  7. Sub-micron particle sampler apparatus

    DOEpatents

    Gay, Don D.; McMillan, William G.

    1987-01-01

    Apparatus and method steps for collecting sub-micron sized particles include a collection chamber and cryogenic cooling. The cooling is accomplished by coil tubing carrying nitrogen in liquid form, with the liquid nitrogen changing to the gas phase before exiting from the collection chamber in the tubing. Standard filters are used to filter out particles of diameter greater than or equal to 0.3 micron; however, the present invention is used to trap particles of less than 0.3 micron in diameter. A blower draws air to said collection chamber through a filter which filters particles with diameters greater than or equal to 0.3 micron. The air is then cryogenically cooled so that moisture and sub-micron sized particles in the air condense into ice on the coil. The coil is then heated so that the ice melts, and the liquid is then drawn off and passed through a Buchner funnel where the liquid is passed through a Nuclepore membrane. A vacuum draws the liquid through the Nuclepore membrane, with the Nuclepore membrane trapping sub-micron sized particles therein. The Nuclepore membrane is then covered on its top and bottom surfaces with sheets of Mylar® and the assembly is then crushed into a pellet. This effectively traps the sub-micron sized particles for later analysis.

  8. Urinary Incontinence of Women in a Nationwide Study in Sri Lanka: Prevalence and Risk Factors.

    PubMed

    Pathiraja, Ramya; Prathapan, Shamini; Goonawardena, Sampatha

    2017-05-23

    Urinary incontinence, be it stress incontinence, urge incontinence, or a mixed type, affects women of all ages. The aim of this study was to describe the prevalence and risk factors of urinary incontinence in Sri Lanka. A community-based cross-sectional study was performed in Sri Lanka. The women were categorized into three age groups: less than or equal to 35 years, 36 to 50 years, and more than or equal to 51 years. A sample of 675 women was obtained from each age category, giving a total sample of 2025 from Sri Lanka. An interviewer-administered questionnaire, covering sociodemographic factors, medical and obstetric history, and the King's Health Questionnaire (KHQ), was used for data collection. Stepwise logistic regression analysis was performed. The prevalence of stress incontinence alone was 10%, of urge incontinence 15.6%, and of combined stress and urge incontinence 29.9%. Stepwise logistic regression analysis showed that the age groups of 36 - 50 years (OR = 2.03; 95% CI = 1.56 - 2.63) and 51 years and above (OR = 2.61; 95% CI = 1.95 - 3.48), living in one of the districts in Sri Lanka (OR = 4.58; 95% CI = 3.35 - 6.27), having given birth to multiple children (OR = 1.1; 95% CI = 1.02 - 1.21), diabetes mellitus (OR = 1.97; 95% CI = 1.19 - 3.23), and respiratory diseases (OR = 2.17; 95% CI = 1.48 - 3.19) were significant risk factors. Most of these risk factors are modifiable, and addressing them early could help to reduce the symptoms of urinary incontinence.
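
    For readers unfamiliar with how such odds ratios and confidence intervals arise, the following sketch computes a Wald interval from a 2x2 table. The counts are invented for illustration and do not come from this study.

```python
import math

# Hypothetical 2x2 table (counts invented for illustration):
# exposure = a risk factor, outcome = urinary incontinence.
a, b = 40, 60    # exposed:   cases, non-cases
c, d = 200, 600  # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)                # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Wald SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))
```

Stepwise logistic regression produces adjusted rather than crude odds ratios, but each reported OR and CI has this same exponentiated-coefficient form.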

  9. Sensitivity of super-efficient data envelopment analysis results to individual decision-making units: an example of surgical workload by specialty.

    PubMed

    Dexter, Franklin; O'Neill, Liam; Xin, Lei; Ledolter, Johannes

    2008-12-01

    We use resampling of data to explore the basic statistical properties of super-efficient data envelopment analysis (DEA) when used as a benchmarking tool by the manager of a single decision-making unit. Our focus is the gaps in the outputs (i.e., slacks adjusted for upward bias), as they reveal which outputs can be increased. The numerical experiments show that the estimates of the gaps fail to exhibit asymptotic consistency, a property expected for standard statistical inference. Specifically, increased sample sizes were not always associated with more accurate forecasts of the output gaps. The baseline DEA's gaps equaled the mode of the jackknife and the mode of resampling with/without replacement from any subset of the population; usually, the baseline DEA's gaps also equaled the median. The quartile deviations of gaps were close to zero when few decision-making units were excluded from the sample and the study unit happened to have few other units contributing to its benchmark. The results for the quartile deviations can be explained in terms of the effective combinations of decision-making units that contribute to the DEA solution. The jackknife can provide all the combinations contributing to the quartile deviation and only needs to be performed for those units that are part of the benchmark set. These results show that there is a strong rationale for examining DEA results with a sensitivity analysis that excludes one benchmark hospital at a time. This analysis enhances the quality of decision support using DEA estimates for the potential of a decision-making unit to grow one or more of its outputs.
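
    The leave-one-unit-out sensitivity analysis recommended here is a jackknife. The generic sketch below (a placeholder mean statistic on toy numbers, not a DEA solver) shows how recomputing a statistic with each unit excluded in turn exposes the influential member of a benchmark set.

```python
import numpy as np

def jackknife(values, stat):
    """Recompute `stat` with each observation left out in turn."""
    values = np.asarray(values, dtype=float)
    return np.array([stat(np.delete(values, i)) for i in range(len(values))])

# Toy data standing in for one output across benchmark units
outputs = np.array([12.0, 15.0, 14.0, 30.0, 13.0])
loo_means = jackknife(outputs, np.mean)

# Excluding the outlying unit (30.0) shifts the statistic the most,
# flagging it as the influential member of the benchmark set.
influence = np.abs(loo_means - outputs.mean())
print(int(influence.argmax()))
```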

  10. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is Pearson's correlation coefficient for the sample, and that the monotony coefficient equals 1 for any monotone increasing sample and -1 for any monotone decreasing sample. The article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than Pearson's, Spearman's, and Kendall's correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful for finding pairs of dependent variables in a big dataset of many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
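
    The abstract does not define the monotony coefficient, so it cannot be reproduced here, but the contrast it draws can be illustrated with the rank-based Spearman coefficient: for a strictly increasing but nonlinear sample, Pearson's r falls below 1 while a rank-based measure attains it.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(0.0, 3.0, 50)
y = np.exp(x)   # strictly increasing, but not linear in x

r, _ = pearsonr(x, y)      # sensitive to linearity: r < 1 here
rho, _ = spearmanr(x, y)   # rank-based: exactly 1 for monotone data
print(round(r, 3), round(rho, 3))
```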

  11. Eye shape and the nocturnal bottleneck of mammals.

    PubMed

    Hall, Margaret I; Kamilar, Jason M; Kirk, E Christopher

    2012-12-22

    Most vertebrate groups exhibit eye shapes that vary predictably with activity pattern. Nocturnal vertebrates typically have large corneas relative to eye size as an adaptation for increased visual sensitivity. Conversely, diurnal vertebrates generally demonstrate smaller corneas relative to eye size as an adaptation for increased visual acuity. By contrast, several studies have concluded that many mammals exhibit typical nocturnal eye shapes, regardless of activity pattern. However, a recent study has argued that new statistical methods allow eye shape to accurately predict activity patterns of mammals, including cathemeral species (animals that are equally likely to be awake and active at any time of day or night). Here, we conduct a detailed analysis of eye shape and activity pattern in mammals, using a broad comparative sample of 266 species. We find that the eye shapes of cathemeral mammals completely overlap with nocturnal and diurnal species. Additionally, most diurnal and cathemeral mammals have eye shapes that are most similar to those of nocturnal birds and lizards. The only mammalian clade that diverges from this pattern is anthropoids, which have convergently evolved eye shapes similar to those of diurnal birds and lizards. Our results provide additional evidence for a nocturnal 'bottleneck' in the early evolution of crown mammals.

  12. Some problems associated with the analysis and interpretation of mixed carbonate and quartz beach sands, illustrated by examples from North-West Ireland

    NASA Astrophysics Data System (ADS)

    Carter, R. W. G.

    1982-09-01

    Mixes of carbonate and quartz sands, which are commonly encountered in Recent coastal sediments, require careful analysis if they are to be correctly interpreted. Grain-size data fall into multimodal or segmented zig-zag distributions which may require some kind of component separation if they are to be summarised by conventional statistics, and before they can be assigned to a particular hydrodynamic depositional process or environment. Unfortunately, separation techniques are only spasmodically applied, usually without due regard to the consequences. Such artificially filtered or truncated distributions are of little subsequent use. Using a range of samples from two beaches in NW Ireland, where carbonate proportions range from nearly zero to over sixty percent, the interrelationships of the two dominant components were examined. Where only a small carbonate proportion is incorporated into a large quartz one, predictable modifications of the size-curve are apparent. However, the components are more independent if mixtures are near equal. The occurrence of a number of distinctive combinations of simple straight lines and complex zig-zag curves probably reflects the relatively dynamic nature of the carbonate fraction.

  13. Eye shape and the nocturnal bottleneck of mammals

    PubMed Central

    Hall, Margaret I.; Kamilar, Jason M.; Kirk, E. Christopher

    2012-01-01

    Most vertebrate groups exhibit eye shapes that vary predictably with activity pattern. Nocturnal vertebrates typically have large corneas relative to eye size as an adaptation for increased visual sensitivity. Conversely, diurnal vertebrates generally demonstrate smaller corneas relative to eye size as an adaptation for increased visual acuity. By contrast, several studies have concluded that many mammals exhibit typical nocturnal eye shapes, regardless of activity pattern. However, a recent study has argued that new statistical methods allow eye shape to accurately predict activity patterns of mammals, including cathemeral species (animals that are equally likely to be awake and active at any time of day or night). Here, we conduct a detailed analysis of eye shape and activity pattern in mammals, using a broad comparative sample of 266 species. We find that the eye shapes of cathemeral mammals completely overlap with nocturnal and diurnal species. Additionally, most diurnal and cathemeral mammals have eye shapes that are most similar to those of nocturnal birds and lizards. The only mammalian clade that diverges from this pattern is anthropoids, which have convergently evolved eye shapes similar to those of diurnal birds and lizards. Our results provide additional evidence for a nocturnal ‘bottleneck’ in the early evolution of crown mammals. PMID:23097513

  14. Evaluation of the US Army Institute of Public Health Destination Monitoring Program, a food safety surveillance program.

    PubMed

    Rapp-Santos, Kamala; Havas, Karyn; Vest, Kelly

    2015-01-01

    The Destination Monitoring Program, operated by the US Army Public Health Command (APHC), is one component that supports the APHC Veterinary Service's mission to ensure safety and quality of food procured for the Department of Defense (DoD). This program relies on retail product testing to ensure compliance of production facilities and distributors that supply food to the DoD. The program was assessed for validity and timeliness by specifically evaluating whether the sample size of items collected was adequate, whether the food samples collected were representative of risk, and whether the program returns results in a timely manner. Data were collected from the US Army Veterinary Services Lotus Notes database, including all food samples collected and submitted from APHC Region-North for the purposes of destination monitoring from January 1, 2013 to December 31, 2013. For most food items, only one sample was submitted for testing. The ability to correctly identify a contaminated food lot may be limited by reliance on test results from only one sample, as the level of confidence in a negative test result is low. The food groups most frequently sampled by APHC correlated with the commodities implicated in foodborne illness in the United States. Food items to be submitted were equally distributed among districts and branches, but sections within large branches submitted relatively few food samples compared to sections within smaller branches and districts. Finally, laboratory results were not available for about half the food items prior to their respective expiration dates.
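
    The low confidence attached to a single negative sample follows from elementary binomial sampling: with an assumed lot contamination rate p, the chance that n random samples include at least one contaminated item is 1 - (1 - p)^n. The values below are invented for illustration.

```python
# Assuming a fraction p of units in a lot is contaminated and each
# tested sample detects contamination when present, the probability
# that n random samples catch the problem is 1 - (1 - p)**n.
def detection_probability(p, n):
    return 1.0 - (1.0 - p) ** n

p = 0.10  # illustrative assumption: 10% of units contaminated
for n in (1, 5, 10, 30):
    print(n, round(detection_probability(p, n), 3))
```

With a single sample the detection probability equals p itself, which is why one negative result gives so little assurance about the lot.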

  15. Equality Hypocrisy, Inconsistency, and Prejudice: The Unequal Application of the Universal Human Right to Equality.

    PubMed

    Abrams, Dominic; Houston, Diane M; Van de Vyver, Julie; Vasiljevic, Milica

    2015-02-01

    In Western culture, there appears to be widespread endorsement of Article 1 of the Universal Declaration of Human Rights (which stresses equality and freedom). But do people really apply their equality values equally, or are their principles and application systematically discrepant, resulting in equality hypocrisy? The present study, conducted with a representative national sample of adults in the United Kingdom (N = 2,895), provides the first societal test of whether people apply their value of "equality for all" similarly across multiple types of status minority (women, disabled people, people aged over 70, Blacks, Muslims, and gay people). Drawing on theories of intergroup relations and stereotyping, we examined, in relation to each of these groups, respondents' judgments of how important it is to satisfy their particular wishes, whether there should be greater or reduced equality of employment opportunities, and feelings of social distance. The data revealed a clear gap between general equality values and responses to these specific measures. Respondents prioritized equality more for "paternalized" groups (targets of benevolent prejudice: women, disabled, over 70) than others (Black people, Muslims, and homosexual people), demonstrating significant inconsistency. Respondents who valued equality more, or who expressed higher internal or external motivation to control prejudice, showed greater consistency in applying equality. However, even respondents who valued equality highly showed significant divergence in their responses to paternalized versus nonpaternalized groups, revealing a degree of hypocrisy. Implications for strategies to promote equality and challenge prejudice are discussed.

  16. Equality Hypocrisy, Inconsistency, and Prejudice: The Unequal Application of the Universal Human Right to Equality

    PubMed Central

    2015-01-01

    In Western culture, there appears to be widespread endorsement of Article 1 of the Universal Declaration of Human Rights (which stresses equality and freedom). But do people really apply their equality values equally, or are their principles and application systematically discrepant, resulting in equality hypocrisy? The present study, conducted with a representative national sample of adults in the United Kingdom (N = 2,895), provides the first societal test of whether people apply their value of “equality for all” similarly across multiple types of status minority (women, disabled people, people aged over 70, Blacks, Muslims, and gay people). Drawing on theories of intergroup relations and stereotyping, we examined, in relation to each of these groups, respondents’ judgments of how important it is to satisfy their particular wishes, whether there should be greater or reduced equality of employment opportunities, and feelings of social distance. The data revealed a clear gap between general equality values and responses to these specific measures. Respondents prioritized equality more for “paternalized” groups (targets of benevolent prejudice: women, disabled, over 70) than others (Black people, Muslims, and homosexual people), demonstrating significant inconsistency. Respondents who valued equality more, or who expressed higher internal or external motivation to control prejudice, showed greater consistency in applying equality. However, even respondents who valued equality highly showed significant divergence in their responses to paternalized versus nonpaternalized groups, revealing a degree of hypocrisy. Implications for strategies to promote equality and challenge prejudice are discussed. PMID:25914516

  17. The role of drop velocity in statistical spray description

    NASA Technical Reports Server (NTRS)

    Groeneweg, J. F.; El-Wakil, M. M.; Myers, P. S.; Uyehara, O. A.

    1978-01-01

    The justification for describing a spray by treating drop velocity as a random variable on an equal statistical basis with drop size was studied experimentally. A double exposure technique using fluorescent drop photography was used to make size and velocity measurements at selected locations in a steady ethanol spray formed by a swirl atomizer. The size velocity data were categorized to construct bivariate spray density functions to describe the spray immediately after formation and during downstream propagation. Bimodal density functions were formed by environmental interaction during downstream propagation. Large differences were also found between spatial mass density and mass flux size distribution at the same location.

  18. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
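
    The minimax design idea can be sketched as a linear program: introduce a bound t on the absolute waveform error at every sample and minimize t. The channel below and the delayed-impulse target are invented for brevity (the paper targets a raised-cosine waveform); scipy's general-purpose LP solver stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of minimax (Chebyshev) equalizer design as a linear program.
h = np.array([0.1, 1.0, 0.4, -0.2])   # assumed channel impulse response
N = 9                                  # equalizer taps
M = len(h) + N - 1                     # length of h * w

# Convolution matrix A so that (h * w) = A @ w
A = np.zeros((M, N))
for n in range(N):
    A[n:n + len(h), n] = h

d = np.zeros(M)
d[M // 2] = 1.0                        # desired: delayed unit impulse

# Variables [w, t]; minimize t subject to |A w - d| <= t elementwise
c = np.r_[np.zeros(N), 1.0]
A_ub = np.block([[A, -np.ones((M, 1))], [-A, -np.ones((M, 1))]])
b_ub = np.r_[d, -d]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * N + [(0, None)])
w, t = res.x[:N], res.x[-1]
print(res.success, round(t, 4))        # t = worst-case waveform error
```

The optimal t is the minimized maximum absolute error between the equalized and desired waveforms, the quantity the paper's designs trade against multiplier count and transmitted energy.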

  19. Unsupported thin film beam splitter

    NASA Technical Reports Server (NTRS)

    Bastien, R. C.; Scheuerman, R. J.

    1972-01-01

    Multilayer beam splitter system yielding nearly equal broadband infrared reflectance and transmittance in the 5 to 50 micron spectral region has been developed which will significantly reduce size and cost of light path compensating devices in infrared spectral instruments.

  20. Nuclear energy release from fragmentation

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Souza, S. R.; Tsang, M. B.; Zhang, Feng-Shou

    2016-08-01

    It is well known that binary fission occurs with positive energy gain. In this article we examine the energetics of splitting uranium and thorium isotopes into various numbers of fragments (from two to eight) of nearly equal size. We find that the energy released by splitting ²³⁰,²³²Th and ²³⁵,²³⁸U into three equal-size fragments is largest. The statistical multifragmentation model (SMM) is applied to calculate the probability of different breakup channels for excited nuclei. By weighting the probability distributions of fragment multiplicity at different excitation energies, we find the peaks of energy release for ²³⁰,²³²Th and ²³⁵,²³⁸U are around 0.7-0.75 MeV/u at excitation energies between 1.2 and 2 MeV/u in the primary breakup process. Taking into account the secondary de-excitation of primary fragments with the GEMINI code, these energy peaks fall to about 0.45 MeV/u.
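
    The three-fragment peak can be rationalized with a rough liquid-drop estimate (not the SMM calculation): in the semi-empirical mass formula, volume and asymmetry terms cancel for equal splits, so the released energy is a competition between Coulomb gain and surface cost. The sketch below uses standard textbook coefficients, neglects pairing, and treats fragments as identical even when A/k is non-integer; it reproduces the k = 3 maximum, though its MeV/u value overshoots the SMM numbers quoted above.

```python
def semf_binding(A, Z):
    """Semi-empirical mass formula binding energy in MeV (pairing omitted)."""
    aV, aS, aC, aA = 15.75, 17.8, 0.711, 23.7  # textbook coefficients
    return (aV * A - aS * A ** (2.0 / 3.0)
            - aC * Z ** 2 / A ** (1.0 / 3.0)
            - aA * (A - 2 * Z) ** 2 / A)

def q_split(A, Z, k):
    """Energy released when (A, Z) splits into k equal fragments."""
    return k * semf_binding(A / k, Z / k) - semf_binding(A, Z)

A, Z = 238, 92  # 238U
q = {k: q_split(A, Z, k) for k in range(2, 9)}
best = max(q, key=q.get)
print(best, round(q[best] / A, 2))  # peak multiplicity and MeV per nucleon
```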

  1. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584

  2. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    PubMed

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
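
    The traditional residual-resampling approach described here can be sketched as a permutation test. This minimal version uses a Frobenius-norm statistic on group-centered residuals rather than the Bartlett or eigen-decomposition statistics the paper studies, and synthetic data under which the null hypothesis holds.

```python
import numpy as np

rng = np.random.default_rng(0)

def cov_diff_stat(x1, x2):
    """Frobenius norm of the difference of sample covariance matrices."""
    return np.linalg.norm(np.cov(x1, rowvar=False) - np.cov(x2, rowvar=False))

# Synthetic two-group data with equal covariance (null is true here)
n1, n2, p = 60, 60, 3
x1 = rng.normal(size=(n1, p))
x2 = rng.normal(size=(n2, p)) + 1.0   # mean shift only

# Center by group means, pool the residuals, permute group labels
resid = np.vstack([x1 - x1.mean(0), x2 - x2.mean(0)])
observed = cov_diff_stat(x1, x2)
perm_stats = []
for _ in range(500):
    perm = rng.permutation(resid)
    perm_stats.append(cov_diff_stat(perm[:n1], perm[n1:]))
p_value = np.mean(np.array(perm_stats) >= observed)
print(round(p_value, 3))
```

The paper's point is that these pooled residuals share second moments only under the null; its robust standardized-residual variant additionally whitens each group by a robust covariance estimate before pooling.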

  3. Aerosol residence times and changes in radioiodine ¹³¹I and radiocaesium ¹³⁷Cs activity over Central Poland after the Fukushima-Daiichi nuclear reactor accident.

    PubMed

    Długosz-Lisiecka, Magdalena; Bem, Henryk

    2012-05-01

    The first detectable activities of radioiodine ¹³¹I and radiocaesium ¹³⁴Cs and ¹³⁷Cs in the air over Central Poland were measured in dust samples collected by the ASS-500 station in the period of 21st to 24th March 2011. However, the highest activities of both fission products, ¹³¹I and ¹³⁷Cs (8.3 mBq m⁻³ and 0.75 mBq m⁻³, respectively), were obtained in the samples collected on 30th March, i.e., ∼18 days after the beginning of the fission products' discharge from the damaged units of the Fukushima Daiichi Nuclear Power Plant. The simultaneously determined corrected aerosol residence time for the same samples by the ²¹⁰Pb/²¹⁰Bi and ²¹⁰Pb/²¹⁰Po methods was equal to 10 days. Additionally, on the basis of the activity ratio of two other natural cosmogenic radionuclides, ⁷Be and ²²Na, in these aerosol samples, it was possible to estimate the aerosol residence time at ∼150 days for the solid particles coming from the stratospheric fallout. These data, as well as the differences in the activity size distribution of ⁷Be and ¹³¹I in the air particulate matter, show, in contrast to the Chernobyl discharge, a negligible input of stratospheric transport of Fukushima-released fission products.

  4. Critical conditions for particle motion in coarse bed materials of nonuniform size distribution

    NASA Astrophysics Data System (ADS)

    Bathurst, James C.

    2013-09-01

    Initiation of particle motion in a bed material of nonuniform size distribution may be quantified by (qci/qcr) = (Di/Dr)b, where qci is the critical unit discharge at which particle size Di enters motion, qcr is the critical condition for a reference size Dr unaffected by the hiding/exposure effects associated with nonuniform size distributions, i and r refer to percentiles of the distribution and b varies from 0 (equal mobility in entrainment of all particle sizes) to 1.5-2.5 (full size selective transport). Currently there is no generally accepted method for predicting the value of b. Flume and field data are therefore combined to investigate the above relationship. Thirty-seven sets of flume data quantify the relationship between critical unit discharge and particle size for bed materials with uniform size distributions (used here to approximate full size selective transport). Field data quantify the relationship for bed materials of nonuniform size distribution at 24 sites, with b ranging from 0.15 to 1.3. Intersection of the two relationships clearly demonstrates the hiding/exposure effect; in some but not all cases, Dr is close to the median size D50. The exponent has two clusters of values: b > 1 for sites subject to episodic rain-fed floods and data collected by bedload pit trap and tracers; and b < 0.7 for sites with seasonal snowmelt/glacial melt flow regimes and data collected by bedload sampler and large aperture trap. Field technique appears unlikely to cause variations in b of more than about 0.25. However, the clustering is consistent with possible variations in bed structure distinguishing: for b > 1, sites with relatively infrequent bedload transport where particle embedding and consolidation could reduce the mobility of coarser particles; and, for b < 0.7, a looser bed structure with frequent transport events allowing hiding/exposure and size selection effects to achieve their balance. 
    As yet there is no firm evidence for such a dependency on bed structure, but variations in b could potentially be caused by factors outside those determining equal mobility or size selection but appearing to affect b in the same way.
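
    Plugging invented numbers into the entrainment relation makes the role of b concrete: b = 0 means every grain size enters motion at the same discharge, while larger b makes coarse grains progressively harder to move.

```python
# Entrainment relation (q_ci / q_cr) = (D_i / D_r)**b with invented
# example values (not measurements from the study sites).
def critical_unit_discharge(q_cr, d_i, d_r, b):
    return q_cr * (d_i / d_r) ** b

q_cr, d_r = 0.5, 0.05          # reference discharge (m^2/s) and size (m)
for b in (0.0, 0.7, 2.0):      # equal mobility ... size-selective
    q_coarse = critical_unit_discharge(q_cr, 0.10, d_r, b)
    print(b, round(q_coarse, 3))
```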

  5. Fermentation pH Modulates the Size Distributions and Functional Properties of Gluconobacter albidus TMW 2.1191 Levan

    PubMed Central

    Ua-Arak, Tharalinee; Jakob, Frank; Vogel, Rudi F.

    2017-01-01

    Bacterial levan has gained increasing interest over the last decades due to its unique characteristics and multiple possible applications. Production of levan and other exopolysaccharides (EPSs) is usually optimized to obtain the highest concentration or yield, while possible changes in molecular size and mass during the production process are mostly neglected. In this study, the molar mass and radius of levan samples were monitored during fermentations with the food-grade, levan-producing acetic acid bacterium Gluconobacter (G.) albidus TMW 2.1191 in shake flasks (without pH control) and bioreactors (with pH control at 4.5, 5.5 and 6.5, respectively). In uncontrolled fermentations, the levan size/molar mass decreased continuously, concomitantly with the continuous acidification of the nutrient medium. By contrast, the amount, molar mass and size of levan could be directly influenced by controlling the pH during fermentation. Using equal initial substrate amounts, the largest weight-average molar mass and geometric radius of levan were observed at constant pH 6.5, while the highest levan concentration was obtained at constant pH 4.5. Since there is a special demand for suitable hydrocolloids from food-grade bacteria to develop novel gluten-free (GF) products, these differently produced levans were used for baking GF breads, and the best quality improvement was obtained by addition of the levan with the highest mass and radius. This work therefore demonstrates for the first time that one bacterial strain can produce specific high-molecular-weight fractions of one EPS type, which differ in properties and sizes among each other depending on the controllable production conditions. PMID:28522999

  6. Mathematical Modeling on the Growth and Removal of Non-metallic Inclusions in the Molten Steel in a Two-Strand Continuous Casting Tundish

    NASA Astrophysics Data System (ADS)

    Ling, Haitao; Zhang, Lifeng; Li, Hong

    2016-10-01

    In the current study, mathematical models were developed to predict the transient concentration and size distribution of inclusions in a two-strand continuous casting tundish. The collision and growth of inclusions were considered, and the contributions of turbulent collision and Stokes collision were evaluated. The removal of inclusions from the top surface was modeled by considering the properties of the inclusions and the molten steel, such as wettability, density, size, and interfacial tension. The effect of inclusion composition on the collision of inclusions was included through the Hamaker constant. Meanwhile, the effect of the turbulent fluctuation velocity on the removal of inclusions at the top surface was also studied. Inclusions in steel samples were detected using automated SEM scanning so that the amount, morphology, size, and composition of inclusions were obtained. In the simulation, the size distribution of inclusions at the end of steel refining was used as the initial size distribution at the tundish inlet. The equilibrium time at which the collision and coalescence of inclusions reached steady state was equal to 3.9 times the mean residence time. When Stokes collision, turbulent collision, and removal by floating were included, the removal fraction of inclusions was 16.4 pct. Finally, the removal of solid and liquid inclusions, such as Al₂O₃, SiO₂, and 12CaO·7Al₂O₃, at the interface between the molten steel and slag was studied. Compared with 12CaO·7Al₂O₃ inclusions, the silica and alumina inclusions were removed from the molten steel much more readily, with removal fractions of 36.5 and 39.2 pct, respectively.
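
    The Stokes flotation term rests on the terminal rising velocity of a small buoyant sphere, v = g(ρ_steel − ρ_incl)d²/(18μ). The property values below are typical textbook figures for alumina inclusions in liquid steel, not values taken from this article.

```python
# Stokes rising velocity of a small spherical inclusion in liquid steel.
def stokes_rising_velocity(d, rho_steel, rho_incl, mu, g=9.81):
    """Terminal rising velocity (m/s); d in m, densities in kg/m^3, mu in Pa s."""
    return g * (rho_steel - rho_incl) * d ** 2 / (18.0 * mu)

# Illustrative property values (assumed, not from the article)
rho_steel, rho_alumina, mu = 7000.0, 3950.0, 0.006
for d_um in (2, 10, 50):
    v = stokes_rising_velocity(d_um * 1e-6, rho_steel, rho_alumina, mu)
    print(d_um, f"{v:.2e}")
```

The quadratic dependence on diameter is why collision-driven growth matters: agglomerated inclusions float out far faster than the fine ones they formed from.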

  7. Gender differences in public and private drinking contexts: a multi-level GENACIS analysis.

    PubMed

    Bond, Jason C; Roberts, Sarah C M; Greenfield, Thomas K; Korcha, Rachael; Ye, Yu; Nayak, Madhabika B

    2010-05-01

    This multi-national study hypothesized that higher country-level gender equality would predict smaller gender differences in the frequency of drinking in public settings (such as bars and restaurants) and possibly in private settings (at home or at parties). GENACIS project survey data with drinking contexts covered 22 countries: Europe (8), the Americas (7), Asia (3), Australasia (2), and Africa (2), analyzed using hierarchical linear models (individuals nested within countries). Age, gender, and marital status were individual-level predictors; overall country-level gender equality, equality in economic participation, education, and political participation, and measures of reproductive autonomy and violence against women were country-level variables. In separate models, greater reproductive autonomy, economic participation, and educational attainment, and less violence against women, predicted smaller gender differences in drinking in public settings. After controlling for country-level economic status, only equality in economic participation predicted the size of the gender difference. Most country-level variables did not explain the gender difference in the frequency of drinking in private settings. Where gender equality did predict this difference, the direction was opposite to that in public settings, with more equality predicting a larger gender difference, although this relationship was no longer significant after controlling for country-level economic status. Findings suggest that country-level gender equality may influence gender differences in drinking. However, the effects of gender equality on drinking may depend on the specific alcohol measure, in this case drinking context, as well as on the aspect of gender equality considered. Similar studies that use only global measures of gender equality may miss key relationships. We consider potential implications for alcohol-related consequences, policy, and public health.

  8. The across frequency independence of equalization of interaural time delay in the equalization-cancellation model of binaural unmasking.

    PubMed

    Akeroyd, Michael A

    2004-08-01

    The equalization stage in the equalization-cancellation model of binaural unmasking compensates for the interaural time delay (ITD) of a masking noise by introducing an opposite, internal delay [N. I. Durlach, in Foundations of Modern Auditory Theory, Vol. II, edited by J. V. Tobias (Academic, New York, 1972)]. Culling and Summerfield [J. Acoust. Soc. Am. 98, 785-797 (1995)] developed a multi-channel version of this model in which equalization was "free" to use the optimal delay in each channel. Two experiments were conducted to test whether equalization was indeed free or whether it was "restricted" to the same delay in all channels. One experiment measured binaural detection thresholds, using an adaptive procedure, for 1-, 5-, or 17-component tones against a broadband masking noise, in three binaural configurations (N0S180, N180S0, and N90S270). The thresholds for the 1-component stimuli were used to normalize the levels of the 5- and 17-component stimuli so that they were equally detectable. If equalization were restricted, then, for the 5- and 17-component stimuli, the N90S270 and N180S0 configurations would yield greater thresholds than the N0S180 configuration. No such difference was found. A subsequent experiment measured binaural detection thresholds, via psychometric functions, for a 2-component complex tone in the same three binaural configurations. Again, no differential effect of configuration was observed. An analytic model of the detection of a complex tone showed that the results were more consistent with free equalization than with restricted equalization, although the size of the differences was found to depend on the shape of the psychometric function for detection.
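The core E-C mechanism described above can be illustrated with a toy signal sketch. This is a hedged illustration of the general idea, not the paper's implementation; the sample rate and ITD values are arbitrary assumptions.

```python
import numpy as np

# Hedged sketch of the equalization-cancellation idea: the model internally
# delays one ear's signal to equalize the masker's interaural time delay
# (ITD), then subtracts the two ears, cancelling the masker. Parameter
# values are illustrative, not from the paper.

fs = 16000          # sample rate in Hz (assumed)
itd = 20            # masker ITD in samples (assumed)

rng = np.random.default_rng(1)
noise = rng.standard_normal(fs + itd)
left = noise[itd:]   # masker arrives earlier at the left ear
right = noise[:fs]   # the same masker, delayed by `itd` samples

# Equalization: re-align the left ear by the masker's ITD.
# Cancellation: subtract. The interaurally delayed masker vanishes exactly,
# which would expose any target tone carrying a different ITD.
residual = left[:-itd] - right[itd:]
print(float(np.max(np.abs(residual))))  # → 0.0
```

The "free vs. restricted" question tested in the paper is whether each frequency channel may choose its own equalizing delay (here, its own `itd`) or whether one delay is imposed across all channels.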

  9. Non-uniform sampling: post-Fourier era of NMR data collection and processing.

    PubMed

    Kazimierczuk, Krzysztof; Orekhov, Vladislav

    2015-11-01

    The invention of multidimensional techniques in the 1970s revolutionized NMR, making it a general tool for the structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimal. To circumvent this problem, numerous sparse-sampling techniques have been developed over the last three decades, including two traditionally distinct approaches: radial sampling and non-uniform sampling. This mini-review discusses sparse signal sampling and reconstruction techniques from the point of view of the underdetermined linear-algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. The additional assumptions introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as the so-called point spread function), are shown to be the main differences between the various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
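The point spread function mentioned above is simply the Fourier transform of the 0/1 sampling mask, so the artefacts introduced by a given sparse schedule can be inspected directly. A minimal sketch, assuming a random on-grid schedule (the grid size and sampling density are arbitrary choices for illustration):

```python
import numpy as np

# Hedged sketch: the point spread function (PSF) of a sampling schedule is
# the Fourier transform of its 0/1 mask. A full grid yields a single delta;
# sparse (non-uniform) sampling adds artefact side lobes around the peak.

rng = np.random.default_rng(0)
n = 128
mask = np.zeros(n)
mask[rng.choice(n, size=32, replace=False)] = 1.0  # keep 25% of the points

psf = np.abs(np.fft.fft(mask))
psf /= psf.max()  # normalize the main peak to 1

# Reference: the PSF of the full, equally spaced grid is a single spike.
full_psf = np.abs(np.fft.fft(np.ones(n)))
print(np.count_nonzero(full_psf > 1e-6))  # → 1
print(np.count_nonzero(psf > 0.01) > 1)   # sparse mask: side lobes appear
```

Reconstruction methods differ chiefly in the prior used to suppress these side lobes, which is the underdetermined-problem framing the review adopts.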

  10. Laboratory simulations of planetary surfaces: Understanding regolith physical properties from remote photopolarimetric observations

    NASA Astrophysics Data System (ADS)

    Nelson, Robert M.; Boryta, Mark D.; Hapke, Bruce W.; Manatt, Kenneth S.; Shkuratov, Yuriy; Psarev, V.; Vandervoort, Kurt; Kroner, Desire; Nebedum, Adaze; Vides, Christina L.; Quiñones, John

    2018-03-01

    We present reflectance and polarization phase curve measurements of highly reflective planetary regolith analogues having physical characteristics expected on atmosphereless solar system bodies (ASSBs) such as eucritic asteroids or icy satellites. We used a goniometric photopolarimeter (GPP) of novel design to study thirteen well-sorted particle size fractions of aluminum oxide (Al2O3). The sample suite included particle sizes larger than, approximately equal to, and smaller than the wavelength of the incident monochromatic radiation (λ = 635 nm). The observed phase angle, α, spanned 0.056° < α < 15°. These Al2O3 particulate samples have very high normal reflectance (> ∼95%); the incident radiation has a very high probability of being multiply scattered before being backscattered toward the incident direction or ultimately absorbed. The five smallest particle size fractions exhibited extremely high void space (> ∼95%). The reflectance phase curves for all particle size fractions show a pronounced non-linear reflectance increase with decreasing phase angle at α ≲ 3°. Our earlier studies suggest that the cause of this non-linear reflectance increase is constructive interference of counter-propagating waves in the medium by coherent backscattering (CB), a photonic analog of Anderson localization of electrons in solid-state media. The polarization phase curves for particle size fractions with size parameter (particle radius/wavelength) r/λ < ∼1 show that the linear polarization rapidly decreases as α increases from 0°, reaching a minimum near α ≈ 2°. Longward of ∼2°, the negative polarization decreases as phase angle increases, becoming positive between 12° and at least 15° (probably ∼20°), depending on particle size. For size parameters r/λ > ∼1 we detect no polarization. This polarization behavior is distinct from that observed in low-albedo solar system objects such as the Moon and asteroids, and from that of absorbing materials in the laboratory. 
We suggest this behavior arises because backscattered photons have a high probability of having interacted with two or more particles, thus giving rise to the CB process. These results may explain the unusual negative polarization behavior near small phase angles reported for several decades on highly reflective ASSBs such as the asteroids 44 Nysa and 64 Angelina and the Galilean satellites Io, Europa, and Ganymede. Our results suggest these ASSB regoliths scatter electromagnetic radiation as if they were extremely fine grained, with void space > ∼95% and grain sizes of order ≲ λ. This has consequences for efforts to deploy landers on highly reflective ASSBs such as Europa. These results are also germane to the field of terrestrial geo-engineering, particularly to suggestions that Earth's radiation balance could be modified by injecting Al2O3 particulates into the stratosphere, thereby offsetting the effect of anthropogenic greenhouse gas emissions. The GPP used in this study was modified from our previous design so that the sample is illuminated with light that is alternately polarized perpendicular and parallel to the scattering plane, with no analyzers before the detector. This optical arrangement, following the Helmholtz Reciprocity Principle (HRP), produces a physically identical result to traditional laboratory reflectance polarization measurements, in which the incident light is unpolarized and analyzers are placed before the detector; the results are identical for samples measured by both methods. We believe that ours is the first experimental demonstration of the HRP for polarized light, first proposed by Helmholtz in 1856.

  11. Exact Distributions of Intraclass Correlation and Cronbach's Alpha with Gaussian Data and General Covariance

    ERIC Educational Resources Information Center

    Kistner, Emily O.; Muller, Keith E.

    2004-01-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact…
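The statistic whose exact distribution the paper studies is straightforward to compute from data. An illustrative sketch (not the paper's code) of the standard sample estimator, alpha = k/(k-1) · (1 − Σ item variances / variance of the total score); the example data are invented:

```python
import numpy as np

# Hedged sketch: the usual sample estimator of Cronbach's alpha for a
# k-item test. Under compound symmetry (equal variances, equal
# correlations) this is the quantity whose exact distribution is known.

def cronbach_alpha(scores):
    """scores: (n_subjects, k_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Items that are shifted copies of each other (equal variances, perfect
# correlation) give alpha = 1.
x = np.arange(10.0)
perfect = np.column_stack([x, x + 1, x + 2])
print(round(cronbach_alpha(perfect), 6))  # → 1.0
```

Note that alpha reaches 1 here because the items have equal variances as well as perfect correlation, which is exactly the compound-symmetric case for which the exact distribution was previously known.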

  12. A new complete sample of submillijansky radio sources: An optical and near-infrared study

    NASA Technical Reports Server (NTRS)

    Masci, F.; Condon, J.; Barlow, T.; Lonsdale, C.; Xu, C.; Shupe, D.; Pevunova, O.; Fang, F.; Cutri, R.

    2001-01-01

    The Very Large Array has been used in C configuration to map an area of ≃0.3 deg² at 1.4 GHz, with 5σ sensitivities of 0.305, 0.325, 0.380, and 0.450 mJy beam⁻¹ over four equal subareas.

  13. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  14. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  15. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  16. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  17. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  18. 40 CFR 85.2224 - Exhaust analysis system-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are solely in effect. The following exceptions apply: In a state where the Administrator has approved... earlier model year vehicles or engines; in a state where the Administrator has approved a SIP revision... dual sample probe must provide equal flow in each leg. The equal flow criterion is considered to be met...

  19. Does Equal Socioeconomic Status in Black and White Men Mean Equal Risk of Mortality?

    ERIC Educational Resources Information Center

    Keil, Julian E.; And Others

    1992-01-01

    In a random sample of 1,088 men in the Charleston (South Carolina) Heart Study, differences in all-cause or coronary disease mortality rates were not significant for African-American and white males when socioeconomic status was controlled. Socioeconomic status appears to be an important predictor of mortality. (SLD)

  20. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated as a way to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we propose a new niching GA that attempts to form niches each consisting of an equal number of individuals. The proposed GA can also be applied to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improves the performance of GAs.
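For context, the classic niching mechanism this line of work builds on is fitness sharing, where each individual's raw fitness is divided by a "niche count" so that crowded niches are penalized. A hedged sketch of that baseline idea (the paper's specific niche-size equalization mechanism is not reproduced here; `sigma_share` and the distance metric are illustrative choices):

```python
# Hedged sketch of classic fitness sharing, the baseline family of niching
# mechanisms; this is NOT the paper's equalization mechanism. The sharing
# radius `sigma_share` and the distance function are illustrative.

def shared_fitness(population, fitness, distance, sigma_share=1.0):
    """Divide each raw fitness by its niche count (sum of a triangular
    sharing kernel over the population), penalizing crowded niches."""
    shared = []
    for i, xi in enumerate(population):
        niche_count = sum(
            max(0.0, 1.0 - distance(xi, xj) / sigma_share)
            for xj in population
        )
        shared.append(fitness[i] / niche_count)
    return shared

# Two individuals on the same peak split its fitness between them; a lone
# individual on a distant peak keeps its full value.
pop = [0.0, 0.0, 5.0]
fit = [1.0, 1.0, 1.0]
print(shared_fitness(pop, fit, lambda a, b: abs(a - b)))  # → [0.5, 0.5, 1.0]
```

Because the mechanism only needs a distance function, the same idea extends to combinatorial spaces such as JSP schedules once a suitable metric is defined, which is the extension the abstract describes.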
