Sample records for sample size selection

  1. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
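
The selection step described in (b) — using a random number generator to pick one 100 gram portion — can be sketched in a few lines; the portion labels and count here are illustrative assumptions, not from the regulation:

```python
import random

# Hypothetical: the subsample has been divided into ten 100 gram portions,
# labeled 1 through 10, per the procedures referenced in the snippet.
portions = list(range(1, 11))

rng = random.Random(761355)      # seeded only so the example is reproducible
selected = rng.choice(portions)  # this portion goes on to the leachate-simulation procedure
```

In practice a published random number table serves the same purpose as the software generator; either satisfies the "random number generator or random number table" wording.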

  2. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  3. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  4. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  5. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  6. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
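
The two-stage procedure the abstract describes — a group Lasso fit to get initial estimates, then an adaptive group Lasso with weights inversely proportional to the initial group norms — can be sketched with a plain proximal-gradient solver. Everything below (data, penalty level, group structure) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

def group_lasso(X, y, groups, lam, weights=None, n_iter=500):
    """Proximal gradient for (1/2n)||y - Xb||^2 + lam * sum_g w_g ||b_g||_2."""
    n, p = X.shape
    if weights is None:
        weights = np.ones(len(groups))
    t = n / np.linalg.norm(X, 2) ** 2          # step size = 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - t * (X.T @ (X @ b - y) / n)    # gradient step on the squared loss
        for g, idx in enumerate(groups):
            nrm = np.linalg.norm(z[idx])       # block soft-thresholding (the prox)
            shrink = max(0.0, 1.0 - t * lam * weights[g] / nrm) if nrm > 0 else 0.0
            b[idx] = shrink * z[idx]
    return b

rng = np.random.default_rng(0)
groups = [list(range(3 * g, 3 * g + 3)) for g in range(5)]  # 5 groups of 3 coefficients
X = rng.standard_normal((200, 15))
beta = np.zeros(15); beta[0:3] = 1.0; beta[3:6] = -0.8      # only groups 0 and 1 nonzero
y = X @ beta + 0.1 * rng.standard_normal(200)

b0 = group_lasso(X, y, groups, lam=0.1)                     # stage 1: plain group Lasso
w = 1.0 / np.maximum([np.linalg.norm(b0[idx]) for idx in groups], 1e-8)
b1 = group_lasso(X, y, groups, lam=0.1, weights=w)          # stage 2: adaptive weights
norms = [float(np.linalg.norm(b1[idx])) for idx in groups]  # nonzero only for true groups
```

The adaptive weights make the penalty on groups that looked null in stage 1 effectively infinite, which is what drives the correct-selection result the abstract states.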

  7. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
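
The conservative-versus-optimistic planning tension can be made concrete with the standard normal-approximation formula for a two-sample comparison, n per group = 2((z_{1-α/2} + z_{power})·σ/δ)². The effect sizes below are hypothetical, and this is only the fixed-sample calculation, not the group sequential or promising-zone machinery:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd=1.0, alpha=0.05, power=0.80):
    """Two-sample, two-sided normal-approximation sample size per group."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

n_optimistic = n_per_group(0.5)    # plan on a larger (optimistic) effect size
n_reestimated = n_per_group(0.35)  # interim data suggest a smaller effect -> larger n
```

A re-estimation design would start near `n_optimistic` and grow toward `n_reestimated` if the interim estimate comes in low; the paper's contribution is a criterion for choosing among such designs.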

  8. 40 CFR 94.505 - Sample selection for testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engine family. The required sample size is zero if a manufacturer's projected annual production for all Category 1 engine families is less than 100. (ii) The required sample size for a Category 2 engine family... manufacturer will begin to select engines from each Category 1 and Category 2 engine family for production line...

  9. Selective counting and sizing of single virus particles using fluorescent aptamer-based nanoparticle tracking analysis.

    PubMed

    Szakács, Zoltán; Mészáros, Tamás; de Jonge, Marien I; Gyurcsányi, Róbert E

    2018-05-30

    Detection and counting of single virus particles in liquid samples are largely limited to viruses with narrow size distributions and to purified formulations. To address these limitations, here we propose a calibration-free method that concurrently enables the selective recognition, counting and sizing of virus particles, as demonstrated through the detection of human respiratory syncytial virus (RSV), an enveloped virus with a broad size distribution, in throat swab samples. RSV particles were selectively labeled through their attachment glycoproteins (G) with fluorescent aptamers, which further enabled their identification, sizing and counting at the single particle level by fluorescent nanoparticle tracking analysis. The proposed approach seems to be generally applicable to virus detection and quantification. Moreover, it could be successfully applied to detect single RSV particles in swab samples of diagnostic relevance. Since the selective recognition is coupled with the sizing of each detected particle, the method makes it possible to discriminate viral elements linked to the virus as well as various virus forms and associations.

  10. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for others. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method, while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
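
The resubstitution-versus-hold-out optimism that motivates this study can be reproduced in miniature with a numpy-only Fisher discriminant on simulated Gaussian data. The dimensions, effect sizes, and regularization below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def lda_scores(Xtr, ytr, X):
    """Fisher LDA scores: w = S^-1 (mu1 - mu0), with light ridge regularization."""
    mu0, mu1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    S = np.cov(Xtr.T) + 1e-6 * np.eye(Xtr.shape[1])
    w = np.linalg.solve(S, mu1 - mu0)
    return X @ w

def auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(y)); ranks[order] = np.arange(1, len(y) + 1)
    n1 = int(y.sum()); n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def draw(n, mu):
    y = np.repeat([0, 1], n // 2)
    X = rng.standard_normal((n, len(mu))) + np.outer(y, mu)
    return X, y

d = 50
mu = np.zeros(d); mu[:5] = 0.4       # only 5 of 50 features are informative
Xtr, ytr = draw(30, mu)              # 15 training samples per class (small sample)
Xte, yte = draw(2000, mu)            # large hold-out set

az_resub = auc(lda_scores(Xtr, ytr, Xtr), ytr)    # optimistically biased estimate
az_holdout = auc(lda_scores(Xtr, ytr, Xte), yte)  # closer to the true performance
```

With 30 samples in 50 dimensions, the resubstitution Az is near-perfect while the hold-out Az falls well short of it — the finite-sample bias the simulation study quantifies across classifiers and feature selectors.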

  11. 40 CFR 761.353 - Second level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Second level of sample selection. 761...-Site Disposal, in Accordance With § 761.61 § 761.353 Second level of sample selection. The second level of sample selection reduces the size of the 19-liter subsample that was collected according to...

  12. 76 FR 56141 - Notice of Intent To Request New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...

  13. Effect of field view size and lighting on unique-hue selection using Natural Color System object colors.

    PubMed

    Shamey, Renzo; Zubair, Muhammad; Cheema, Hammad

    2015-08-01

    The aim of this study was twofold, first to determine the effect of field view size and second of illumination conditions on the selection of unique hue samples (UHs: R, Y, G and B) from two rotatable trays, each containing forty highly chromatic Natural Color System (NCS) samples, on one tray corresponding to 1.4° and on the other to 5.7° field of view size. UH selections were made by 25 color-normal observers who repeated assessments three times with a gap of at least 24h between trials. Observers separately assessed UHs under four illumination conditions simulating illuminants D65, A, F2 and F11. An apparent hue shift (statistically significant for UR) was noted for UH selections at 5.7° field of view compared to those at 1.4°. Observers' overall variability was found to be higher for UH stimuli selections at the larger field of view. Intra-observer variability was found to be approximately 18.7% of inter-observer variability in selection of samples for both sample sizes. The highest intra-observer variability was under simulated illuminant D65, followed by A, F11, and F2. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Uniform deposition of size-selected clusters using Lissajous scanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.
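
The scanning scheme — two triangular waves at an irrational frequency ratio — can be sketched numerically to check that the resulting trajectory covers the sample area evenly. The amplitudes, duration, and grid below are illustrative, not the experimental values:

```python
import numpy as np

def triangle(t, freq):
    """Unit-amplitude triangular wave of the given frequency."""
    phase = (t * freq) % 1.0
    return 2.0 * np.abs(2.0 * phase - 1.0) - 1.0

t = np.linspace(0.0, 200.0, 200_001)
x = triangle(t, 1.0)              # deflection along one electrode pair
y = triangle(t, np.sqrt(2.0))     # irrational ratio -> non-repeating Lissajous path

# Coverage check: bin the trajectory on a 20x20 grid over the sample area.
H, _, _ = np.histogram2d(x, y, bins=20, range=[[-1, 1], [-1, 1]])
coverage_cv = H.std() / H.mean()  # small value = nearly uniform dwell time
```

Because a triangular wave sweeps at constant speed, each coordinate spends equal time everywhere in its range, and the irrational frequency ratio keeps the two sweeps from locking into a repeating figure — together giving the uniform dose the abstract describes.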

  15. Bony pelvic canal size and shape in relation to body proportionality in humans.

    PubMed

    Kurki, Helen K

    2013-05-01

    Obstetric selection acts on the female pelvic canal to accommodate the human neonate and contributes to pelvic sexual dimorphism. There is a complex relationship between selection for obstetric sufficiency and for overall body size in humans. The relationship between selective pressures may differ among populations of different body sizes and proportions, as pelvic canal dimensions vary among populations. Size and shape of the pelvic canal in relation to body size and shape were examined using nine skeletal samples (total female n = 57; male n = 84) from diverse geographical regions. Pelvic, vertebral, and lower limb bone measurements were collected. Principal component analyses demonstrate pelvic canal size and shape differences among the samples. Male multivariate variance in pelvic shape is greater than female variance for North and South Africans. High-latitude samples have larger and broader bodies, and pelvic canals of larger size and, among females, relatively broader medio-lateral dimensions relative to low-latitude samples, which tend to display relatively expanded inlet antero-posterior (A-P) and posterior canal dimensions. Differences in canal shape exist among samples that are not associated with latitude or body size, suggesting independence of some canal shape characteristics from body size and shape. The South Africans are distinctive with very narrow bodies and small pelvic inlets relative to an elongated lower canal in A-P and posterior lengths. Variation in pelvic canal geometry among populations is consistent with a high degree of evolvability in the human pelvis. Copyright © 2013 Wiley Periodicals, Inc.

  16. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  17. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables, and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
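
The inflated-n mechanism the abstract blames can be seen directly from the AIC formula, AIC = n·ln(RSS/n) + 2k: when m sites yield m(m-1)/2 non-independent pairwise distances, the likelihood term scales with the pair count while the 2k penalty does not. The RSS values below are hypothetical:

```python
from math import log, comb

def aic(n, rss, k):
    """Gaussian-likelihood AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * log(rss / n) + 2 * k

m = 30                               # number of sampled sites
n_pairs = comb(m, 2)                 # 435 pairwise distances treated as observations
rss_simple, rss_full = 100.0, 99.0   # a spurious predictor trims RSS by only 1%

delta_pairs = aic(n_pairs, rss_full, k=3) - aic(n_pairs, rss_simple, k=2)
delta_sites = aic(m, rss_full, k=3) - aic(m, rss_simple, k=2)
```

With n taken as the 435 pairs, the negligible 1% RSS improvement outweighs the penalty and the spurious model "wins" (negative delta); with n taken as the 30 sites, the simpler model is correctly preferred — consistent with the bias worsening as sample size grows.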

  18. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    NASA Astrophysics Data System (ADS)

    Khanh Huynh, Cong; Duc, Trinh Vu

    2009-02-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the collection of simultaneous three dust fractions: inhalable, thoracic and respirable. These samplers based on existing sampling heads (IOM, GSP and cassettes) use Polyurethane Plastic Foam (PUF) according to their porosity to support sampling and separator size of the particles. In this study, the authors present an original application of size selective personal air sampling using chemical impregnated PUF to perform isocyanate aerosols capturing and derivatizing in industrial spray-painting shops.

  19. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  20. 40 CFR 90.706 - Engine sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... = emission test result for an individual engine. x = mean of emission test results of the actual sample. FEL... test with the last test result from the previous model year and then calculate the required sample size.... Test results used to calculate the variables in the following Sample Size Equation must be final...

  1. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    PubMed

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES-hat), and a 95% CI (ES-hat_L, ES-hat_U) calculated, on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES-hat_U), n_U(ES-hat_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES-hat. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES-hat, ES = ES-hat_L, and ES = ES-hat_U. We aimed to provide point and interval estimates of projected sample sizes for future studies reflecting the uncertainty in our study ES-hat values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
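
The post hoc sample-size-from-effect-size calculation can be sketched with the normal approximation to one-sample t-test power, n = ((z_{1-α/2} + z_{power})/ES)². The ES point estimate and CI bounds below are hypothetical, not the study's values, and the exact noncentral-t calculation would differ slightly at small n:

```python
from math import ceil
from statistics import NormalDist

def n_one_sample(es, alpha=0.05, power=0.80):
    """n = ((z_{1-alpha/2} + z_power) / ES)^2, normal approximation to the t-test."""
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha / 2) + z(power)) / es) ** 2)

es_hat, es_lo, es_hi = 0.50, 0.30, 0.70   # hypothetical ES estimate and its 95% CI
n_point = n_one_sample(es_hat)
n_interval = (n_one_sample(es_hi), n_one_sample(es_lo))  # large ES -> small n, and vice versa
```

Because n scales as 1/ES², a modestly wider CI on ES produces the strongly asymmetric sample-size intervals seen in the abstract (e.g. 22 with a CI stretching to 245).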

  2. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  3. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…

  4. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    PubMed

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. 
Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.

  5. 77 FR 2697 - Proposed Information Collection; Comment Request; Annual Services Report

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-19

    ... and from a sample of small- and medium-sized businesses selected using a stratified sampling procedure... be canvassed when the sample is re-drawn, while nearly all of the small- and medium-sized firms from...); Educational Services (NAICS 61); Health Care and Social Assistance (NAICS 62); Arts, Entertainment, and...

  6. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
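
The attenuation from direct truncation on the pretest can be demonstrated with a short simulation; the true correlation of 0.7 and the bottom-30% selection cut are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r_true = 100_000, 0.7
pre = rng.standard_normal(n)
post = r_true * pre + np.sqrt(1 - r_true**2) * rng.standard_normal(n)

full_r = np.corrcoef(pre, post)[0, 1]             # ~0.7 in the unrestricted sample
cut = pre < np.quantile(pre, 0.30)                # direct truncation: select low pretest scorers
trunc_r = np.corrcoef(pre[cut], post[cut])[0, 1]  # attenuated pretest-posttest correlation
```

Restricting the pretest range shrinks its variance, so the observed pretest-posttest correlation drops well below 0.7; since the power gain from a covariate scales with that correlation squared, this is exactly the mechanism that forces the large sample size increases reported above.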

  8. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.

  9. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables, and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance among the tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model (1) with a strong relationship and equally important explanatory variables; (2) with weaker relationships and equally important explanatory variables; and (3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
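    The one-standard-error tree-selection rule evaluated above is easy to state in code: choose the least complex model whose cross-validated risk is within one standard error of the minimum. A minimal sketch; the pruning sequence below is hypothetical, and CART's cross-validation machinery is assumed to have already produced the error estimates:

```python
def one_standard_error_rule(models):
    """Pick the simplest model whose CV error is within one standard
    error of the minimum.  `models` is a list of
    (complexity, cv_error, cv_error_se) tuples."""
    best = min(models, key=lambda m: m[1])
    threshold = best[1] + best[2]
    # Among all models at or below the threshold, prefer the least complex.
    eligible = [m for m in models if m[1] <= threshold]
    return min(eligible, key=lambda m: m[0])

# Hypothetical pruning sequence: (n_leaves, cv_error, se)
seq = [(2, 0.40, 0.03), (4, 0.31, 0.03), (8, 0.28, 0.03), (16, 0.275, 0.03)]
print(one_standard_error_rule(seq))  # -> (8, 0.28, 0.03)
```

    The rule trades a nominally higher estimated risk for a smaller tree, which is why it resists overfitting in the weak-relationship scenarios above.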

  10. 40 CFR 761.353 - Second level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...

  11. 40 CFR 761.353 - Second level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...

  12. 40 CFR 761.353 - Second level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...

  13. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
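    The accuracy-grows-with-sample-size pattern reported above is easy to reproduce on synthetic data. The pure-Python sketch below is illustrative only, not the paper's rsFC pipeline: the feature dimension, noise level, and ridge penalty are assumptions. It fits closed-form ridge regression on training sets of two sizes and scores test-set prediction accuracy as a Pearson correlation, averaged over replications:

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge: w = (X'X + lam*I)^-1 X'y."""
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((a - mv) ** 2 for a in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

random.seed(0)
P = 8  # assumed feature dimension
w_true = [random.gauss(0, 1) for _ in range(P)]

def make(n, noise=2.0):
    X = [[random.gauss(0, 1) for _ in range(P)] for _ in range(n)]
    y = [sum(w * x for w, x in zip(w_true, row)) + random.gauss(0, noise)
         for row in X]
    return X, y

X_test, y_test = make(1000)

def avg_accuracy(n_train, reps=10):
    """Mean test-set prediction r over independent training samples."""
    rs = []
    for _ in range(reps):
        X, y = make(n_train)
        w = ridge_fit(X, y)
        pred = [sum(wi * xi for wi, xi in zip(w, row)) for row in X_test]
        rs.append(pearson(pred, y_test))
    return sum(rs) / reps

acc_small, acc_large = avg_accuracy(20), avg_accuracy(200)
print(round(acc_small, 2), round(acc_large, 2))  # accuracy grows with n
```

    Averaging over replications also shows the stability effect: the spread of per-replication accuracies shrinks as the training set grows.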

  14. Size-selective separation of polydisperse gold nanoparticles in supercritical ethane.

    PubMed

    Williams, Dylan P; Satherley, John

    2009-04-09

    The aim of this study was to use supercritical ethane to selectively disperse alkanethiol-stabilized gold nanoparticles of one size from a polydisperse sample in order to recover a monodisperse fraction of the nanoparticles. A disperse sample of metal nanoparticles with diameters in the range of 1-5 nm was prepared using established techniques then further purified by Soxhlet extraction. The purified sample was subjected to supercritical ethane at a temperature of 318 K in the pressure range 50-276 bar. Particles were characterized by UV-vis absorption spectroscopy, TEM, and MALDI-TOF mass spectroscopy. The results show that with increasing pressure the dispersibility of the nanoparticles increases, this effect is most pronounced for smaller nanoparticles. At the highest pressure investigated a sample of the particles was effectively stripped of all the smaller particles leaving a monodisperse sample. The relationship between dispersibility and supercritical fluid density for two different size samples of alkanethiol-stabilized gold nanoparticles was considered using the Chrastil chemical equilibrium model.
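    The Chrastil model named in the closing sentence is log-linear in the fluid density at fixed temperature, ln S = k ln rho + c, so the association number k falls out of a least-squares fit to log-transformed data. A sketch with hypothetical (invented) dispersibility data, not the measurements from this study:

```python
import math

def chrastil_fit(densities, solubilities):
    """Fit ln S = k * ln(rho) + c at fixed temperature by least squares.
    Returns (k, c); k is the association number in Chrastil's model."""
    xs = [math.log(d) for d in densities]
    ys = [math.log(s) for s in solubilities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return k, my - k * mx

# Hypothetical data: (ethane density, dispersed amount), arbitrary units.
rho = [200.0, 280.0, 350.0, 420.0]
S = [0.8, 3.1, 8.0, 17.5]
k, c = chrastil_fit(rho, S)
print(round(k, 1))
```

    A steep k is what makes pressure (hence density) an effective size-selection knob: a modest density increase multiplies dispersibility, and it does so more strongly for the smaller particles.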

  15. The structured ancestral selection graph and the many-demes limit.

    PubMed

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  16. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 different types regarding body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
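    Point (iii) above, empirical P values for small, sparse tables, can be sketched as a permutation test: shuffle the group labels and recompute the statistic instead of trusting its asymptotic distribution. The allele counts below are invented, not the arctic fox data:

```python
import random

def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, o in enumerate(r):
            e = rows[i] * cols[j] / total
            stat += (o - e) ** 2 / e
    return stat

def empirical_p(table, n_perm=2000, seed=7):
    """Permutation-based P value: shuffle group labels and recompute the
    statistic, rather than relying on the asymptotic chi-square law,
    which is unreliable for small, sparse tables."""
    rng = random.Random(seed)
    observed = chi_square(table)
    # Expand the table back into individual (group, allele) observations.
    obs = [(g, a) for g, row in enumerate(table)
           for a, count in enumerate(row) for _ in range(count)]
    groups = [g for g, _ in obs]
    alleles = [a for _, a in obs]
    k = len(table[0])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(groups)
        perm = [[0] * k for _ in range(len(table))]
        for g, a in zip(groups, alleles):
            perm[g][a] += 1
        if chi_square(perm) >= observed - 1e-12:
            hits += 1
    return hits / n_perm

# Hypothetical sparse allele counts for two body-size groups:
table = [[9, 1, 2], [3, 6, 3]]
print(empirical_p(table))
```

    Because row and column totals are preserved under the shuffle, the permutation distribution respects the sparseness of the data in a way the asymptotic reference distribution does not.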

  17. The Risk of Adverse Impact in Selections Based on a Test with Known Effect Size

    ERIC Educational Resources Information Center

    De Corte, Wilfried; Lievens, Filip

    2005-01-01

    The authors derive the exact sampling distribution function of the adverse impact (AI) ratio for single-stage, top-down selections using tests with known effect sizes. Subsequently, it is shown how this distribution function can be used to determine the risk that a future selection decision on the basis of such tests will result in an outcome that…

  18. Estuarine sediment toxicity tests on diatoms: Sensitivity comparison for three species

    NASA Astrophysics Data System (ADS)

    Moreno-Garrido, Ignacio; Lubián, Luis M.; Jiménez, Begoña; Soares, Amadeu M. V. M.; Blasco, Julián

    2007-01-01

    Experimental populations of three marine and estuarine diatoms were exposed to sediments with different levels of pollutants, collected from the Aveiro Lagoon (NW Portugal). The species selected were Cylindrotheca closterium, Phaeodactylum tricornutum and Navicula sp. Preliminary experiments were designed to determine the influence of the sediment particle size distribution on growth of the species assayed. The percentage of silt-sized sediment affected growth of the selected species under the experimental conditions: the higher the percentage of silt-sized sediment, the lower the growth. Percentages of silt-sized sediment below 10%, however, did not affect growth. In general, C. closterium seems to be slightly more sensitive to the selected sediments than the other two species. Two groups of sediment samples were distinguished as a function of the general response of the exposed microalgal populations: three of the six samples used were more toxic than the other three. Chemical analysis of the samples was carried out in order to determine the specific cause of the differences in toxicity. After a statistical analysis, concentrations of Sn, Zn, Hg, Cu and Cr (among all physico-chemical parameters analyzed), in order of importance, were the factors that best separated the two groups of samples (more and less toxic). Benthic diatoms seem to be sensitive organisms for sediment toxicity tests. Toxicity data from bioassays involving microphytobenthos should be taken into account when environmental risks are calculated.

  19. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
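    The fixed parallel-group baseline in such comparisons typically comes from the normal-approximation sample-size formula. A small sketch; the subgroup effect sizes and marker prevalence below are hypothetical planning numbers, not those of the study:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.8):
    """Per-group n for a two-arm comparison of means (normal approximation):
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * sigma ** 2 * (z(1 - alpha / 2) + z(power)) ** 2
                     / delta ** 2)

# Hypothetical scenario: effect 0.5 SD in marker-positive patients,
# 0.2 SD in marker-negative patients, 50% marker prevalence.
delta_overall = 0.5 * 0.5 + 0.5 * 0.2
print(n_per_arm(0.5))            # enriched (marker-positive only) -> 63
print(n_per_arm(delta_overall))  # unselected population -> 129
```

    The gap between the two numbers is the budget that two-stage designs compete over: interim stopping and enrichment recover part of it, at the price of a more complex family-wise error control.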

  20. Robust gene selection methods using weighting schemes for microarray data analysis.

    PubMed

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
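    The SAM statistic that the proposed methods modify is essentially a two-sample t with a stabilizing constant s0 in the denominator. A minimal sketch: the value of s0 and the toy expression vectors are assumptions (SAM chooses s0 from the data, and the paper's weighting schemes are not reproduced here):

```python
import math

def moderated_t(group_a, group_b, s0=0.5):
    """SAM-style statistic: a two-sample t with a small positive constant
    s0 added to the pooled standard error, damping spuriously large
    statistics for genes with tiny variance (common with few replicates)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    se = sp * math.sqrt(1 / na + 1 / nb)
    return (mb - ma) / (se + s0)

# A gene with a tiny variance and a tiny shift is no longer ranked on top:
quiet = moderated_t([1.00, 1.01, 1.02], [1.05, 1.06, 1.07])
real = moderated_t([1.0, 2.0, 1.5], [4.0, 5.0, 4.5])
print(round(quiet, 2), round(real, 2))
```

    Without s0 the "quiet" gene would score an enormous t purely because its denominator is near zero; with s0 the gene with the substantial shift dominates.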

  1. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  2. Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2007-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the laboratory for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  3. Underwater Microscope for Measuring Spatial and Temporal Changes in Bed-Sediment Grain Size

    USGS Publications Warehouse

    Rubin, David M.; Chezar, Henry; Harney, Jodi N.; Topping, David J.; Melis, Theodore S.; Sherwood, Christopher R.

    2006-01-01

    For more than a century, studies of sedimentology and sediment transport have measured bed-sediment grain size by collecting samples and transporting them back to the lab for grain-size analysis. This process is slow and expensive. Moreover, most sampling systems are not selective enough to sample only the surficial grains that interact with the flow; samples typically include sediment from at least a few centimeters beneath the bed surface. New hardware and software are available for in-situ measurement of grain size. The new technology permits rapid measurement of surficial bed sediment. Here we describe several systems we have deployed by boat, by hand, and by tripod in rivers, oceans, and on beaches.

  4. Influence of tree spatial pattern and sample plot type and size on inventory

    Treesearch

    John-Pascall Berrill; Kevin L. O' Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  5. Atomically precise (catalytic) particles synthesized by a novel cluster deposition instrument

    DOE PAGES

    Yin, C.; Tyo, E.; Kuchta, K.; ...

    2014-05-06

    Here, we report a new high vacuum instrument which is dedicated to the preparation of well-defined clusters supported on model and technologically relevant supports for catalytic and materials investigations. The instrument is based on deposition of size selected metallic cluster ions that are produced by a high flux magnetron cluster source. Furthermore, we maximize the throughput of the apparatus by collecting and focusing ions utilizing a conical octupole ion guide and a linear ion guide. The size selection is achieved by a quadrupole mass filter. The new design of the sample holder provides for the preparation of multiple samples on supports of various sizes and shapes in one session. After cluster deposition onto the support of interest, samples will be taken out of the chamber for a variety of testing and characterization.

  6. Influence of BMI and dietary restraint on self-selected portions of prepared meals in US women.

    PubMed

    Labbe, David; Rytz, Andréas; Brunstrom, Jeffrey M; Forde, Ciarán G; Martin, Nathalie

    2017-04-01

    The rise of obesity prevalence has been attributed in part to an increase in food and beverage portion sizes selected and consumed among overweight and obese consumers. Nevertheless, evidence from observations of adults is mixed and contradictory findings might reflect the use of small or unrepresentative samples. The objective of this study was i) to determine the extent to which BMI and dietary restraint predict self-selected portion sizes for a range of commercially available prepared savoury meals and ii) to consider the importance of these variables relative to two previously established predictors of portion selection, expected satiation and expected liking. A representative sample of female consumers (N = 300, range 18-55 years) evaluated 15 frozen savoury prepared meals. For each meal, participants rated their expected satiation and expected liking, and selected their ideal portion using a previously validated computer-based task. Dietary restraint was quantified using the Dutch Eating Behaviour Questionnaire (DEBQ-R). Hierarchical multiple regression was performed on self-selected portions with age, hunger level, and meal familiarity entered as control variables in the first step of the model, expected satiation and expected liking as predictor variables in the second step, and DEBQ-R and BMI as exploratory predictor variables in the third step. The second and third steps significantly explained variance in portion size selection (18% and 4%, respectively). Larger portion selections were significantly associated with lower dietary restraint and with lower expected satiation. There was a positive relationship between BMI and portion size selection (p = 0.06) and between expected liking and portion size selection (p = 0.06). Our discussion considers future research directions, the limited variance explained by our model, and the potential for portion size underreporting by overweight participants. Copyright © 2016 Nestec S.A. Published by Elsevier Ltd. All rights reserved.

  7. Thoracic and respirable particle definitions for human health risk assessment.

    PubMed

    Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman

    2013-04-10

    Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.
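    The size-selective criteria that the abstract contrasts with its physiology-based estimates are the standard sampling conventions (ISO 7708/ACGIH/CEN): an inhalable fraction multiplied by a cumulative-lognormal penetration curve, with a 50% thoracic cut near 10 μm and a 50% respirable cut near 4 μm. The sketch below uses the parameter values as commonly tabulated (median 11.64 μm for thoracic, 4.25 μm for respirable, geometric SD 1.5); verify against the standard before relying on them:

```python
import math
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def inhalable(d):
    """Inhalable convention; d is aerodynamic diameter in micrometres."""
    return 0.5 * (1 + math.exp(-0.06 * d))

def thoracic(d, median=11.64, gsd=1.5):
    """Thoracic convention: inhalable fraction times a cumulative
    lognormal penetration with the stated median and geometric SD."""
    return inhalable(d) * (1 - PHI(math.log(d / median) / math.log(gsd)))

def respirable(d, median=4.25, gsd=1.5):
    """Respirable convention, same lognormal form with a smaller median."""
    return inhalable(d) * (1 - PHI(math.log(d / median) / math.log(gsd)))

# The conventions put the thoracic 50% cut near 10 um; the paper's
# physiology-based estimates place it nearer 3-5 um.
print(round(thoracic(10.0), 2), round(respirable(4.0), 2))
```

    Evaluating both curves at their nominal cut points returns approximately 0.5, which is exactly the deliberate overestimation relative to measured penetration that the abstract discusses.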

  8. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  9. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  10. Effects of sampling techniques on physical parameters and concentrations of selected persistent organic pollutants in suspended matter.

    PubMed

    Pohlert, Thorsten; Hillebrand, Gudrun; Breitung, Vera

    2011-06-01

    This study focusses on the effect of sampling techniques for suspended matter in stream water on subsequent particle-size distribution and concentrations of total organic carbon and selected persistent organic pollutants. The key questions are whether differences between the sampling techniques are due to the separation principle of the devices or due to the difference between time-proportional versus integral sampling. Several multivariate homogeneity tests were conducted on an extensive set of field-data that covers the period from 2002 to 2007, when up to three different sampling techniques were deployed in parallel at four monitoring stations of the River Rhine. The results indicate homogeneity for polychlorinated biphenyls, but significant effects due to the sampling techniques on particle-size, organic carbon and hexachlorobenzene. The effects can be amplified depending on the site characteristics of the monitoring stations.

  11. Forestry inventory based on multistage sampling with probability proportional to size

    NASA Technical Reports Server (NTRS)

    Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.

    1983-01-01

    A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
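    Selection with probability proportional to size (PPS), as used for the sampling units above, can be sketched with the cumulative-total method: draw a uniform number on (0, total size) and take the unit whose cumulative interval contains it. The stand areas below are invented, not the inventory's data:

```python
import random
from bisect import bisect
from itertools import accumulate

def pps_with_replacement(sizes, n_draws, seed=42):
    """Draw unit indices with probability proportional to size,
    with replacement, via the cumulative-total method."""
    rng = random.Random(seed)
    cum = list(accumulate(sizes))  # half-open cumulative intervals
    total = cum[-1]
    return [bisect(cum, rng.random() * total) for _ in range(n_draws)]

# Hypothetical primary sampling units: stand areas in hectares.
areas = [120, 40, 15, 300, 25]
picks = pps_with_replacement(areas, 10_000)
freq = [picks.count(i) / len(picks) for i in range(len(areas))]
print([round(f, 2) for f in freq])  # approx areas / sum(areas)
```

    Weighting selection by size concentrates effort in the units that contribute most volume, which is what keeps the sampling errors of a multistage design at the levels reported above.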

  12. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample conformed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty on this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. 
We apply our robust methodology to estimate a model representing the main law for each language. Our findings agree with the linguistic conjecture about the rhythm of the languages included in our dataset.
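The sample-selection step can be illustrated with a minimal sketch (our own construction, not the authors' code, and using simple first-order chains rather than variable length models): estimate a smoothed transition matrix for each sample, compute pairwise symmetrized relative entropies, and keep the samples that are close to more than half of the others.

```python
from collections import Counter
from math import log

def transition_probs(seq, alphabet, alpha=0.5):
    # Smoothed first-order transition estimates P(b | a).
    pair = Counter(zip(seq, seq[1:]))
    ctx = Counter(seq[:-1])
    k = len(alphabet)
    return {a: {b: (pair[(a, b)] + alpha) / (ctx[a] + alpha * k)
                for b in alphabet} for a in alphabet}

def kl(p, q, alphabet):
    # Sum over contexts of the KL divergence between the
    # conditional next-symbol laws of p and q.
    return sum(p[a][b] * log(p[a][b] / q[a][b])
               for a in alphabet for b in alphabet)

def majority_cluster(samples, alphabet, tol):
    # Keep each sample whose symmetrized divergence to more than
    # half of all samples falls below the tolerance.
    mats = [transition_probs(s, alphabet) for s in samples]
    keep = []
    for i, p in enumerate(mats):
        close = sum(kl(p, q, alphabet) + kl(q, p, alphabet) < tol
                    for q in mats)
        if close > len(mats) / 2:
            keep.append(i)
    return keep

# Three clean samples from an alternating chain plus one contaminated
# constant sample: the majority cluster drops the contaminated one.
samples = [list("01" * 400)] * 3 + [list("0" * 800)]
kept = majority_cluster(samples, ["0", "1"], tol=0.5)
```

As long as the contaminated fraction stays below one half, the majority cluster recovers the samples drawn from the common law Q, which is the intuition behind the breakdown-point result.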

  13. An anthropometric analysis of Korean male helicopter pilots for helicopter cockpit design.

    PubMed

    Lee, Wonsup; Jung, Kihyo; Jeong, Jeongrim; Park, Jangwoon; Cho, Jayoung; Kim, Heeeun; Park, Seikwon; You, Heecheon

    2013-01-01

This study measured 21 anthropometric dimensions (ADs) of 94 Korean male helicopter pilots in their 20s to 40s and compared them with corresponding measurements of Korean male civilians and US Army male personnel. The ADs and the sample size of the anthropometric survey were determined by a four-step process: (1) selection of ADs related to helicopter cockpit design, (2) evaluation of the importance of each AD, (3) calculation of required sample sizes for selected precision levels and (4) determination of an appropriate sample size by considering both the AD importance evaluation results and the sample size requirements. The anthropometric comparison reveals that the Korean helicopter pilots are larger (ratio of means = 1.01-1.08) and less dispersed (ratio of standard deviations = 0.71-0.93) than the Korean male civilians, and that compared with the US Army personnel they are shorter in stature (0.99), have shorter upper limbs (0.89-0.96) and lower limbs (0.93-0.97), but are taller in sitting height, sitting eye height and acromial height (1.01-1.03), and are less dispersed (0.68-0.97). The sample size determination process and the anthropometric comparison results presented in this study are useful to design an anthropometric survey and a helicopter cockpit layout, respectively.

  14. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  15. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  16. Membrane Bioprobe Electrodes

    ERIC Educational Resources Information Center

    Rechnitz, Garry A.

    1975-01-01

    Describes the design of ion selective electrodes coupled with immobilized enzymes which operate either continuously or on drop-sized samples. Cites techniques for urea, L-phenylalanine and amygdalin. Micro size electrodes for use in single cells are discussed. (GH)

  17. Phenotypic constraints promote latent versatility and carbon efficiency in metabolic networks.

    PubMed

    Bardoscia, Marco; Marsili, Matteo; Samal, Areejit

    2015-07-01

    System-level properties of metabolic networks may be the direct product of natural selection or arise as a by-product of selection on other properties. Here we study the effect of direct selective pressure for growth or viability in particular environments on two properties of metabolic networks: latent versatility to function in additional environments and carbon usage efficiency. Using a Markov chain Monte Carlo (MCMC) sampling based on flux balance analysis (FBA), we sample from a known biochemical universe random viable metabolic networks that differ in the number of directly constrained environments. We find that the latent versatility of sampled metabolic networks increases with the number of directly constrained environments and with the size of the networks. We then show that the average carbon wastage of sampled metabolic networks across the constrained environments decreases with the number of directly constrained environments and with the size of the networks. Our work expands the growing body of evidence about nonadaptive origins of key functional properties of biological networks.

  18. Concentrations of selected constituents in surface-water and streambed-sediment samples collected from streams in and near an area of oil and natural-gas development, south-central Texas, 2011-13

    USGS Publications Warehouse

    Opsahl, Stephen P.; Crow, Cassi L.

    2014-01-01

    During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.

  19. Size and modal analyses of fines and ultrafines from some Apollo 17 samples

    NASA Technical Reports Server (NTRS)

    Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.

    1975-01-01

    Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.

  20. When bigger is not better: selection against large size, high condition and fast growth in juvenile lemon sharks.

    PubMed

    Dibattista, J D; Feldheim, K A; Gruber, S H; Hendry, A P

    2007-01-01

    Selection acting on large marine vertebrates may be qualitatively different from that acting on terrestrial or freshwater organisms, but logistical constraints have thus far precluded selection estimates for the former. We overcame these constraints by exhaustively sampling and repeatedly recapturing individuals in six cohorts of juvenile lemon sharks (450 age-0 and 255 age-1 fish) at an enclosed nursery site (Bimini, Bahamas). Data on individual size, condition factor, growth rate and inter-annual survival were used to test the 'bigger is better', 'fatter is better' and 'faster is better' hypotheses of life-history theory. For age-0 sharks, selection on all measured traits was weak, and generally acted against large size and high condition. For age-1 sharks, selection was much stronger, and consistently acted against large size and fast growth. These results suggest that selective pressures at Bimini may be constraining the evolution of large size and fast growth, an observation that fits well with the observed small size and low growth rate of juveniles at this site. Our results support those of some other recent studies in suggesting that bigger/fatter/faster is not always better, and may often be worse.

  1. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontal and vertical oriented sampling windows of different size (320×160 µm, 160×80 µm and 80×40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone density estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results underscore the importance of specifying the size and orientation of the sampling window used to derive cone metric estimates, to facilitate comparison across studies. PMID:24009995

  2. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    PubMed

    Lackey, Leila G; Stein, Eric D

    2014-04-01

Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.

  3. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
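The simple (expansion) and ratio estimators compared in the study can be sketched as follows, using unit area as the auxiliary variable as in the abstract; the counts, areas, and totals below are hypothetical illustration values, not the study's data:

```python
def simple_estimate(counts, N):
    # Expansion estimator: N frame units times the sample mean count.
    return N * sum(counts) / len(counts)

def ratio_estimate(counts, areas, total_area):
    # Ratio estimator with unit area as the auxiliary variable:
    # scale the total frame area by the sampled animals-per-area ratio.
    return total_area * sum(counts) / sum(areas)

# Hypothetical survey: 5 sampled units out of N = 20, clumped counts.
counts = [12, 0, 3, 45, 7]                  # pronghorn per sampled unit
areas = [10.0, 8.0, 9.0, 12.0, 11.0]        # unit areas (km^2)
n_simple = simple_estimate(counts, N=20)    # 268.0 animals
n_ratio = ratio_estimate(counts, areas, total_area=210.0)  # about 281.4
```

As the abstract notes, the ratio estimator only improves precision when counts scale with the auxiliary variable; with clumped distributions like these, the two estimates can differ without the ratio version being any more precise.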

  4. College Climate and Teacher-Trainee's Academic Work in Selected Colleges of Education in the Ashanti Region of Ghana

    ERIC Educational Resources Information Center

    Adjei, Augustine; Dontoh, Samuel; Baafi-Frimpong, Stephen

    2017-01-01

    The study aimed at investigating the extent to which College climate (Leadership roles/practices and Class size) impact on academic work of Teacher-trainees. A survey research design was used for the study because it involved a study of relatively large population who were purposively and randomly selected. A sample size of 322 out of the…

  5. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimating selectivity parameters are discussed.

  6. A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests

    ERIC Educational Resources Information Center

    Petersen, Calvin R.; Thain, John W.

    1976-01-01

    Relative to test and criterion parameters and cutting scores, the correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)

  7. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required across the endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
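The naive rule and the product-of-powers argument can be sketched with the standard TOST normal approximation for a parallel design (our own illustration, not the authors' exact crossover formulas; all numbers are hypothetical): per-endpoint sample sizes are found first, and the joint power at the naive maximum shows the undershoot for uncorrelated endpoints.

```python
from statistics import NormalDist

_PHI = NormalDist().cdf
_Z = NormalDist().inv_cdf

def tost_power(n, theta, sigma, margin, alpha=0.05):
    # Normal-approximation power of TOST with n subjects per arm,
    # true difference theta and equivalence margin +/- margin.
    se = sigma * (2.0 / n) ** 0.5
    za = _Z(1 - alpha)
    return max(0.0, _PHI((margin - theta) / se - za)
               + _PHI((margin + theta) / se - za) - 1.0)

def n_per_endpoint(theta, sigma, margin, power=0.80, alpha=0.05):
    # Smallest per-arm n reaching the target power for one endpoint.
    n = 2
    while tost_power(n, theta, sigma, margin, alpha) < power:
        n += 1
    return n

# Two hypothetical endpoints: (true difference, SD, margin).
endpoints = [(0.00, 1.0, 0.25), (0.05, 1.2, 0.30)]
ns = [n_per_endpoint(*e) for e in endpoints]
n_naive = max(ns)  # naive rule: largest single-endpoint n

# For uncorrelated endpoints, the power to reject ALL endpoints is the
# product of the individual powers, which falls below the 80% target.
joint = 1.0
for theta, sigma, margin in endpoints:
    joint *= tost_power(n_naive, theta, sigma, margin)
```

Because each factor in the product is at most the single-endpoint power, the naive maximum guarantees 80% for each endpoint separately but not for rejecting all of them jointly, which is the gap the correlation-adjusted method closes.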

  8. Influence of size-fractioning techniques on concentrations of selected trace metals in bottom materials from two streams in northeastern Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Helsel, Dennis R.

    1986-01-01

Identical stream-bottom material samples, when fractioned to the same size by different techniques, may contain significantly different trace-metal concentrations. Precision of techniques also may differ, which could affect the ability to discriminate between size-fractioned bottom-material samples having different metal concentrations. Bottom-material samples fractioned to less than 0.020 millimeters by means of three common techniques (air elutriation, sieving, and settling) were analyzed for six trace metals to determine whether the technique used to obtain the desired particle-size fraction affects the ability to discriminate between bottom materials having different trace-metal concentrations. In addition, this study attempts to assess whether median trace-metal concentrations in size-fractioned bottom materials of identical origin differ depending on the size-fractioning technique used. Finally, this study evaluates the efficiency of the three size-fractioning techniques in terms of time, expense, and effort involved. Bottom-material samples were collected at two sites in northeastern Ohio: one is located in an undeveloped forested basin, and the other in a basin having a mixture of industrial and surface-mining land uses. The sites were selected for their close physical proximity, similar contributing drainage areas, and the likelihood that trace-metal concentrations in the bottom materials would be significantly different. Statistically significant differences in trace-metal concentrations were detected between bottom-material samples collected at the two sites when the samples had been size-fractioned by means of air elutriation or sieving. Samples that had been size-fractioned by settling in native water, however, did not differ measurably in any of the six trace metals analyzed.
Results of multiple comparison tests suggest that differences related to size-fractioning technique were evident in median copper, lead, and iron concentrations. Technique-related differences in copper concentrations most likely resulted from contamination of air-elutriated samples by a feed tip on the elutriator apparatus. No technique-related differences were observed in chromium, manganese, or zinc concentrations. Although air elutriation was the most expensive size-fractioning technique investigated, samples fractioned by this technique appeared to provide a superior level of discrimination between metal concentrations present in the bottom materials of the two sites. Sieving was an adequate, lower-cost, but more labor-intensive alternative.

  9. Methodological issues with adaptation of clinical trial design.

    PubMed

    Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T

    2006-01-01

Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint and dropping a treatment arm. For sample size re-estimation, we make a contrast between an adaptive test weighting the two-stage test statistics with the statistical information given by the original design and the original sample mean test with a properly corrected critical value. We point out the difficulty in planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that the selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one from the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is that of how to select an adaptation rule in the trial planning stage. Pre-specification of the adaptation rule is important for the practicality consideration. Changing the originally intended hypothesis for testing with the internal data raises great concern among clinical trial researchers.

  10. 10 CFR 431.135 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.135 Units to be tested. For each basic model of automatic commercial ice maker selected for testing, a sample of sufficient size shall be selected...

  11. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    PubMed

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various games' scores, reported on the WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted.

  12. Automated fluid analysis apparatus and techniques

    DOEpatents

    Szecsody, James E.

    2004-03-16

    An automated device that couples a pair of differently sized sample loops with a syringe pump and a source of degassed water. A fluid sample is mounted at an inlet port and delivered to the sample loops. A selected sample from the sample loops is diluted in the syringe pump with the degassed water and fed to a flow through detector for analysis. The sample inlet is also directly connected to the syringe pump to selectively perform analysis without dilution. The device is airtight and used to detect oxygen-sensitive species, such as dithionite in groundwater following a remedial injection to treat soil contamination.

  13. A standardized sampling protocol for channel catfish in prairie streams

    USGS Publications Warehouse

    Vokoun, Jason C.; Rabeni, Charles F.

    2001-01-01

    Three alternative gears—an AC electrofishing raft, bankpoles, and a 15-hoop-net set—were used in a standardized manner to sample channel catfish Ictalurus punctatus in three prairie streams of varying size in three seasons. We compared these gears as to time required per sample, size selectivity, mean catch per unit effort (CPUE) among months, mean CPUE within months, effect of fluctuating stream stage, and sensitivity to population size. According to these comparisons, the 15-hoop-net set used during stable water levels in October had the most desirable characteristics. Using our catch data, we estimated the precision of CPUE and size structure by varying sample sizes for the 15-hoop-net set. We recommend that 11–15 repetitions of the 15-hoop-net set be used for most management activities. This standardized basic unit of effort will increase the precision of estimates and allow better comparisons among samples as well as increased confidence in management decisions.

  14. Body Size, Fecundity, and Sexual Size Dimorphism in the Neotropical Cricket Macroanaxipha macilenta (Saussure) (Orthoptera: Gryllidae).

    PubMed

    Cueva Del Castillo, R

    2015-04-01

Body size is directly or indirectly correlated with fitness. The body size that conveys maximal fitness often differs between sexes. Sexual size dimorphism (SSD) evolves because body size tends to be related to reproductive success through different pathways in males and females. In general, female insects are larger than males, suggesting that natural selection for high female fecundity could be stronger than sexual selection in males. I assessed the role of body size and fecundity in SSD in the Neotropical cricket Macroanaxipha macilenta (Saussure). This species shows an SSD bias toward males. Females did not present a correlation between number of eggs and body size. Nonetheless, there were fluctuations in the number of eggs carried by females during the sampling period, and females collected carrying eggs were larger than females collected with no eggs. Since mating induces vitellogenesis in some cricket species, the differences in female body size might suggest male mate choice. Sexual selection on the body size of males of M. macilenta may be stronger than selection for female fecundity. Even so, no mating behavior, including audible male calling or courtship songs, was observed during the field observations, although the males may produce ultrasonic calls due to their size. If female body size in M. macilenta is not directly related to fecundity, the lack of a correlated response to selection on female body size could represent an alternate evolutionary pathway in the evolution of body size and SSD in insects.

  15. Soft γ-ray selected radio galaxies: favouring giant size discovery

    NASA Astrophysics Data System (ADS)

    Bassani, L.; Venturi, T.; Molina, M.; Malizia, A.; Dallacasa, D.; Panessa, F.; Bazzano, A.; Ubertini, P.

    2016-09-01

    Using the recent INTEGRAL/IBIS and Swift/BAT surveys we have extracted a sample of 64 confirmed plus three candidate radio galaxies selected in the soft gamma-ray band. The sample covers all optical classes and is dominated by objects showing a Fanaroff-Riley type II radio morphology; a large fraction (70 per cent) of the sample is made of `radiative mode' or high-excitation radio galaxies. We measured the source size on images from the NRAO VLA Sky Survey, the Faint Images of the Radio Sky at twenty-cm and the Sydney University Molonglo Sky Survey images and have compared our findings with data in the literature obtaining a good match. We surprisingly found that the soft gamma-ray selection favours the detection of large size radio galaxies: 60 per cent of objects in the sample have size greater than 0.4 Mpc while around 22 per cent reach dimension above 0.7 Mpc at which point they are classified as giant radio galaxies (GRGs), the largest and most energetic single entities in the Universe. Their fraction among soft gamma-ray selected radio galaxies is significantly larger than typically found in radio surveys, where only a few per cent of objects (1-6 per cent) are GRGs. This may partly be due to observational biases affecting radio surveys more than soft gamma-ray surveys, thus disfavouring the detection of GRGs at lower frequencies. The main reasons and/or conditions leading to the formation of these large radio structures are still unclear with many parameters such as high jet power, long activity time and surrounding environment all playing a role; the first two may be linked to the type of active galactic nucleus discussed in this work and partly explain the high fraction of GRGs found in the present sample. Our result suggests that high energy surveys may be a more efficient way than radio surveys to find these peculiar objects.

  16. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
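Our reading of the approach can be sketched for logistic regression (an illustration under stated assumptions, not the authors' code): the two equivalent groups differ by the slope times twice the covariate standard deviation on the logit scale, the overall event probability is held fixed, and the usual two-sample proportions formula is then applied.

```python
from math import exp
from statistics import NormalDist

def logistic(x):
    return 1.0 / (1.0 + exp(-x))

def equivalent_two_sample(beta, sd_x, p_bar):
    # Find p1, p2 with logit(p2) - logit(p1) = 2 * beta * sd_x and
    # (p1 + p2) / 2 = p_bar (overall expected events unchanged),
    # by bisection on the midpoint of the two logits.
    delta = 2.0 * beta * sd_x
    lo, hi = -20.0, 20.0
    p1 = p2 = p_bar
    for _ in range(200):
        mid = (lo + hi) / 2.0
        p1 = logistic(mid - delta / 2.0)
        p2 = logistic(mid + delta / 2.0)
        if (p1 + p2) / 2.0 < p_bar:
            lo = mid
        else:
            hi = mid
    return p1, p2

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Standard two-sample comparison of proportions (normal approx.).
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2.0), z(power)
    var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    return (za + zb) ** 2 * var / (p1 - p2) ** 2

# Hypothetical design: slope 0.5, unit-SD covariate, 30% overall events.
p1, p2 = equivalent_two_sample(beta=0.5, sd_x=1.0, p_bar=0.3)
n = n_per_group(p1, p2)
```

Larger slopes or more dispersed covariates widen the gap between the two equivalent proportions and shrink the required sample size, matching the abstract's observation that accuracy degrades mainly for small samples with few events and highly skewed covariates.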

  17. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the snail survey method for Oncomelania hupensis in marshland endemic regions of schistosomiasis, and to improve the precision, efficiency and economy of the survey, a 50 m×50 m experimental quadrat was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region and surveyed for snails with a whole-coverage method. Simple random sampling, systematic sampling and stratified random sampling were then applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for simple random sampling, systematic sampling and stratified random sampling were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
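    The advantage of altitude-stratified sampling reported above can be illustrated with a small simulation. The sketch below (Python; the density field, stratum means and SDs are entirely invented for illustration, not the survey's data) compares the standard error of a simple random sample of 300 plots with a stratified sample of the same total size:

```python
import random
from math import sqrt
from statistics import pvariance

random.seed(1)
# Hypothetical snail-density field: density falls with "altitude",
# mimicking the altitude-stratified marshland plots (values invented).
strata = {"low":  [random.gauss(2.0, 1.0) for _ in range(400)],
          "mid":  [random.gauss(1.0, 0.6) for _ in range(400)],
          "high": [random.gauss(0.2, 0.2) for _ in range(400)]}
population = [x for plots in strata.values() for x in plots]
N = len(population)

n = 300                                    # total sample size
se_srs = sqrt(pvariance(population) / n)   # simple random sampling SE

# Stratified SE with proportional allocation: 100 plots per stratum,
# since the three strata are equally sized.
se_strat = sqrt(sum((len(p) / N) ** 2 * pvariance(p) / 100
                    for p in strata.values()))
# Between-stratum variation inflates the SRS error but not the
# stratified one, so se_strat < se_srs for this field.
```

    The design gain comes from removing the between-stratum component of variance, which is exactly why an altitude stratum variable helps when density varies systematically with elevation.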

  18. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,” dated...

  19. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  20. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE PAGES

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; ...

    2018-03-15

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  1. Perspective: Size selected clusters for catalysis and electrochemistry

    NASA Astrophysics Data System (ADS)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; Vajda, Stefan

    2018-03-01

    Size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization, and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions, and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modeling based on density functional theory sampling of local minima and energy barriers or ab initio molecular dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Finally, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  2. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  3. Particle size variations between bed load and bed material in natural gravel bed channels

    Treesearch

    Thomas E. Lisle

    1995-01-01

    Abstract - Particle sizes of bed load and bed material that represent materials transported and stored over a period of years are used to investigate selective transport in 13 previously sampled, natural gravel bed channels. The ratio (D*) of median particle size of bed material to the transport- and frequency-weighted mean of median bed load size decreases to unity...

  4. [Diet, selectivity and trophic overlap between the sizes of silverside Menidia humboldtiana (Atheriniformes: Atherinopsidae) in the reservoir Tiacaque, Mexico].

    PubMed

    Sánchez, Regina; Ochoa, Abigahil; Mendoza, Angélica

    2013-06-01

    Menidia humboldtiana, a species native to Mexico, is a common inhabitant of local reservoirs. It is a highly appreciated fish of economic importance in the central part of the country because of its delicate flavor, and its trophic behavior is important for understanding its relationships with other fish species in reservoirs. To study this topic, the trophic spectrum, selectivity coefficient and trophic overlap were determined among different sizes of the silverside M. humboldtiana. Both zooplankton and fish samples were taken during four seasons of 1995. Zooplankton samples were collected through a 125 μm mesh, and all organisms were identified to the generic level. Fish were captured and grouped into standard length intervals per season, and their stomach contents were obtained and analyzed. Trophic interactions were assessed through stomach content analysis (Laevastu method), the coefficient of selection (Chesson) and the trophic overlap (Morisita index modified by Horn) between sizes. A total of 14 zooplankton genera were identified, of which Bosmina was the most abundant (29 625 ind./10 L), followed by Cyclops (9 496 ind./10 L) during the spring. Small fishes (1-4.9 cm) consumed high percentages of Cyclops in the spring (61.24%) and winter (69.82%). Ceriodaphnia was consumed by fish of 3-10.9 cm (72.41%) and 13-14.9 cm (95.5%) during the summer; while in autumn, small sizes (1-4.9 cm) ingested Mastigodiaptomus and Ceriodaphnia, Daphnia and Bosmina were consumed by fishes of 5-8.9 cm, and the largest sizes (9-14.9 cm) fed on Ceriodaphnia. M. humboldtiana preys selectively on the genera Ceriodaphnia, Daphnia, Mastigodiaptomus, Bosmina and Cyclops, depending on the length interval. Trophic overlap was very marked among all sizes in spring, autumn and winter, whereas in summer, fish of 1-2.9 and 11-12.9 cm did not overlap with other length intervals. M. humboldtiana is a zooplanktivorous species that preys selectively and shows marked trophic overlap between the different fish sizes.

  5. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
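    The sample sizes quoted above (299, 59 and 29) can be reproduced with one common form of the success-run relation, n = ln(1 - C) / ln(R) for confidence C and reliability R, assuming a zero-failure run. The sketch below uses that form (the function name is mine):

```python
from math import ceil, log

def success_run_n(reliability, confidence=0.95):
    """Success-run sample size: number of consecutive passing units
    needed to claim `reliability` with `confidence`, assuming zero
    failures (one common form of the Bayes success run theorem)."""
    return ceil(log(1 - confidence) / log(reliability))

# Risk-based reliability levels from the text:
# high risk -> 99%, medium risk -> 95%, low risk -> 90%
sizes = [success_run_n(r) for r in (0.99, 0.95, 0.90)]  # -> [299, 59, 29]
```

    The numbers match the paper's high-, medium- and low-risk sample sizes, which supports reading the reliability levels as zero-failure demonstration targets at 95% confidence.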

  6. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators

    PubMed Central

    Strickland, Bradley A.; Vilella, Francisco J.; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance and space use can provide great insight into their functional role in the ecosystem. PMID:27588947

  7. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    USGS Publications Warehouse

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance and space use can provide great insight into their functional role in the ecosystem.

  8. Hierarchical modeling of cluster size in wildlife surveys

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
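    The cluster-size bias described above is easy to demonstrate by simulation. The sketch below (Python; the size distribution and detection model are invented for illustration, not taken from the paper) makes per-cluster detectability grow with size and shows the detected-sample mean exceeding the population mean:

```python
import random
from statistics import mean

random.seed(42)
# Hypothetical population of 10,000 clusters (sizes are simulated).
sizes = [1 + int(random.expovariate(1 / 3)) for _ in range(10000)]

# Detection model: each individual is spotted independently with
# probability 0.2, and a cluster is detected if any member is spotted,
# so detectability grows with cluster size.
p0 = 0.2
detected = [s for s in sizes if random.random() < 1 - (1 - p0) ** s]

# mean(detected) exceeds mean(sizes): the cluster-size bias that the
# hierarchical model is built to correct.
bias = mean(detected) - mean(sizes)
```

    Any estimator that multiplies a cluster count by the observed mean cluster size would inherit this positive bias, which is the motivation for modeling the observation process conditional on the cluster size distribution.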

  9. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    PubMed

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible sib pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (theta) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2*theta)^4.
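    The two relations quoted above are simple enough to state directly; the helper names below are mine, not the authors':

```python
def sib_pairs(n):
    """Number of possible sib pairs in a sibship of size n: n*(n-1)/2.
    Per the abstract, linkage/exclusion information grows roughly at
    this rate up to sibships of size 6."""
    return n * (n - 1) // 2

def attenuation(theta):
    """Approximate power attenuation with recombination fraction theta,
    (1 - 2*theta)**4, as stated in the abstract."""
    return (1 - 2 * theta) ** 4

# Sibships of size 2..6 contribute 1, 3, 6, 10, 15 pairs;
# theta = 0.1 attenuates power by a factor of 0.8**4 = 0.4096.
```

    So moving a marker from complete linkage (theta = 0) to theta = 0.1 already costs roughly 60% of the power, which is why dense marker maps matter for quantitative-trait linkage studies.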

  10. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.

  11. DOSESCREEN: a computer program to aid dose placement

    Treesearch

    Kimberly C. Smith; Jacqueline L. Robertson

    1984-01-01

    Careful selection of an experimental design for a bioassay substantially improves the precision of effective dose (ED) estimates. Design considerations typically include determination of sample size, dose selection, and allocation of subjects to doses. DOSESCREEN is a computer program written to help investigators select an efficient design for the estimation of an...

  12. Minimum Sample Size Requirements for Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…

  13. 29 CFR 1607.15 - Documentation of impact and validity evidence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (essential). (6) Sample description. A description of how the research sample was identified and selected... the size of each subgroup (essential). A description of how the research sample compares with the...). Any quantitative data which identify or define the job constructs, such as factor analyses, should be...

  14. 29 CFR 1607.15 - Documentation of impact and validity evidence.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (essential). (6) Sample description. A description of how the research sample was identified and selected... the size of each subgroup (essential). A description of how the research sample compares with the...). Any quantitative data which identify or define the job constructs, such as factor analyses, should be...

  15. 29 CFR 1607.15 - Documentation of impact and validity evidence.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (essential). (6) Sample description. A description of how the research sample was identified and selected... the size of each subgroup (essential). A description of how the research sample compares with the...). Any quantitative data which identify or define the job constructs, such as factor analyses, should be...

  16. 29 CFR 1607.15 - Documentation of impact and validity evidence.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (essential). (6) Sample description. A description of how the research sample was identified and selected... the size of each subgroup (essential). A description of how the research sample compares with the...). Any quantitative data which identify or define the job constructs, such as factor analyses, should be...

  17. Highly selective and sensitive determination of dopamine in biological samples via tuning the particle size of label-free gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Mohseni, Naimeh; Bahram, Morteza

    2018-03-01

    Herein, a rapid, sensitive and selective approach for the colorimetric detection of dopamine (DA) was developed utilizing unmodified gold nanoparticles (AuNPs). This assay relied upon the size-dependent aggregation behavior of DA and three other structurally similar catecholamines (CAs), offering highly specific and accurate detection of DA. In this study, we attempted to overcome the tedious procedures of surface premodification and to achieve selectivity through tuning the particle size of the AuNPs. DA could induce the aggregation of the AuNPs via hydrogen-bonding interactions, resulting in a color change from pink to blue which can be monitored by spectrophotometry or even the naked eye. The proposed colorimetric probe works over the 0.1 to 4 μM DA concentration range, with a lower detection limit (LOD) of 22 nM, which is well below the lowest abnormal concentrations of DA in urine (0.57 μM) and blood (16 μM) samples. Furthermore, the selectivity and potential applicability of the developed method in spiked real biological (human plasma and urine) specimens were investigated, suggesting that the present assay could satisfy the requirements for clinical diagnostics and biosensors.

  18. Structure and Mechanical Properties of the AlSi10Mg Alloy Samples Manufactured by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Li, Xiaodan; Ni, Jiaqiang; Zhu, Qingfeng; Su, Hang; Cui, Jianzhong; Zhang, Yifei; Li, Jianzhong

    2017-11-01

    AlSi10Mg alloy samples with dimensions of 14×14×91 mm were produced by selective laser melting (SLM) in different building directions, and their structures and mechanical properties at -70 °C were investigated. The results show that the structure differs with building direction: fish-scale structures appear on the side parallel to the building direction, while oval structures appear on the side perpendicular to the building direction. Some pores with a maximum size of 100 μm exist in the structure, and the build orientation has no major influence on the tensile properties. The tensile strength and elongation of the sample built along the building direction are 340 MPa and 11.2%, respectively, and those of the sample built perpendicular to the building direction are 350 MPa and 13.4%, respectively.

  19. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    PubMed Central

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various game scores reported on WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples from online surveys is warranted. PMID:25001007

  20. Capital Budgeting Decisions with Post-Audit Information

    DTIC Science & Technology

    1990-06-08

    estimates that were used during project selection. In similar fashion, this research introduces the equivalent sample size concept that permits the... equivalent sample size is extended to include the user's prior beliefs. 4. For a management tool, the concepts for Cash Flow Control Charts are... Accounting Research, vol. 7, no. 2, Autumn 1969, pp. 215-244. [9] Gaynor, Edwin W., "Use of Control Charts in Cost Control", National Association of Cost

  1. Sampling methods for amphibians in streams in the Pacific Northwest.

    Treesearch

    R. Bruce Bury; Paul Stephen Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  2. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  3. Photo series for quantifying natural forest residues: southern Cascades, northern Sierra Nevada

    Treesearch

    Kenneth S. Blonski; John L. Schramel

    1981-01-01

    A total of 56 photographs shows different levels of natural fuel loadings for selected size classes in seven forest types of the southern Cascade and northern Sierra-Nevada ranges. Data provided with each photo include size, weight, volumes, residue depths, and percent of ground coverage. Stand information includes sizes, weights, and volumes of the trees sampled for...

  4. Inherent size effects on XANES of nanometer metal clusters: Size-selected platinum clusters on silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Yang; Gorey, Timothy J.; Anderson, Scott L.

    2016-12-12

    X-ray absorption near-edge structure (XANES) is commonly used to probe the oxidation state of metal-containing nanomaterials; however, as the particle size in the material drops below a few nanometers, it becomes important to consider inherent size effects on the electronic structure of the materials. In this paper, we analyze a series of size-selected Ptn/SiO2 samples, using X-ray photoelectron spectroscopy (XPS), low energy ion scattering, grazing-incidence small angle X-ray scattering, and XANES. The oxidation state and morphology are characterized both as-deposited in UHV, and after air/O2 exposure and annealing in H2. Here, the clusters are found to be stable during deposition and upon air exposure, but sinter if heated above ~150 °C. XANES shows shifts in the Pt L3 edge, relative to bulk Pt, that increase with decreasing cluster size, and the cluster samples show high white line intensity. Reference to bulk standards would suggest that the clusters are oxidized; however, XPS shows that they are not. Instead, the XANES effects are attributable to the development of a band gap and the localization of empty-state wavefunctions in small clusters.

  5. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and serves as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fit by the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution observed across the Angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and not autocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
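
    The extreme-value fit reported above can be sketched with a method-of-moments Gumbel estimator. This is a minimal illustration on synthetic data, not the Festuca measurements; the parameter values are invented.

    ```python
    import math
    import random

    def fit_gumbel(values):
        """Method-of-moments fit of a Gumbel distribution.
        Uses sd = beta*pi/sqrt(6) and mean = mu + gamma*beta
        (gamma = Euler-Mascheroni constant)."""
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        beta = math.sqrt(6.0 * var) / math.pi
        mu = mean - 0.5772156649 * beta
        return mu, beta

    random.seed(1)
    # synthetic positively skewed "genome sizes": Gumbel draws via inverse CDF
    true_mu, true_beta = 10.0, 0.4
    data = [true_mu - true_beta * math.log(-math.log(random.random()))
            for _ in range(5000)]
    mu_hat, beta_hat = fit_gumbel(data)
    ```

    With 5000 draws the moment estimates land close to the generating parameters; on real data one would also compare the fit against log-normal and normal alternatives, as the study does.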

  6. The size-reduced Eudragit® RS microparticles prepared by solvent evaporation method - monitoring the effect of selected variables on tested parameters.

    PubMed

    Vasileiou, Kalliopi; Vysloužil, Jakub; Pavelková, Miroslava; Vysloužil, Jan; Kubová, Kateřina

    2018-01-01

Size-reduced microparticles were successfully obtained by the solvent evaporation method. Different parameters were applied in each sample and their influence on the microparticles was evaluated. As a model drug, the insoluble ibuprofen was selected for encapsulation with Eudragit® RS. The obtained microparticles were inspected by optical microscopy and scanning electron microscopy. The effects of aqueous phase volume (600, 400, 200 ml) and polyvinyl alcohol (PVA) concentration (1.0% and 0.1%) were studied, along with how these variations and particle size affect microparticle characteristics such as encapsulation efficiency, drug loading, burst effect and morphology. The sample prepared with 600 ml aqueous phase and 1% PVA gave the most favorable results. Key words: microparticles, solvent evaporation, sustained drug release, Eudragit® RS.

  7. The Empirical Selection of Anchor Items Using a Multistage Approach

    ERIC Educational Resources Information Center

    Craig, Brandon

    2017-01-01

    The purpose of this study was to determine if using a multistage approach for the empirical selection of anchor items would lead to more accurate DIF detection rates than the anchor selection methods proposed by Kopf, Zeileis, & Strobl (2015b). A simulation study was conducted in which the sample size, percentage of DIF, and balance of DIF…

  8. Resource selection by Indiana bats during the maternity season

    Treesearch

    Kathryn M. Womack; Sybill K. Amelon; Frank R. Thompson

    2013-01-01

    Little information exists on resource selection by foraging Indiana bats (Myotis sodalis) during the maternity season. Existing studies are based on modest sample sizes because of the rarity of this endangered species and the difficulty of radio-tracking bats. Our objectives were to determine resource selection by foraging Indiana bats during the maternity season and...

  9. Assessment of Users Information Needs and Satisfaction in Selected Seminary Libraries in Oyo State, Nigeria

    ERIC Educational Resources Information Center

    Adekunjo, Olalekan Abraham; Adepoju, Samuel Olusegun; Adeola, Anuoluwapo Odebunmi

    2015-01-01

The study assessed users' information needs and satisfaction in selected seminary libraries in Oyo State, Nigeria. This paper employed the descriptive survey research design, whereby an ex-post-facto approach was used with a sample size of three hundred (300) participants selected from six seminaries located in Ibadan, Oyo and Ogbomoso, all in Oyo…

  10. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    PubMed

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  11. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples whose parent population was falsely identified as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples whose parent population was falsely identified as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
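
    The core failure mode can be reproduced in miniature: draw many small samples (n = 30) from a lognormal "patient population" and build a Gaussian parametric reference interval (mean ± 1.96 SD) from each, as if the sample had passed a normality test. The population parameters below are assumptions for illustration, not the study's simulated populations.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # 2000 replicates of a small lognormal sample, RI built parametrically
    n, reps = 30, 2000
    samples = rng.lognormal(mean=0.0, sigma=0.5, size=(reps, n))

    means = samples.mean(axis=1)
    sds = samples.std(axis=1, ddof=1)
    lower = means - 1.96 * sds   # parametric RI lower limits
    upper = means + 1.96 * sds   # parametric RI upper limits

    # true central 95% interval of the lognormal parent population
    true_lo, true_hi = np.exp(-1.96 * 0.5), np.exp(1.96 * 0.5)

    avg_lower, avg_upper = float(lower.mean()), float(upper.mean())
    ```

    On average the parametric limits sit well below the true population quantiles (the lower limit is typically negative, which is impossible for a lognormal analyte), illustrating the clinically relevant inaccuracy the abstract describes.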

  12. Selectivity evaluation for two experimental gill-net configurations used to sample Lake Erie walleyes

    USGS Publications Warehouse

    Vandergoot, Christopher S.; Kocovsky, Patrick M.; Brenden, Travis O.; Liu, Weihai

    2011-01-01

    We used length frequencies of captured walleyes Sander vitreus to indirectly estimate and compare selectivity between two experimental gill-net configurations used to sample fish in Lake Erie: (1) a multifilament configuration currently used by the Ohio Department of Natural Resources (ODNR) with stretched-measure mesh sizes ranging from 51 to 127 mm and a constant filament diameter (0.37 mm); and (2) a monofilament configuration with mesh sizes ranging from 38 to 178 mm and varying filament diameter (range = 0.20–0.33 mm). Paired sampling with the two configurations revealed that the catch of walleyes smaller than 250 mm and larger than 600 mm was greater in the monofilament configuration than in the multifilament configuration, but the catch of 250–600-mm fish was greater in the multifilament configuration. Binormal selectivity functions yielded the best fit to observed walleye catches for both gill-net configurations based on model deviances. Incorporation of deviation terms in the binormal selectivity functions (i.e., to relax the assumption of geometric similarity) further improved the fit to observed catches. The final fitted selectivity functions produced results similar to those from the length-based catch comparisons: the monofilament configuration had greater selectivity for small and large walleyes and the multifilament configuration had greater selectivity for mid-sized walleyes. Computer simulations that incorporated the fitted binormal selectivity functions indicated that both nets were likely to result in some bias in age composition estimates and that the degree of bias would ultimately be determined by the underlying condition, mortality rate, and growth rate of the Lake Erie walleye population. 
Before the ODNR switches its survey gear, additional comparisons of the different gill-net configurations, such as fishing the net pairs across a greater range of depths and at more locations in the lake, should be conducted to maintain congruence in the fishery-independent survey time series.

  13. Herbivorous insect response to group selection cutting in a southeastern bottomland hardwood forest

    Treesearch

    Michael D. Ulyshen; James L. Hanula; Scott Horn; John C. Kilgo; Christopher E. Moorman

    2005-01-01

    Malaise and pitfall traps were used to sample herbivorous insects in canopy gaps created by group-selection cutting in a bottomland hardwood forest in South Carolina. The traps were placed at the centers, edges, and in the forest adjacent to gaps of different sizes (0.13, 0.26, and 0.50 ha) and ages (1 and 7 yr old) during four sampling periods in 2001. Overall, the...

  14. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets with an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample sizes, and it is therefore a suitable testbed for our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
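
    One common resampling strategy for such imbalanced multiclass data is simple random oversampling: duplicate minority-class samples until every class matches the largest class count. A minimal stdlib sketch (the dataset and labels are invented; the paper's own resampling procedure may differ in detail):

    ```python
    import random
    from collections import defaultdict

    def oversample_to_balance(samples, labels, seed=0):
        """Randomly duplicate minority-class samples (with replacement)
        until every class reaches the largest class count."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for x, y in zip(samples, labels):
            by_class[y].append(x)
        target = max(len(v) for v in by_class.values())
        out_x, out_y = [], []
        for y, xs in by_class.items():
            picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
            for x in picks:
                out_x.append(x)
                out_y.append(y)
        return out_x, out_y

    # toy imbalanced dataset: class "a" has 6 samples, "b" has 2, "c" has 1
    X = list(range(9))
    y = ["a"] * 6 + ["b"] * 2 + ["c"]
    Xb, yb = oversample_to_balance(X, y)
    counts = {label: yb.count(label) for label in set(yb)}
    ```

    After resampling, each class contributes equally to training, which is the property the RF training strategy relies on for the classes with fewer than 15 samples.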

  15. Recent advances of mesoporous materials in sample preparation.

    PubMed

    Zhao, Liang; Qin, Hongqiang; Wu, Ren'an; Zou, Hanfa

    2012-03-09

    Sample preparation has been playing an important role in the analysis of complex samples. Mesoporous materials as the promising adsorbents have gained increasing research interest in sample preparation due to their desirable characteristics of high surface area, large pore volume, tunable mesoporous channels with well defined pore-size distribution, controllable wall composition, as well as modifiable surface properties. The aim of this paper is to review the recent advances of mesoporous materials in sample preparation with emphases on extraction of metal ions, adsorption of organic compounds, size selective enrichment of peptides/proteins, specific capture of post-translational peptides/proteins and enzymatic reactor for protein digestion. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Sampling for Global Epidemic Models and the Topology of an International Airport Network

    PubMed Central

    Bobashev, Georgiy; Morris, Robert J.; Goedecke, D. Michael

    2008-01-01

Mathematical models that describe the global spread of infectious diseases such as influenza, severe acute respiratory syndrome (SARS), and tuberculosis (TB) often consider a sample of international airports as a network supporting disease spread. However, there is no consensus on how many cities should be selected or on how to select those cities. Using airport flight data that commercial airlines reported to the Official Airline Guide (OAG) in 2000, we have examined the network characteristics of network samples obtained under different selection rules. In addition, we have examined samples of different sizes based on largest flight volume and largest metropolitan populations. We have shown that although the bias in network characteristics increases with the reduction of the sample size, a relatively small number of areas that includes the largest airports, the largest cities, the most-connected cities, and the most central cities is enough to describe the dynamics of the global spread of influenza. The analysis suggests that a relatively small number of cities (around 200 or 300 out of almost 3000) can capture enough network information to adequately describe the global spread of a disease such as influenza. Weak traffic flows between small airports can contribute to noise and mask other means of spread such as ground transportation. PMID:18776932
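
    The "keep the busiest airports" selection rule can be illustrated on an invented heavy-tailed traffic distribution (airport volumes are roughly power-law distributed; the numbers below are not OAG data):

    ```python
    # 300 hypothetical "airports" with Pareto-like traffic volumes
    volumes = {f"A{i}": int(1000 / (i + 1) ** 1.2) + 1 for i in range(300)}

    def top_k_share(volumes, k):
        """Fraction of total traffic retained when keeping the k busiest nodes."""
        ranked = sorted(volumes, key=volumes.get, reverse=True)
        kept = sum(volumes[a] for a in ranked[:k])
        return kept / sum(volumes.values())

    share_30 = top_k_share(volumes, 30)   # keep the 30 busiest of 300
    ```

    Because the volume distribution is heavy-tailed, a small top-volume sample retains a large majority of total traffic, which is the intuition behind the abstract's finding that a few hundred cities suffice.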

  17. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

The Box-Jenkins - GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200). Each sample is tested using different combinations of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size for forecasting the gold price with this framework is 1250 observations (a 5-year sample). Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (5 years) of the 10200 observations is sufficient for the Box-Jenkins - GARCH model, with forecasting performance similar to that obtained using the full 41 years of data.
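
    The window-size comparison step of such a framework can be sketched with a toy AR(1) forecaster: refit on each candidate training window, score 1-step-ahead forecasts, and pick the window with the lowest error. The study's actual models are Box-Jenkins with GARCH errors; the synthetic series, candidate windows, and regime shift below are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic series with a regime shift mid-way, standing in for the
    # gold-price data (older observations follow different dynamics)
    n = 1500
    phi = np.where(np.arange(n) < 1000, 0.3, 0.9)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi[t] * x[t - 1] + rng.normal()

    def one_step_mae(series, window, n_test=300):
        """Refit AR(1) on a rolling window, score 1-step-ahead forecasts."""
        errs = []
        for t in range(len(series) - n_test, len(series)):
            hist = series[t - window:t]
            phi_hat = np.dot(hist[1:], hist[:-1]) / np.dot(hist[:-1], hist[:-1])
            errs.append(abs(series[t] - phi_hat * series[t - 1]))
        return float(np.mean(errs))

    maes = {w: one_step_mae(x, w) for w in (50, 200, 1200)}
    best_window = min(maes, key=maes.get)
    ```

    Which window wins depends on the series: when old observations come from a different regime, the largest window is not necessarily best, which is why the framework evaluates each candidate sample size empirically rather than defaulting to all available data.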

  18. Visual search by chimpanzees (Pan): assessment of controlling relations.

    PubMed Central

    Tomonaga, M

    1995-01-01

    Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449

  19. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  20. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions are fitted and validated using data from a few selected states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor, but the proposed methodology for calibrating SDFs was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The analysis indicated that as the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
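
    The HSM-style scalar calibration reduces to a ratio of locally observed to model-predicted crash counts; the CV of the observed severities, which the proposed sample-size guidelines key on, is computed alongside. The site counts below are invented for illustration.

    ```python
    def calibration_factor(observed, predicted):
        """Scalar calibration factor C = sum(observed) / sum(predicted)."""
        return sum(observed) / sum(predicted)

    observed_kab = [12, 7, 9, 15, 11]            # severe (KAB) crashes at 5 sites
    predicted_kab = [10.2, 8.1, 7.5, 12.9, 10.8]  # SDF-predicted severe crashes
    C = calibration_factor(observed_kab, predicted_kab)

    # coefficient of variation of the observed severe-crash counts
    mean = sum(observed_kab) / len(observed_kab)
    var = sum((o - mean) ** 2 for o in observed_kab) / (len(observed_kab) - 1)
    cv = var ** 0.5 / mean
    ```

    A higher CV of the observed counts would, per the paper's guidelines, call for a larger calibration sample before the estimated C can be trusted.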

  1. Freeze-frame fruit selection by birds

    USGS Publications Warehouse

    Foster, Mercedes S.

    2008-01-01

    The choice of fruits by an avian frugivore is affected by choices it makes at multiple hierarchical levels (e.g., species of fruit, individual tree, individual fruit). Factors that influence those choices vary among levels in the hierarchy and include characteristics of the environment, the tree, and the fruit itself. Feeding experiments with wild-caught birds were conducted at El Tirol, Departamento de Itapua, Paraguay to test whether birds were selecting among individual fruits based on fruit size. Feeding on larger fruits, which have proportionally more pulp, is generally more efficient than feeding on small fruits. In trials (n = 56) with seven species of birds in four families, birds selected larger fruits 86% of the time. However, in only six instances were size differences significant, which is likely a reflection of small sample sizes.

  2. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that are sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example fixing the sample size of the control group, leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
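
    The inflation mechanism can be demonstrated with a small Monte-Carlo sketch: a one-sided z-test at nominal alpha = 0.05 under the null, where the second-stage sample size depends on the interim result but the final test pretends the total n was prespecified. The adaptation rule below is an illustrative choice, not the paper's worst-case search, and the design has a single treatment arm rather than the many-to-one setting studied.

    ```python
    import numpy as np

    rng = np.random.default_rng(123)

    sims, n1 = 200_000, 50
    z1 = rng.normal(size=sims)            # stage-1 z statistics under H0
    n2 = np.where(z1 > 1.5, 10, 200)      # data-driven stage-2 sample size
    z2 = rng.normal(size=sims)            # stage-2 z statistics under H0

    # naive pooled z treats (n1 + n2) as if it had been fixed in advance
    z_pooled = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
    alpha_adaptive = float(np.mean(z_pooled > 1.645))

    # reference: the same test with a prespecified stage-2 size holds its level
    z_fixed = (np.sqrt(n1) * z1 + np.sqrt(200) * z2) / np.sqrt(n1 + 200)
    alpha_fixed = float(np.mean(z_fixed > 1.645))
    ```

    With the fixed design the rejection rate stays at the nominal 5%, while the data-driven rule pushes it noticeably above 5%, mirroring (in simplified form) the inflation the paper quantifies for multiarmed designs with treatment selection.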

  3. Prospective Evaluation of Intraprostatic Inflammation and Focal Atrophy as a Predictor of Risk of High-Grade Prostate Cancer and Recurrence after Prostatectomy

    DTIC Science & Technology

    2014-07-01

the two trials. The expected sample size for this work was 100 cases and 200 controls. Tissue was sufficient for 291 of the men (Task 2 completed in...not collected in SELECT), physical activity (PCPT [not collected in SELECT]), cigarette smoking status at randomization (SELECT), use of aspirin

  4. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
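
    The sequential mixture idea can be sketched in a toy form: with some probability the next unit is traced from the links of already-sampled "high-value" units (the active set), otherwise it is drawn at random, and the walk stops at a fixed sample size. The graph, values, and mixture weight below are invented; the paper's designs also specify the inference machinery (sufficient-statistic averaging via Markov chain resampling), which this sketch omits.

    ```python
    import random

    def adaptive_sample(neighbors, values, n_target, p_adapt=0.7, seed=3):
        """Toy adaptive link-tracing sampler with a fixed final sample size."""
        rng = random.Random(seed)
        units = list(neighbors)
        sample = [rng.choice(units)]
        while len(sample) < n_target:
            # active set: unsampled neighbors of sampled units with high values
            frontier = {v for u in sample if values[u] > 0
                        for v in neighbors[u] if v not in sample}
            if frontier and rng.random() < p_adapt:
                nxt = rng.choice(sorted(frontier))    # adaptive (link-traced) pick
            else:
                remaining = [u for u in units if u not in sample]
                nxt = rng.choice(remaining)           # conventional random pick
            sample.append(nxt)
        return sample

    # small ring network with a "hot" cluster of positive-valued nodes
    neighbors = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
    values = {i: 1 if i in (3, 4, 5) else 0 for i in range(10)}
    s = adaptive_sample(neighbors, values, n_target=6)
    ```

    The `p_adapt` weight is the design lever the abstract mentions: it controls the proportion of effort allocated to adaptive selections while the stopping rule controls the sample size.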

  5. Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2017-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. 
This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.

  6. Variation in aluminum, iron, and particle concentrations in oxic groundwater samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    NASA Astrophysics Data System (ADS)

    Szabo, Zoltan; Oden, Jeannette H.; Gibs, Jacob; Rice, Donald E.; Ding, Yuan

    2002-02-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. 
Concentrations of particles were determined by light scattering.

  7. Using size-selected gold clusters on graphene oxide films to aid cryo-transmission electron tomography alignment

    PubMed Central

    Arkill, Kenton P.; Mantell, Judith M.; Plant, Simon R.; Verkade, Paul; Palmer, Richard E.

    2015-01-01

A three-dimensional reconstruction of a nano-scale aqueous object can be achieved by taking a series of transmission electron micrographs tilted at different angles in vitreous ice: cryo-Transmission Electron Tomography. Presented here is a novel method of fine alignment for the tilt series. Size-selected gold clusters of ~2.7 nm (Au561 ± 14), ~3.2 nm (Au923 ± 22), and ~4.3 nm (Au2057 ± 45) in diameter were deposited onto separate graphene oxide films overlaying holes on amorphous carbon grids. After plunge freezing and subsequent transfer to cryo-Transmission Electron Tomography, the resulting tomograms have excellent (de-)focus and alignment properties during automatic acquisition. Fine alignment is accurate when the evenly distributed 3.2 nm gold particles are used as fiducial markers, as demonstrated with a reconstruction of a tobacco mosaic virus. Using a graphene oxide film means the fiducial markers do not interfere with the ice-bound sample and that automated collection is consistent. The use of pre-deposited size-selected clusters means there is no aggregation and the concentration is user-defined. The size-selected clusters are mono-dispersed and can be produced in a wide size range, including 2–5 nm in diameter. The use of size-selected clusters on graphene oxide films represents a significant technical advance for 3D cryo-electron microscopy. PMID:25783049

  8. Native microflora in fresh-cut processing plants and their potentials of biofilm formation

    USDA-ARS?s Scientific Manuscript database

    Representative food contact and non-food contact surfaces in two mid-sized fresh cut processing facilities were sampled for microbiological analyses post routine daily sanitization. Mesophilic and psychrotrophic bacteria on the sampled surfaces were isolated by plating on non-selective bacterial med...

  9. An Analysis of Methods Used to Examine Gender Differences in Computer-Related Behavior.

    ERIC Educational Resources Information Center

    Kay, Robin

    1992-01-01

    Review of research investigating gender differences in computer-related behavior examines statistical and methodological flaws. Issues addressed include sample selection, sample size, scale development, scale quality, the use of univariate and multivariate analyses, regressional analysis, construct definition, construct testing, and the…

  10. [The research protocol III. Study population].

    PubMed

    Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

The study population is defined as a set of cases, determined, limited, and accessible, that will constitute the subjects for the selection of the sample; it must fulfill several distinct criteria and characteristics. The objectives of this manuscript are focused on specifying each one of the elements required to make the selection of the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria and sampling methods. After delineating the study population, the researcher must specify the criteria with which each participant has to comply. The criteria that include the specific characteristics are denominated selection or eligibility criteria. These criteria are inclusion, exclusion and elimination, and will delineate the eligible population. The sampling methods are divided in two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the employment of statistical methods to select the subjects. In every research project, it is necessary to establish at the beginning the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and can be calculated or estimated with mathematical formulas and statistical software.
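The abstract notes that the sample size can be calculated with standard formulas. As a minimal illustration (not taken from the article), here is the common formula for estimating a single proportion, n = z²p(1−p)/d², with a 95% confidence default; the function name and example values are hypothetical:

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Minimum n to estimate a proportion p to within +/- margin
    at ~95% confidence (z = 1.96): n = z^2 * p * (1 - p) / margin^2,
    rounded up to a whole participant."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Worst case p = 0.5 with a 5-point margin gives the familiar n ~ 385
print(sample_size_proportion(0.5, 0.05))  # 385
print(sample_size_proportion(0.3, 0.05))  # 323
```

In practice, this estimate is then adjusted upward for the anticipated non-response or elimination rate.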

  11. Size separation of analytes using monomeric surfactants

    DOEpatents

    Yeung, Edward S.; Wei, Wei

    2005-04-12

A sieving medium for use in the separation of analytes in a sample containing at least one such analyte comprises a monomeric non-ionic surfactant of the general formula, B-A, wherein A is a hydrophilic moiety and B is a hydrophobic moiety, present in a solvent at a concentration forming a self-assembled micelle configuration under selected conditions and having an aggregation number providing an equivalent weight capable of effecting the size separation of the sample solution so as to resolve a target analyte(s) in a solution containing the same, the size separation taking place in a chromatography or electrophoresis separation system.

  12. Satisfaction with social networks: an examination of socioemotional selectivity theory across cohorts.

    PubMed

    Lansford, J E; Sherman, A M; Antonucci, T C

    1998-12-01

    This study examines L. L. Carstensen's (1993, 1995) socioemotional selectivity theory within and across three cohorts spanning 4 decades. Socioemotional selectivity theory predicts that as individuals age, they narrow their social networks to devote more emotional resources to fewer relationships with close friends and family. Data from 3 cohorts of nationally representative samples were analyzed to determine whether respondents' satisfaction with the size of their social networks differed by age, cohort, or both. Results support socioemotional selectivity theory: More older adults than younger adults were satisfied with the current size of their social networks rather than wanting larger networks. These findings are consistent across all cohorts. Results are discussed with respect to social relationships across the life course.

  13. Heritabilities of measured and mid-infrared predicted milk fat globule size, milk fat and protein percentages, and their genetic correlations.

    PubMed

    Fleming, A; Schenkel, F S; Koeck, A; Malchiodi, F; Ali, R A; Corredig, M; Mallard, B; Sargolzaei, M; Miglior, F

    2017-05-01

The objective of this study was to estimate the heritability of milk fat globule (MFG) size and mid-infrared (MIR) predicted MFG size in Holstein cattle. The genetic correlations between measured and predicted MFG size with milk fat and protein percentage were also investigated. Average MFG size was measured in 1,583 milk samples taken from 254 Holstein cows from 29 herds across Canada. Size was expressed as volume moment mean (D[4,3]) and surface moment mean (D[3,2]). Analyzed milk samples also had average MFG size predicted from their MIR spectral records. Fat and protein percentages were obtained for all test-day milk samples in the cow's lactation. Univariate and bivariate repeatability animal models were used to estimate heritability and genetic correlations. Moderate heritabilities of 0.364 and 0.466 were found for D[4,3] and D[3,2], respectively, and a strong genetic correlation was found between the 2 traits (0.98). The heritabilities for the MIR-predicted MFG size were lower than those estimated for the measured MFG size at 0.300 for predicted D[4,3] and 0.239 for predicted D[3,2]. The genetic correlation between measured and predicted D[4,3] was 0.685; the correlation was slightly higher between measured and predicted D[3,2] at 0.764, likely due to the better prediction accuracy of D[3,2]. Milk fat percentage had moderate genetic correlations with both D[4,3] and D[3,2] (0.538 and 0.681, respectively). The genetic correlation between predicted MFG size and fat percentage was much stronger (greater than 0.97 for both predicted D[4,3] and D[3,2]). The stronger correlation suggests a limitation for the use of the predicted values of MFG size as indicator traits for true average MFG size in milk in selection programs. Larger sample sizes are required to provide better evidence of the estimated genetic parameters. A genetic component appears to exist for the average MFG size in bovine milk, and the variation could be exploited in selection programs. 
Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.B.; Cushing, K.M.; Johnson, J.W.

    1982-05-01

The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.

  15. Geochemical and radiological characterization of soils from former radium processing sites

    USGS Publications Warehouse

    Landa, E.R.

    1984-01-01

Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (<45 µm) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.

  16. Employee Engagement and Performance of Lecturers in Nigerian Tertiary Institutions

    ERIC Educational Resources Information Center

    Agbionu, Uchenna Clementina; Anyalor, Maureen; Nwali, Anthony Chukwuma

    2018-01-01

The study investigated employee engagement and performance of lecturers in Nigerian Tertiary Institutions. It employed descriptive and correlation research designs. Stratified random sampling was used to select three tertiary institutions in Nigeria, and the sample size of 314 lecturers was obtained using Taro Yamane's formula. Questionnaires were…
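Taro Yamane's simplified formula, referred to in the abstract above, is n = N / (1 + N·e²) for population size N and margin of error e. A minimal sketch; the population figure in the example is hypothetical, not the study's actual sampling frame:

```python
import math

def taro_yamane(N, e=0.05):
    """Taro Yamane's simplified sample-size formula:
    n = N / (1 + N * e^2), where N is the population size
    and e is the desired margin of error."""
    return math.ceil(N / (1 + N * e ** 2))

# Hypothetical population of 1,500 lecturers at e = 0.05
print(taro_yamane(1500))  # 316
```

For large N the formula approaches 1/e² (400 at e = 0.05), which is why reported sample sizes in such studies cluster in the low hundreds.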

  17. A Powder Delivery System (PoDS) for Mars in situ Science

    NASA Astrophysics Data System (ADS)

    Bryson, C.; Blake, D.; Saha, C.; Sarrazin, P.

    2004-12-01

Many instruments proposed for in situ Mars science investigations work best with fine-grained samples of rocks or soils. Such instruments include the mineral analyzer CheMin [1] and any instrument that requires samples having high surface areas (e.g., mass spectrometers, organic analyzers, etc). The Powder Delivery System (PoDS) is designed to deliver powders of selected grain sizes from a sample acquisition device such as an arm-deployed robotic driller or corer to an instrument suite located on the body of a rover/lander. PoDS is capable of size-selective sampling of crushed rocks, soil or drill powder for delivery to instruments that require specific grain sizes (e.g. 5-50 mg of less than 150 micron powder for CheMin). Sample material is transported as an aerosol of particles and gas by vacuum advection. In the laboratory a venturi pump driven by compressed air provides the impulse. On Mars, the ambient atmosphere is a source of CO2 that can be captured and compressed by adsorption pumping during diurnal temperature cycling [2]. The lower atmospheric pressure on the surface of Mars (7 torr) will affect fundamental parameters of gas-particle interaction such as the Reynolds, Stokes and Knudsen numbers [3]. However, calculations show that the PoDS will operate under both Martian and terrestrial atmospheric conditions. Cyclone separators with appropriate particle size selection ranges remove particles from the aerosol stream. The vortex flow inside the cyclone causes grains larger than a specific size to be collected, while smaller grains remain entrained in the gas. Cyclones are very efficient inertial and centrifugal particle separators with cut sizes (d50) as low as 4 microns. Depending on the particle size ranges desired, a series of cyclones with descending cut sizes may be used, the simplest case being a single cyclone for particle deposition without mass separation. 
Transmission / membrane filters of appropriate pore sizes may also be used to collect powder from the aerosol stream. Results of a number of tests of the prototype PoDS will be presented. [1] Blake D. F., Sarrazin P., Bish D. L., Feldman S., Chipera S. J, Vaniman D.T., and Collins S., 2004, Definitive Mineralogical Analysis of Mars Analog Rocks Using the CheMin XRD/XRF Instrument, LPSC XXXV abstr. #1794 (CD-ROM). [2] Finn J. E., McKay C. P. and Sridhar R. K., 1999, Martian Atmosphere Utilization by Temperature-Swing Adsorption, University of Arizona, Publication No.961597, http://stl.ame.arizona.edu/publications/961597.pdf [3] Hinds W. C., 1999, Aerosol Technology - Properties, Behavior, and Measurement of Airborne Particles, Second edition, John Wiley & Sons, Inc., pp 15-67, 111-136.

  18. Factors Affecting the Adoption of R&D Project Selection Techniques at the Air Force Wright Aeronautical Laboratories

    DTIC Science & Technology

    1988-09-01

tested. To measure the adequacy of the sample, the Kaiser-Meyer-Olkin measure of sampling adequacy was used. This technique is described in Factor... Due to the relatively large number of variables, there was concern about the adequacy of the sample size. A Kaiser-Meyer-Olkin

  19. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
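The kind of simulation described above can be sketched in a few lines: draw samples from two specified multivariate normal distributions, classify them, and estimate the error rate. The means, covariance, and sample counts below are illustrative choices, not the report's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical class-conditional bivariate normal populations
mean0, mean1 = np.zeros(2), np.array([2.0, 2.0])
cov = np.eye(2)
n = 5000
x0 = rng.multivariate_normal(mean0, cov, size=n)
x1 = rng.multivariate_normal(mean1, cov, size=n)

def classify(x):
    """Minimum-distance rule (Bayes-optimal here, given equal priors
    and identity covariance): assign each point to the nearer mean."""
    d0 = ((x - mean0) ** 2).sum(axis=1)
    d1 = ((x - mean1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Estimated error rate; the theoretical Bayes error for this
# geometry is Phi(-sqrt(8)/2), roughly 0.079
err = (np.mean(classify(x0) == 1) + np.mean(classify(x1) == 0)) / 2
print(round(err, 3))
```

Comparing the simulated error against closed-form quantities such as the Chernoff bound, as the abstract describes, then only requires evaluating the bound for the same pair of distributions.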

  20. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    PubMed

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs, therefore, need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will be shifted to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels with respect to leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using Poisson approximation. Initially, villages/towns are selected from the population and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of Type I error of 5% and 90% Power. 
It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
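The LQAS design described above amounts to finding a sample size n and decision value c such that, under the Poisson approximation, the Type I error and power requirements are met. A sketch of that search; the prevalence thresholds in the example are hypothetical, not the study's actual design values:

```python
from math import exp

def poisson_cdf(c, lam):
    """P(X <= c) for X ~ Poisson(lam), summed term by term."""
    term, total = exp(-lam), exp(-lam)
    for k in range(1, c + 1):
        term *= lam / k
        total += term
    return total

def lqas_design(p_low, p_high, alpha=0.05, beta=0.10):
    """Smallest n (searched in steps of 1000) and decision value c
    such that an area at the acceptable prevalence p_low is flagged
    with probability <= alpha (Type I error), while an area at the
    elevated prevalence p_high escapes flagging with probability
    <= beta (i.e., power >= 1 - beta)."""
    for n in range(1000, 200001, 1000):
        for c in range(0, 50):
            if (1 - poisson_cdf(c, n * p_low) <= alpha
                    and poisson_cdf(c, n * p_high) <= beta):
                return n, c
    raise ValueError("no design found in search range")

# Hypothetical thresholds: 0.5 vs. 2 cases per 10,000 population
n, c = lqas_design(0.5 / 10000, 2 / 10000)
print(n, c)
```

The rarity of leprosy is what drives the large n: with expected counts this small, only substantial samples separate the two Poisson means.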

  1. Size-selective separation of submicron particles in suspensions with ultrasonic atomization.

    PubMed

    Nii, Susumu; Oka, Naoyoshi

    2014-11-01

Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized for separating particles of a specific size. With the help of a fog involving fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from suspensions. Performance of the separation was characterized by analyzing the size and the concentration of collected particles with a high-resolution method. Irradiation of 2.4 MHz ultrasound to sample suspensions allowed the separation of particles of specific size from 90 to 320 nm regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles, and achieved a remarkable increase in the number of collected particles. Degassing of the sample suspension eliminated the separation performance. Dissolved air in suspensions plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. An Evaluation of Sharp Cut Cyclones for Sampling Diesel Particulate Matter Aerosol in the Presence of Respirable Dust

    PubMed Central

    Cauda, Emanuele; Sheehan, Maura; Gussman, Robert; Kenny, Lee; Volkwein, Jon

    2015-01-01

Two prototype cyclones were the subjects of a comparative research campaign with a diesel particulate matter sampler (DPMS) that consists of a respirable cyclone combined with a downstream impactor. The DPMS is currently used in mining environments to separate dust from the diesel particulate matter and to avoid interferences in the analysis of integrated samples and direct-reading monitoring in occupational environments. The sampling characteristics of all three devices were compared using ammonium fluorescein, diesel, and coal dust aerosols. With solid spherical test aerosols at low particle loadings, the aerodynamic size-selection characteristics of all three devices were found to be similar, with 50% penetration efficiencies (d50) close to the design value of 0.8 µm, as required by the US Mine Safety and Health Administration for monitoring occupational exposure to diesel particulate matter in US mining operations. The prototype cyclones were shown to have ‘sharp cut’ size-selection characteristics that equaled or exceeded the sharpness of the DPMS. The penetration of diesel aerosols was optimal for all three samplers, while the results of the tests with coal dust led to the exclusion of one of the prototypes from subsequent testing. The sampling characteristics of the remaining prototype sharp cut cyclone (SCC) and the DPMS were tested with different loadings of coal dust. While the characteristics of the SCC remained constant, the deposited respirable coal dust particles altered the size-selection performance of the currently used sampler. This study demonstrates that the SCC performed better overall than the DPMS. PMID:25060240

  3. Towards well-defined gold nanomaterials via diafiltration and aptamer mediated synthesis

    NASA Astrophysics Data System (ADS)

    Sweeney, Scott Francis

Gold nanoparticles have garnered recent attention due to their intriguing size- and shape-dependent properties. Routine access to well-defined gold nanoparticle samples in terms of core diameter, shape, peripheral functionality and purity is required in order to carry out fundamental studies of their properties and to utilize these properties in future applications. For this reason, the development of methods for preparing well-defined gold nanoparticle samples remains an area of active research in materials science. In this dissertation, two methods, diafiltration and aptamer mediated synthesis, are explored as possible routes towards well-defined gold nanoparticle samples. It is shown that diafiltration has considerable potential for the efficient and convenient purification and size separation of water-soluble nanoparticles. The suitability of diafiltration for (i) the purification of water-soluble gold nanoparticles, (ii) the separation of a bimodal distribution of nanoparticles into fractions, (iii) the fractionation of a polydisperse sample and (iv) the isolation of trimers from monomers and aggregates is studied. NMR, thermogravimetric analysis (TGA), and X-ray photoelectron spectroscopy (XPS) measurements demonstrate that diafiltration produces highly pure nanoparticles. UV-visible spectroscopic and transmission electron microscopic analyses show that diafiltration offers the ability to separate nanoparticles of disparate core size, including linked nanoparticles. These results demonstrate the applicability of diafiltration for the rapid and green preparation of high-purity gold nanoparticle samples and the size separation of heterogeneous nanoparticle samples. In the second half of the dissertation, the identification of materials-specific aptamers and their use to synthesize shaped gold nanoparticles is explored. 
The use of in vitro selection for identifying materials specific peptide and oligonucleotide aptamers is reviewed, outlining the specific requirements of in vitro selection for materials and the ways in which the field can be advanced. A promising new technique, in vitro selection on surfaces (ISOS), is developed and the discovery using ISOS of RNA aptamers that bind to evaporated gold is discussed. Analysis of the isolated gold binding RNA aptamers indicates that they are highly structured with single-stranded polyadenosine binding motifs. These aptamers, and similarly isolated peptide aptamers, are briefly explored for their ability to synthesize gold nanoparticles. This dissertation contains both previously published and unpublished co-authored material.

  4. Operationalizing hippocampal volume as an enrichment biomarker for amnestic MCI trials: effect of algorithm, test-retest variability and cut-point on trial cost, duration and sample size

    PubMed Central

    Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.

    2014-01-01

    Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
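The trade-off the abstract models, enrichment shrinking the required sample size, can be illustrated with the standard normal-approximation formula for a two-sample comparison; the effect sizes below are hypothetical, not values from the ADNI analysis:

```python
import math

def n_per_arm(effect_size, z_alpha=1.96, z_power=1.2816):
    """Normal-approximation per-arm sample size for a two-sample
    comparison: n = 2 * (z_alpha + z_power)^2 / d^2,
    with two-sided alpha = 0.05 and power = 0.90 by default."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Hypothetical: if enrichment on a biomarker raises the standardized
# treatment effect from 0.25 to 0.35, per-arm n falls by about half,
# comparable in scale to the ~40-60% reductions reported above
print(n_per_arm(0.25), n_per_arm(0.35))  # 337 172
```

The remaining trial-cost question is then the screen-fail rate: a stricter cut-point raises the effect size but forces more subjects to be screened per enrolled subject, which is the trade-off the authors quantify.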

  5. Revealing strong bias in common measures of galaxy properties using new inclination-independent structures

    NASA Astrophysics Data System (ADS)

    Devour, Brian M.; Bell, Eric F.

    2017-06-01

Accurate measurement of galaxy structures is a prerequisite for quantitative investigation of galaxy properties or evolution. Yet, the impact of galaxy inclination and dust on commonly used metrics of galaxy structure is poorly quantified. We use infrared data sets to select inclination-independent samples of disc and flattened elliptical galaxies. These samples show strong variation in Sérsic index, concentration and half-light radii with inclination. We develop novel inclination-independent galaxy structures by collapsing the light distribution in the near-infrared on to the major axis, yielding inclination-independent 'linear' measures of size and concentration. With these new metrics we select a sample of Milky Way analogue galaxies with similar stellar masses, star formation rates, sizes and concentrations. Optical luminosities, light distributions and spectral properties are all found to vary strongly with inclination: when inclined to edge-on, r-band luminosities dim by >1 magnitude, sizes decrease by a factor of 2, 'dust-corrected' estimates of star formation rate drop threefold, metallicities decrease by 0.1 dex and edge-on galaxies are half as likely to be classified as star forming. These systematic effects should be accounted for in analyses of galaxy properties.

  6. Correlation between standard Charpy and sub-size Charpy test results of selected steels in upper shelf region

    NASA Astrophysics Data System (ADS)

    Konopík, P.; Džugan, J.; Bucki, T.; Rzepa, S.; Rund, M.; Procházka, R.

    2017-02-01

Absorbed energy obtained from Charpy impact tests is one of the most important values in many applications, for example in residual lifetime assessment of components in service. Minimal absorbed energy is often the value crucial for extending components' service life, e.g. of turbines, boilers and steam lines. Using portable electric discharge sampling equipment (EDSE), it is possible to sample experimental material non-destructively and subsequently produce mini-Charpy specimens. This paper presents a new approach to correlating sub-size and standard Charpy test results.

  7. [Size structure, selectivity and specific composition of the catch in traps for marine fish in the Gulf of California].

    PubMed

    Nevárez-Martínez, Manuel O; Balmori-Ramírez, Alejandro; Miranda-Mier, Everardo; Santos-Molina, J Pablo; Méndez-Tenorio, Francisco J; Cervantes-Valle, Celio

    2008-09-01

We analyzed the performance of three traps for marine fish between October 2005 and August 2006 in the Gulf of California, Mexico. The performance was measured as differences in selectivity, fish diversity, size structure and yield. The samples were collected with quadrangular traps 90 cm wide, 120 cm long and 50 cm high. Trap type 1 had a 5 x 5 cm mesh (type 2: 5 x 5 cm including a rear panel of 5 x 10 cm; trap 3: 5 x 10 cm). Most abundant in our traps were: Goldspotted sand bass (Paralabrax auroguttatus), Ocean whitefish (Caulolatilus princeps), Spotted sand bass (P. maculatofasciatus) and Bighead tilefish (C. affinis); there was no bycatch. The number of fish per trap per haul decreased when mesh size was increased. We also observed a direct relationship between mesh size and average fish length. By comparing our traps with the authorized fishing gear (hooks-and-line) we found that the size structure is larger in traps. Traps with larger mesh size were more selective. Consequently, we recommend adding traps to hooks-and-line as authorized fishing gear in the small-scale fisheries of the Sonora coast, Mexico.

  8. Food selection and feeding relationships of yellow perch 'Perca flavescens' (mitchell), white bass 'Morone chrysops' (rafinesque), freshwater drum 'Aplodinotus grunniens' (rafinesque), and goldfish 'Carassius auratus' (linneaus) in western Lake Erie. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenaga, D.E.; Cole, R.A.

    1975-10-01

The study was undertaken as part of an investigation of the impact of once-through cooling at a large power plant in western Lake Erie and is an attempt to assess the relationship among fish based on foods consumed. Potential food organisms and stomach contents of yellow perch, white bass, freshwater drum and goldfish were sampled and compared over a two-year period. On the basis of differences in food size alone, young-of-the-year fish did not appear to be in competition, but as they became larger, all but goldfish consumed the same mean size foods. Within a fish species, mean prey size varied little in fish older than age class zero. Goldfish differed markedly by lacking the prey size selectivity demonstrated by the other fish species. Some ramifications of food size and prey selectivity in relation to trophic dynamics, feeding efficiency, composition and distribution of fish species, and the use of cooling water by large power plants and their possible impact upon prey sizes are discussed. (GRA)

  9. Selective Laser Melting of Metal Powder of Steel 316L

    NASA Astrophysics Data System (ADS)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Tomilina, T. M.

    2016-08-01

This article presents the results of an experimental study of the structure and mechanical properties of materials obtained by selective laser melting (SLM) of 316L steel metal powder. Before the samples were built, as input control, the morphology of the surface of the powder particles was studied and particle-size analysis was carried out. In addition, 3D X-ray quality control of the built samples was carried out in order to detect hidden defects and assess them qualitatively and quantitatively. To determine the strength characteristics of the samples synthesized by the SLM method, static tensile tests were conducted. To determine the stresses in the sample material, X-ray diffraction analysis was carried out.

  10. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size

    PubMed Central

    Richman, Julie D.; Livi, Kenneth J.T.; Geyh, Alison S.

    2011-01-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was −0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected. PMID:21625364

  11. A Scanning Transmission Electron Microscopy Method for Determining Manganese Composition in Welding Fume as a Function of Primary Particle Size.

    PubMed

    Richman, Julie D; Livi, Kenneth J T; Geyh, Alison S

    2011-06-01

    Increasing evidence suggests that the physicochemical properties of inhaled nanoparticles influence the resulting toxicokinetics and toxicodynamics. This report presents a method using scanning transmission electron microscopy (STEM) to measure the Mn content throughout the primary particle size distribution of welding fume particle samples collected on filters for application in exposure and health research. Dark field images were collected to assess the primary particle size distribution and energy-dispersive X-ray and electron energy loss spectroscopy were performed for measurement of Mn composition as a function of primary particle size. A manual method incorporating imaging software was used to measure the primary particle diameter and to select an integration region for compositional analysis within primary particles throughout the size range. To explore the variation in the developed metric, the method was applied to 10 gas metal arc welding (GMAW) fume particle samples of mild steel that were collected under a variety of conditions. The range of Mn composition by particle size was -0.10 to 0.19 %/nm, where a positive estimate indicates greater relative abundance of Mn increasing with primary particle size and a negative estimate conversely indicates decreasing Mn content with size. However, the estimate was only statistically significant (p<0.05) in half of the samples (n=5), which all had a positive estimate. In the remaining samples, no significant trend was measured. Our findings indicate that the method is reproducible and that differences in the abundance of Mn by primary particle size among welding fume samples can be detected.
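    The size-composition metric described above, a %/nm slope of Mn content against primary particle diameter, can be sketched as an ordinary least-squares fit. The function below is illustrative only and is not the authors' code.

```python
def mn_size_slope(diameters_nm, mn_percent):
    """Ordinary least-squares slope, in %/nm, of Mn composition against
    primary particle diameter (illustrative sketch, not the published code).
    A positive slope means Mn abundance increases with particle size."""
    n = len(diameters_nm)
    mean_d = sum(diameters_nm) / n
    mean_mn = sum(mn_percent) / n
    sxy = sum((d - mean_d) * (m - mean_mn)
              for d, m in zip(diameters_nm, mn_percent))
    sxx = sum((d - mean_d) ** 2 for d in diameters_nm)
    return sxy / sxx
```

    For example, measurements in which Mn rises linearly from 1% at 10 nm to 4% at 40 nm give a slope of 0.1 %/nm, within the −0.10 to 0.19 %/nm range reported above.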

  12. The Impact of School Bullying on Students' Academic Achievement from Teachers Point of View

    ERIC Educational Resources Information Center

    Al-Raqqad, Hana Khaled; Al-Bourini, Eman Saeed; Al Talahin, Fatima Mohammad; Aranki, Raghda Michael Elias

    2017-01-01

    The study aimed to investigate the impact of school bullying on students' academic achievement from teachers' perspective in Jordanian schools. The study used a descriptive analytical methodology. The research population consisted of all school teachers in the Amman West Area (in Jordan). The sample comprised 200 teachers selected from different…

  13. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  14. Construction of the Examination Stress Scale for Adolescent Students

    ERIC Educational Resources Information Center

    Sung, Yao-Ting; Chao, Tzu-Yang

    2015-01-01

    The tools used for measuring examination stress have three main limitations: sample selected, sample sizes, and measurement contents. In this study, we constructed the Examination Stress Scale (ExamSS), and 4,717 high school students participated in this research. The results indicate that ExamSS has satisfactory reliability, construct validity,…

  15. Number of pins in two-stage stratified sampling for estimating herbage yield

    Treesearch

    William G. O' Regan; C. Eugene Conrad

    1975-01-01

    In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...
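    The record above notes that sample size and the split of plots between the two stages "are determined by equations" without stating them. As a hedged stand-in, the classical double-sampling (two-phase) allocation balances per-stage costs against the correlation between the cheap measurement (pin counts) and the expensive one (clipped dry weight); the cost and correlation values below are hypothetical.

```python
import math

def double_sampling_allocation(total_budget, c1, c2, rho):
    """Textbook two-phase allocation sketch (not the authors' exact
    equations). Phase 1: cheap pin-frame reading at cost c1 per plot.
    Phase 2: clip, sort, dry, and weigh at cost c2 per plot.
    rho: correlation between pin count and dry weight.
    Returns (n1, n2), the phase-1 and phase-2 plot counts that minimize
    variance at fixed total cost c1*n1 + c2*n2 = total_budget."""
    # Optimal ratio of expensive to cheap plots from the Lagrange condition:
    ratio = math.sqrt((1.0 - rho**2) / rho**2 * c1 / c2)
    n1 = total_budget / (c1 + c2 * ratio)
    n2 = n1 * ratio
    return n1, n2
```

    With a strong pin-weight correlation, most of the budget goes to the cheap pin-frame stage and only a small fraction of plots is clipped.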

  16. Measurement and Analysis of Porosity in Al-10Si-1Mg Components Additively Manufactured by Selective Laser Melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Suraj; Cunningham, Ross; Ozturk, Tugce

    Aluminum alloys are candidate materials for weight-critical applications because of their excellent strength and stiffness to weight ratios. However, defects such as voids decrease the strength and fatigue life of these alloys, which can limit the application of Selective Laser Melting. In this study, the average volume fraction, average size, and size distribution of pores in Al-10Si-1Mg samples built using Selective Laser Melting have been characterized. Synchrotron high-energy X-rays were used to perform computed tomography on volumes of order one cubic millimeter with a resolution of approximately 1.5 μm. Substantial variations in the pore size distributions were found as a function of process conditions. Even under conditions that ensured that all locations were melted at least once, a significant number density of pores above 5 μm in diameter was found.

  17. Geochemical and radiological characterization of soils from former radium processing sites.

    PubMed

    Landa, E R

    1984-02-01

    Soil samples were collected from former radium processing sites in Denver, CO, and East Orange, NJ. Particle-size separations and radiochemical analyses of selected samples showed that while the greatest contents of both 226Ra and U were generally found in the finest (less than 45 micron) fraction, the pattern was not always of progressive increase in radionuclide content with decreasing particle size. Leaching tests on these samples showed a large portion of the 226Ra and U to be soluble in dilute hydrochloric acid. Radon-emanation coefficients measured for bulk samples of contaminated soil were about 20%. Recovery of residual uranium and vanadium, as an adjunct to any remedial action program, appears unlikely due to economic considerations.

  18. Scan for allele frequency differences from pooled samples in lines of pigs selected for components of litter size

    USDA-ARS?s Scientific Manuscript database

    Direct single trait selection within two seasonal replicates for 11 generations resulted in a 1.6 pig advantage for uterine capacity (UC) and a 3.0 advantage for ovulation rate (OR) compared to an unselected control (CO) population. Our objective was to gain insight and identify genetic loci impacte...

  19. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10^9 to 1×10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  20. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10^9 to 1×10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
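    The selection logic described above, luminosity-dependent redshift cuts that flatten the number density per unit stellar mass, can be sketched as follows. The linear mapping and all coefficients below are hypothetical placeholders, not the published MaNGA cuts.

```python
def z_window(M_i, a=0.006, b=-0.002, width=0.02):
    """Hypothetical linear mapping from absolute i-band magnitude M_i
    (more negative = brighter) to a [z_min, z_max) selection window.
    Brighter galaxies get a higher-redshift window, which flattens the
    target number density as a function of stellar mass. All
    coefficients are illustrative, not the published values."""
    z_min = a + b * (M_i + 21.0)
    return z_min, z_min + width

def in_primary_sample(M_i, z):
    """Membership test for the sketch 'Primary'-style sample."""
    z_min, z_max = z_window(M_i)
    return z_min <= z < z_max
```

    The IFU size distribution then follows from the angular size (in effective radii) of the galaxies admitted by these windows.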

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Jaejin; Woo, Jong-Hak; Mulchaey, John S.

    We perform a comprehensive study of X-ray cavities using a large sample of X-ray targets selected from the Chandra archive. The sample is selected to cover a large dynamic range including galaxy clusters, groups, and individual galaxies. Using β-modeling and unsharp masking techniques, we investigate the presence of X-ray cavities for 133 targets that have sufficient X-ray photons for analysis. We detect 148 X-ray cavities from 69 targets and measure their properties, including cavity size, angle, and distance from the center of the diffuse X-ray gas. We confirm the strong correlation between cavity size and distance from the X-ray center similar to previous studies. We find that the detection rates of X-ray cavities are similar among galaxy clusters, groups and individual galaxies, suggesting that the formation mechanism of X-ray cavities is independent of environment.
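    The β-modeling mentioned above refers to the standard β-model for the surface brightness of diffuse X-ray gas; cavities appear as negative residuals after subtracting this smooth profile. A minimal sketch of the profile (parameter values in the test are arbitrary):

```python
def beta_model(r, S0, r_c, beta):
    """Standard isothermal beta-model surface-brightness profile:
    S(r) = S0 * (1 + (r/r_c)^2) ** (0.5 - 3*beta),
    where S0 is the central brightness and r_c the core radius.
    Cavity detection subtracts this smooth model from the image and
    looks for significant surface-brightness deficits."""
    return S0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta)
```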

  2. The analysis of various size, visually selected and density and magnetically separated fractions of Luna 16 and 20 samples

    NASA Technical Reports Server (NTRS)

    Eglinton, G.; Gowar, A. P.; Jull, A. J. T.; Pillinger, C. T.; Agrell, S. O.; Agrell, J. E.; Long, J. V. P.; Bowie, S. H. U.; Simpson, P. R.; Beckinsale, R. D.

    1977-01-01

    Samples of Luna 16 and 20 have been separated according to size, visual appearance, density, and magnetic susceptibility. Selected aliquots were examined in eight British laboratories. The studies included mineralogy and petrology, selenochronology, magnetic characteristics, Mossbauer spectroscopy, oxygen isotope ratio determinations, cosmic ray track and thermoluminescence investigations, and carbon chemistry measurements. Luna 16 and 20 are typically mare and highland soils, comparing well with their Apollo counterparts, Apollo 11 and 16, respectively. Both soils are very mature (high free iron, carbide, and methane and cosmogenic Ar), while Luna 16, but not Luna 20, is characterized by a high content of glassy materials. An aliquot of anorthosite fragments, handpicked from Luna 20, had a gas retention age of about 4.3 plus or minus 0.1 Gy.

  3. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).

    PubMed

    Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti

    2015-05-01

    The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.
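    The multistage weight calculation described above (design weight = product of the reciprocals of the inclusion probabilities at each stage, with schools drawn with probability proportional to size) can be sketched as follows. All numbers in the example are hypothetical, not ERICA data.

```python
def inclusion_prob_pps(unit_size, stratum_total_size, n_selected):
    """First-stage inclusion probability under probability-proportional-
    to-size sampling of schools within a geographic stratum."""
    return min(1.0, n_selected * unit_size / stratum_total_size)

def design_weight(stage_probs):
    """Design weight: product of reciprocals of the stage-wise
    inclusion probabilities, as in the sampling design above."""
    w = 1.0
    for p in stage_probs:
        w *= 1.0 / p
    return w

# Hypothetical example: a school with 800 students in a stratum of
# 120,000 enrolled students, 40 schools drawn; 3 of 12 shift-grade
# combinations selected; 1 of 4 classes; all students in the class taken.
p_school = inclusion_prob_pps(800, 120_000, 40)
w = design_weight([p_school, 3 / 12, 1 / 4, 1.0])
print(round(w, 1))  # → 60.0
```

    In practice these base weights are then calibrated to enrollment projections by stratum, sex, and age, as the abstract notes.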

  4. Industrial Application of Valuable Materials Generated from PLK Rock-A Bauxite Mining Waste

    NASA Astrophysics Data System (ADS)

    Swain, Ranjita; Routray, Sunita; Mohapatra, Abhisek; Ranjan Patra, Biswa

    2018-03-01

    PLK rock was classified into two products after selective grinding to a particular size fraction. The rock was ground to below 45 μm and then classified in a hydrocyclone. The ground product was separated into different size fractions by varying the apex (spigot) and vortex-finder diameters, with a pressure gauge attached to measure the operating pressure. The yield of fines increased with increasing vortex-finder diameter. To increase the feed capacity of the hydrocyclone, a vortex-finder diameter of 11.1 mm and a spigot diameter of 8.0 mm were taken as the optimum condition for recovering fines from the PLK rock sample. The overflow sample contained 5.39% iron oxide (Fe2O3) with 0.97% TiO2, and the underflow sample contained 1.87% Fe2O3 with 2.39% TiO2. The cut point (separation size) of the overflow sample was 25 μm, and the separation efficiency, expressed as the imperfection I, corresponded to 6 μm. In this study, the iron oxide content of the underflow sample was less than 2%, which makes it suitable for refractory applications. The overflow sample is very fine and can also serve as a raw material for the ceramic and cosmetics industries.
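    The imperfection quoted above is a standard sharpness-of-cut measure derived from the hydrocyclone partition curve. A minimal sketch, using the conventional definitions (the d25/d50/d75 values in the test are arbitrary):

```python
def ecart_probable(d25, d75):
    """Probable error of separation Ep = (d75 - d25) / 2, in the same
    units as the particle sizes (here micrometres). d25 and d75 are the
    sizes reporting 25% and 75% to the underflow on the partition curve."""
    return (d75 - d25) / 2.0

def imperfection(d25, d50, d75):
    """Imperfection I = Ep / d50, a dimensionless sharpness of the cut;
    smaller values indicate a sharper separation."""
    return ecart_probable(d25, d75) / d50
```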

  5. Re-electrospraying splash-landed proteins and nanoparticles.

    PubMed

    Benner, W Henry; Lewis, Gregory S; Hering, Susanne V; Selgelke, Brent; Corzett, Michelle; Evans, James E; Lightstone, Felice C

    2012-03-06

    FITC-albumin, Lsr-F, or fluorescent polystyrene latex particles were electrosprayed from aqueous buffer and subjected to dispersion by differential electrical mobility at atmospheric pressure. A resulting narrow size cut of singly charged molecular ions or particles was passed through a condensation growth tube collector to create a flow stream of small water droplets, each carrying a single ion or particle. The droplets were splash landed (impacted) onto a solid or liquid temperature controlled surface. Small pools of droplets containing size-selected particles, FITC-albumin, or Lsr-F were recovered, re-electrosprayed, and, when analyzed a second time by differential electrical mobility, showed increased homogeneity. Transmission electron microscopy (TEM) analysis of the size-selected Lsr-F sample corroborated the mobility observation.

  6. Infraocclusion: Dental development and associated dental variations in singletons and twins.

    PubMed

    Odeh, Ruba; Townsend, Grant; Mihailidis, Suzanna; Lähdesmäki, Raija; Hughes, Toby; Brook, Alan

    2015-09-01

    The aim of this study was to investigate the prevalence of selected dental variations in association with infraocclusion, as well as determining the effects of infraocclusion on dental development and tooth size, in singletons and twins. Two samples were analysed. The first sample comprised 1454 panoramic radiographs of singleton boys and girls aged 8-11 years. The second sample comprised dental models of 202 pairs of monozygotic and dizygotic twins aged 8-11 years. Adobe Photoshop CS5 was used to construct reference lines and measure the extent of infraocclusion (in mm) of primary molars on the panoramic radiographs and on 2D images obtained from the dental models. The panoramic radiographs were examined for the presence of selected dental variations and to assess dental development following the Demirjian and Willems systems. The twins' dental models were measured to assess mesiodistal crown widths. In the singleton sample there was a significant association of canines in an altered position during eruption and the lateral incisor complex (agenesis and/or small tooth size) with infraocclusion (P<0.001), but there was no significant association between infraocclusion and agenesis of premolars. Dental age assessment revealed that dental development was delayed in individuals with infraocclusion compared to controls. The primary mandibular canines were significantly smaller in size in the infraoccluded group (P<0.05). The presence of other dental variations in association with infraocclusion, as well as delayed dental development and reduced tooth size, suggests the presence of a pleiotropic effect. The underlying aetiological factors may be genetic and/or epigenetic. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Technical Guidance for Conducting ASVAB Validation/Standards Studies in the U.S. Navy

    DTIC Science & Technology

    2015-02-01

    the criterion), we can compute the variance of X in the unrestricted group, S_x^2, and in the restricted (selected) group, s_x^2. In contrast, we ... well as the selected group, s_x^2. We also know the variance of Y in the selected group, s_y^2, and the correlation of X and Y in the selected ... and AS. Five levels of selection ratio (1.0, .8, .6, .4, and .2) and eight sample sizes (50, 75, 100, 150, 225, 350, 500, and 800) were considered
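    The unrestricted and restricted variances of the predictor named in the excerpt are the inputs to the classical correction for direct range restriction. As a hedged illustration (the report's exact procedure is not shown in this snippet), Thorndike's Case II formula is:

```python
import math

def correct_range_restriction(r_xy, S_x, s_x):
    """Thorndike Case II correction for direct range restriction on X:
    estimates the unrestricted validity from the restricted correlation
    r_xy, the unrestricted SD S_x, and the restricted SD s_x.
    Offered as the standard textbook formula, not necessarily the
    report's exact computation."""
    u = S_x / s_x  # range-restriction ratio
    return r_xy * u / math.sqrt(1.0 + r_xy**2 * (u**2 - 1.0))
```

    With no restriction (u = 1) the correlation is unchanged; the more the selected group's variance is compressed, the larger the upward correction.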

  8. Lack of size selectivity for paddlefish captured in hobbled gillnets

    USGS Publications Warehouse

    Scholten, G.D.; Bettoli, P.W.

    2007-01-01

    A commercial fishery for paddlefish Polyodon spathula caviar exists in Kentucky Lake, a reservoir on the lower Tennessee River. A 152-mm (bar-measure) minimum mesh size restriction on entanglement gear was enacted in 2002 and the minimum size limit was increased to 864 mm eye-fork length to reduce the possibility of recruitment overfishing. Paddlefish were sampled in 2003-2004 using experimental monofilament gillnets with panels of 89, 102, 127, 152, 178, and 203-mm meshes and the efficacy of the mesh size restriction was evaluated. Following the standards of commercial gear used in that fishery, nets were "hobbled" (i.e., 128 m ?? 3.6 m nets were tied down to 2.4 m; 91 m ?? 9.1 m nets were tied down to 7.6 m). The mean lengths of paddlefish (Ntotal = 576 fish) captured in each mesh were similar among most meshes and bycatch rates of sublegal fish did not vary with mesh size. Selectivity curves could not be modeled because the mean and modal lengths of fish captured in each mesh did not increase with mesh size. Ratios of fish girth to mesh perimeter (G:P) for individual fish were often less than 1.0 as a result of the largest meshes capturing small paddlefish. It is unclear whether lack of size selectivity for paddlefish was because the gillnets were hobbled, the unique morphology of paddlefish, or the fact that they swim with their mouths agape when filter feeding. The lack of size selectivity by hobbled gillnets fished in Kentucky Lake means that managers cannot influence the size of paddlefish captured by commercial gillnet gear by changing minimum mesh size regulations. ?? 2006 Elsevier B.V. All rights reserved.

  9. The Discovery of Single-Nucleotide Polymorphisms—and Inferences about Human Demographic History

    PubMed Central

    Wakeley, John; Nielsen, Rasmus; Liu-Cordero, Shau Neen; Ardlie, Kristin

    2001-01-01

    A method of historical inference that accounts for ascertainment bias is developed and applied to single-nucleotide polymorphism (SNP) data in humans. The data consist of 84 short fragments of the genome that were selected, from three recent SNP surveys, to contain at least two polymorphisms in their respective ascertainment samples and that were then fully resequenced in 47 globally distributed individuals. Ascertainment bias is the deviation, from what would be observed in a random sample, caused either by discovery of polymorphisms in small samples or by locus selection based on levels or patterns of polymorphism. The three SNP surveys from which the present data were derived differ both in their protocols for ascertainment and in the size of the samples used for discovery. We implemented a Monte Carlo maximum-likelihood method to fit a subdivided-population model that includes a possible change in effective size at some time in the past. Incorrectly assuming that ascertainment bias does not exist causes errors in inference, affecting both estimates of migration rates and historical changes in size. Migration rates are overestimated when ascertainment bias is ignored. However, the direction of error in inferences about changes in effective population size (whether the population is inferred to be shrinking or growing) depends on whether either the numbers of SNPs per fragment or the SNP-allele frequencies are analyzed. We use the abbreviation “SDL,” for “SNP-discovered locus,” in recognition of the genomic-discovery context of SNPs. When ascertainment bias is modeled fully, both the number of SNPs per SDL and their allele frequencies support a scenario of growth in effective size in the context of a subdivided population. If subdivision is ignored, however, the hypothesis of constant effective population size cannot be rejected. 
An important conclusion of this work is that, in demographic or other studies, SNP data are useful only to the extent that their ascertainment can be modeled. PMID:11704929
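    The simplest form of the ascertainment bias modeled above is that a SNP is only discovered if both alleles appear in the discovery sample, so rare variants are underrepresented. A minimal sketch of that ascertainment probability (a binomial approximation, not the study's full Monte Carlo likelihood):

```python
def prob_ascertained(p, n_disc):
    """Probability that a site with population allele frequency p is
    seen as polymorphic (at least one copy of each allele) in a
    discovery sample of n_disc chromosomes, assuming binomial sampling.
    Reweighting the observed frequency spectrum by the reciprocal of
    this probability sketches the bias correction."""
    return 1.0 - p**n_disc - (1.0 - p)**n_disc
```

    Because this probability is much lower for rare alleles, ignoring it inflates the apparent proportion of intermediate-frequency variants, which is exactly the distortion the study shows can flip inferences about migration and population growth.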

  10. Only pick the right grains: Modelling the bias due to subjective grain-size interval selection for chronometric and fingerprinting approaches.

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2016-04-01

    Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain" with partly even different definitions. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
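    End-member modelling separates a measured grain-size distribution into contributions from underlying transport-related distributions. As a toy stand-in for full EMMA, the two-end-member case has a closed-form least-squares solution; the distributions in the test are invented.

```python
def unmix_two_endmembers(sample, em1, em2):
    """Least-squares mixing proportion m such that
    sample ≈ m * em1 + (1 - m) * em2, clipped to [0, 1].
    A toy two-end-member special case of end-member modelling analysis
    (EMMA); the real method unmixes many samples into several
    end-members simultaneously."""
    num = sum((s - b) * (a - b) for s, a, b in zip(sample, em1, em2))
    den = sum((a - b) ** 2 for a, b in zip(em1, em2))
    m = num / den
    return max(0.0, min(1.0, m))
```

    Once the end-member loadings are known, a dating or fingerprinting size interval can be chosen to minimise the contribution of the "undesired" end-members.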

  11. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
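    The point-sampling estimation of Shannon's diversity described above can be sketched as a plug-in estimate from points dropped on a gridded land-cover map. The toy landscape below is a stand-in for the NILS maps; the plug-in estimator is biased (toward low values at small sample sizes), consistent with the bias behaviour the abstract reports.

```python
import math
import random

def shannon_index(proportions):
    """Shannon's diversity H' = -sum(p_i * ln p_i) over cover classes."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

def point_sample_shannon(landscape, n_points, seed=0):
    """Plug-in estimate of H' from n_points uniformly random sample
    points on a gridded land-cover map (toy sketch of the point-sampling
    design studied above)."""
    rng = random.Random(seed)
    rows, cols = len(landscape), len(landscape[0])
    counts = {}
    for _ in range(n_points):
        cls = landscape[rng.randrange(rows)][rng.randrange(cols)]
        counts[cls] = counts.get(cls, 0) + 1
    return shannon_index([c / n_points for c in counts.values()])
```

    Edge density needs the buffer-based counting described in the abstract rather than simple point classification, which is why its estimator behaves differently with buffer width.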

  12. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part II. Changes in Sampling Efficiency

    PubMed Central

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M.; Harper, Martin

    2015-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. 
In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the DO cyclone. However, for three models of pumps producing 30%, 56%, and 70% pulsations, substantial changes were confirmed. The GK2.69 cyclone showed a similar pattern to that of the DO cyclone, i.e. no change in sampling efficiency for the Legacy producing 15% pulsation and a substantial change for the Elite12 producing 41% pulsation. Pulse shape did not cause any change in sampling efficiency when compared to the single sine wave. The findings suggest that 25% pulsation at the inlet of the cyclone as measured by this test can be acceptable for the respirable particle collection. If this test is used in place of that currently in European standards (EN 1232–1997 and EN 12919-1999) or is used in any International Organization for Standardization standard, then a 25% pulsation criterion could be adopted. This work suggests that a 10% criterion as currently specified in the European standards for testing may be overly restrictive and not able to be met by many pumps on the market. Further work is recommended to determine which criterion would be applicable to this test if it is to be retained in its current form. PMID:24064963
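    The abstract above specifies only "a sigmoid function with three parameters" for the sampling-efficiency curves and point-wise bias maps against the reference cyclone. A hedged sketch of one common parameterization (a logistic in diameter; the actual functional form used in the study may differ):

```python
import math

def efficiency(d, d50, slope, emax=1.0):
    """Hypothetical three-parameter sigmoid for sampling efficiency
    versus aerodynamic diameter d (micrometres): efficiency falls from
    emax toward zero around the cut-point d50 with the given slope."""
    return emax / (1.0 + math.exp(slope * (d - d50)))

def bias_vs_reference(test_eff, ref_eff):
    """Point-wise relative bias of a test curve against the reference
    curve, the quantity mapped to the ±10% acceptance criterion."""
    return (test_eff - ref_eff) / ref_eff
```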

  13. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    PubMed

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. 
In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the DO cyclone. However, for three models of pumps producing 30%, 56%, and 70% pulsations, substantial changes were confirmed. The GK2.69 cyclone showed a similar pattern to that of the DO cyclone, i.e. no change in sampling efficiency for the Legacy producing 15% pulsation and a substantial change for the Elite12 producing 41% pulsation. Pulse shape did not cause any change in sampling efficiency when compared to the single sine wave. The findings suggest that 25% pulsation at the inlet of the cyclone as measured by this test can be acceptable for the respirable particle collection. If this test is used in place of that currently in European standards (EN 1232-1997 and EN 12919-1999) or is used in any International Organization for Standardization standard, then a 25% pulsation criterion could be adopted. This work suggests that a 10% criterion as currently specified in the European standards for testing may be overly restrictive and not able to be met by many pumps on the market. Further work is recommended to determine which criterion would be applicable to this test if it is to be retained in its current form.

  14. SW-846 Test Method 3511: Organic Compounds in Water by Microextraction

    EPA Pesticide Factsheets

    A procedure for extracting selected volatile and semivolatile organic compounds from water. The microscale approach minimizes sample size and solvent usage, thereby reducing the supply costs, health and safety risks, and waste generated.

  15. School Principals' Leadership Behaviours and Its Relation with Teachers' Sense of Self-Efficacy

    ERIC Educational Resources Information Center

    Mehdinezhad, Vali; Mansouri, Masoumeh

    2016-01-01

    The aim of this study was to investigate the relationship between school principals' leadership behaviours and teachers' sense of self-efficacy. The research method was descriptive and correlational. A sample of 254 teachers was selected by simple random proportional sampling. For data collection, the Teachers' Sense of Efficacy Scale of…

  16. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR FIELD USE OF THE PARTICULATE SAMPLER (UA-F-3.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the in-field use of the particulate sampling system (pumping, control unit, and size selective inlet impactors) for collecting samples of particulate matter from the air during a predetermined time period during the Arizona NHEXAS project an...

  17. The Emotions of Socialization-Related Learning: Understanding Workplace Adaptation as a Learning Process.

    ERIC Educational Resources Information Center

    Reio, Thomas G., Jr.

    The influence of selected discrete emotions on socialization-related learning and perception of workplace adaptation was examined in an exploratory study. Data were collected from 233 service workers in 4 small and medium-sized companies in metropolitan Washington, D.C. The sample members' average age was 32.5 years, and the sample's racial makeup…

  18. Analyses of sweep-up, ejecta, and fallback material from the 4250 metric ton high explosive test ''MISTY PICTURE''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohletz, K.H.; Raymond, R. Jr.; Rawson, G.

    1988-01-01

    The MISTY PICTURE surface burst was detonated at the White Sands Missile Range in May of 1987. The Los Alamos National Laboratory dust characterization program was expanded to help correlate and interrelate aspects of the overall MISTY PICTURE dust and ejecta characterization program. Pre-shot sampling of the test bed included composite samples from 15 to 75 m distance from Surface Ground Zero (SGZ) representing depths down to 2.5 m, interval samples from 15 to 25 m from SGZ representing depths down to 3 m, and samples of surface material (top 0.5 cm) out to distances of 190 m from SGZ. Sweep-up samples were collected in GREG/SNOB gages located within the DPR. All samples were dry-sieved between 8.0 mm and 0.045 mm (16 size fractions); selected samples were analyzed for fines by a centrifugal settling technique. The size distributions were analyzed using spectral decomposition based upon a sequential fragmentation model. Results suggest that the same particle size subpopulations are present in the ejecta, fallout, and sweep-up samples as are present in the pre-shot test bed. The particle size distribution in post-shot environments apparently can be modelled taking into account heterogeneities in the pre-shot test bed and dominant wind direction during and following the shot. 13 refs., 12 figs., 2 tabs.

  19. Conceptual data sampling for breast cancer histology image classification.

    PubMed

    Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir

    2017-10-01

    Data analytics have become increasingly complicated as the amount of data has increased. One technique used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It also competes with other sampling methods in terms of sample size and sample quality, represented in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Selecting promising treatments in randomized Phase II cancer trials with an active control.

    PubMed

    Cheung, Ying Kuen

    2009-01-01

    The primary objective of Phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of Phase II selection trials in which several experimental regimens are compared to an active control, with an objective to identify an experimental arm that is more effective than the control or to declare futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this study, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods, allowing frequent interim monitoring, offer a high likelihood of early trial termination and as such enhance enrollment feasibility. The termination and selection criteria have closed-form solutions and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to those of a single-stage design via simulations: the sample size requirement is reduced substantially and is feasible at an early stage of drug development.
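The sequential probability ratio test underlying such designs can be sketched for the simple two-arm normal case. This is the textbook SPRT, not the paper's multi-arm extension, and all parameters below are illustrative:

```python
import math
import random

def sprt_normal(xs, ys, delta=1.0, sigma=1.0, alpha=0.05, beta=0.1):
    """Textbook SPRT for H0: mean(x) - mean(y) = 0 vs. H1: = delta, with
    paired normal observations of known sigma (a sketch, not the paper's
    multi-arm procedure)."""
    upper = math.log((1 - beta) / alpha)   # cross this: select experimental arm
    lower = math.log(beta / (1 - alpha))   # cross this: declare futility
    llr = 0.0
    for x, y in zip(xs, ys):
        z = x - y
        # log-likelihood ratio increment for N(delta, sigma^2) vs N(0, sigma^2)
        llr += (delta * z - delta ** 2 / 2.0) / sigma ** 2
        if llr >= upper:
            return "select experimental"
        if llr <= lower:
            return "futility"
    return "continue"

random.seed(0)
xs = [random.gauss(1.0, 1.0) for _ in range(200)]   # experimental responses
ys = [random.gauss(0.0, 1.0) for _ in range(200)]   # control responses
print(sprt_normal(xs, ys))
```

Because monitoring happens after every pair, the expected number of observations at termination is well below a fixed single-stage design with the same error constraints, which is the enrollment advantage the abstract describes.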

  1. Mutation supply and the repeatability of selection for antibiotic resistance

    NASA Astrophysics Data System (ADS)

    van Dijk, Thomas; Hwang, Sungmin; Krug, Joachim; de Visser, J. Arjan G. M.; Zwart, Mark P.

    2017-10-01

    Whether evolution can be predicted is a key question in evolutionary biology. Here we set out to better understand the repeatability of evolution, which is a necessary condition for predictability. We explored experimentally the effect of mutation supply and the strength of selective pressure on the repeatability of selection from standing genetic variation. Different sizes of mutant libraries of antibiotic resistance gene TEM-1 β-lactamase in Escherichia coli, generated by error-prone PCR, were subjected to different antibiotic concentrations. We determined whether populations went extinct or survived, and sequenced the TEM gene of the surviving populations. The distribution of mutations per allele in our mutant libraries followed a Poisson distribution. Extinction patterns could be explained by a simple stochastic model that assumed the sampling of beneficial mutations was key for survival. In most surviving populations, alleles containing at least one known large-effect beneficial mutation were present. These genotype data also support a model which only invokes sampling effects to describe the occurrence of alleles containing large-effect driver mutations. Hence, evolution is largely predictable given cursory knowledge of mutational fitness effects, the mutation rate and population size. There were no clear trends in the repeatability of selected mutants when we considered all mutations present. However, when only known large-effect mutations were considered, the outcome of selection is less repeatable for large libraries, in contrast to expectations. We show experimentally that alleles carrying multiple mutations selected from large libraries confer higher resistance levels relative to alleles with only a known large-effect mutation, suggesting that the scarcity of high-resistance alleles carrying multiple mutations may contribute to the decrease in repeatability at large library sizes.
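The stochastic survival model described above, in which a population survives only if at least one allele in the mutant library carries a beneficial mutation, has a simple closed form under the Poisson assumption. The parameter values below are illustrative, not the study's estimates:

```python
import math

def survival_probability(library_size, mut_rate, p_beneficial):
    """P(at least one allele in the library carries a beneficial mutation),
    assuming mutations per allele ~ Poisson(mut_rate), each independently
    beneficial with probability p_beneficial (Poisson thinning)."""
    # Beneficial mutations per allele ~ Poisson(mut_rate * p_beneficial),
    # so P(an allele carries none) = exp(-mut_rate * p_beneficial).
    p_none = math.exp(-mut_rate * p_beneficial)
    return 1.0 - p_none ** library_size

# Larger libraries sample beneficial mutations more often, so they survive
# stronger selection more frequently (illustrative parameters)
for n in (100, 1000, 10000):
    print(n, survival_probability(n, 2.0, 0.001))
```

The monotone increase with library size mirrors the extinction patterns the abstract attributes to sampling of beneficial mutations alone.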

  2. Experimental Design in Clinical 'Omics Biomarker Discovery.

    PubMed

    Forshed, Jenny

    2017-11-03

    This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities as accurate as possible from biochemical analyses, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection, that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this Tutorial is to help translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.

  3. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
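The nearest-neighbor density ratio idea for covariate shift can be sketched in a simplified form: for each training point, count how many of its nearest neighbors in the pooled sample come from the test set. This is an illustration of the general idea on synthetic data, not the exact estimator or model-selection procedure of the paper:

```python
import numpy as np

def knn_density_ratio(train, test, k=10):
    """Simplified k-NN estimate of w(x) ~ p_test(x) / p_train(x) at each
    training point, from the fraction of test points among its k nearest
    neighbors in the pooled sample (a sketch, not the paper's estimator)."""
    pooled = np.vstack([train, test])
    labels = np.array([0] * len(train) + [1] * len(test))   # 1 = test point
    weights = []
    for x in train:
        dist = np.linalg.norm(pooled - x, axis=1)
        nn = np.argsort(dist)[1:k + 1]          # skip the point itself
        frac_test = labels[nn].mean()
        # Convert the neighbor fraction to a ratio, correcting for set sizes
        w = (frac_test / max(1.0 - frac_test, 1e-12)) * (len(train) / len(test))
        weights.append(w)
    return np.array(weights)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 1))   # labeled training distribution
test = rng.normal(1.0, 1.0, size=(500, 1))    # shifted unlabeled distribution
w = knn_density_ratio(train, test, k=25)
# Training points that resemble the test distribution get up-weighted
print(w[train[:, 0] > 1].mean() > w[train[:, 0] < -1].mean())
```

In the paper's setting, the neighborhood size k would itself be chosen by cross-validation on a criterion that stays unbiased under covariate shift.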

  4. Overview of the Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Dillman, Robert; Corliss, James

    2008-01-01

    NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.

  5. Genetic Mapping of Fixed Phenotypes: Disease Frequency as a Breed Characteristic

    PubMed Central

    Jones, Paul; Martin, Alan; Ostrander, Elaine A.; Lark, Karl G.

    2009-01-01

    Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis. PMID:19321632

  6. Computerized adaptive testing: the capitalization on chance problem.

    PubMed

    Olea, Julio; Barrada, Juan Ramón; Abad, Francisco J; Ponsoda, Vicente; Cuevas, Lara

    2012-03-01

    This paper describes several simulation studies that examine the effects of capitalization on chance in item selection and ability estimation in CAT, employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000, and 2000 subjects), as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small-sample calibration conditions. For broad ranges of theta, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE (theta). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
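The 3-parameter logistic model and the information-maximizing item selection at the heart of such CAT simulations can be sketched as follows; the item bank is a toy with made-up (a, b, c) parameters. Capitalization on chance arises because this same argmax step favors items whose discrimination parameters are overestimated:

```python
import math

def p_3pl(theta, a, b, c):
    """3-parameter logistic model: probability of a correct response
    (discrimination a, difficulty b, guessing c; scaling constant 1.7)."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return (1.7 * a) ** 2 * (q / p) * ((p - c) / (1 - c)) ** 2

# Toy bank of (a, b, c) triples; CAT administers the most informative item
bank = [(1.0, -1.0, 0.2), (1.5, 0.0, 0.2), (0.8, 1.0, 0.2)]
theta = 0.0
best = max(range(len(bank)), key=lambda i: item_information(theta, *bank[i]))
print(best)   # index of the item selected at theta = 0
```

With estimated rather than true parameters, the selected item's information is biased upward, which is the overestimation of precision the abstract reports.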

  7. Genetic mapping of fixed phenotypes: disease frequency as a breed characteristic.

    PubMed

    Chase, Kevin; Jones, Paul; Martin, Alan; Ostrander, Elaine A; Lark, Karl G

    2009-01-01

    Traits that have been stringently selected to conform to specific criteria in a closed population are phenotypic stereotypes. In dogs, Canis familiaris, such stereotypes have been produced by breeding for conformation, performance (behaviors), etc. We measured phenotypes on a representative sample to establish breed stereotypes. DNA samples from 147 dog breeds were used to characterize single nucleotide polymorphism allele frequencies for association mapping of breed stereotypes. We identified significant size loci (quantitative trait loci [QTLs]), implicating candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2, etc.). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Behavioral loci for herding, pointing, and boldness implicated candidate genes appropriate to behavior (e.g., MC2R, DRD1, and PCDH9). Significant loci for longevity, a breed characteristic inversely correlated with breed size, were identified. The power of this approach to identify loci regulating the incidence of specific polygenic diseases is demonstrated by the association of a specific IGF1 haplotype with hip dysplasia, patella luxation, and pancreatitis.

  8. Operationalizing hippocampal volume as an enrichment biomarker for amnestic mild cognitive impairment trials: effect of algorithm, test-retest variability, and cut point on trial cost, duration, and sample size.

    PubMed

    Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J

    2014-04-01

    The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV-based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
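The sample-size side of the tradeoff can be illustrated with a standard two-arm normal-approximation formula: enriching by a biomarker cut point increases the expected treatment effect in the enrolled cohort, which shrinks the required sample. The effect sizes and SD below are hypothetical, not the ADNI-derived values:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Two-arm sample size for detecting a mean difference `effect` with
    common SD `sd` (normal approximation; illustrative numbers only)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / effect) ** 2)

# Hypothetical 2-year decline difference (points) and SD, before and after
# enrichment by a hippocampal-volume cut point
print(n_per_arm(1.0, 4.0))   # all-comers
print(n_per_arm(1.6, 4.0))   # enriched cohort: larger effect, fewer subjects
```

The practical tradeoff in the abstract comes from the other side of this equation: a stricter cut point raises the screen-fail rate, so screening cost and duration grow even as the randomized sample shrinks.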

  9. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry.

    PubMed

    Hartmann, Georg; Schuster, Michael

    2013-01-25

    The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1) is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. An intercomparison of the taxonomic and size composition of tropical macrozooplankton and micronekton collected using three sampling gears

    NASA Astrophysics Data System (ADS)

    Kwong, Lian E.; Pakhomov, Evgeny A.; Suntsov, Andrey V.; Seki, Michael P.; Brodeur, Richard D.; Pakhomova, Larisa G.; Domokos, Réka

    2018-05-01

    A micronekton intercalibration experiment was conducted off the southwest coast of Oahu Island, Hawaii in October 2004. Day and night samples were collected in the epipelagic and mesopelagic zones using three micronekton sampling gears: the Cobb Trawl, the Isaacs-Kidd Midwater Trawl (IKMT), and the Hokkaido University Frame Trawl (HUFT). Taxonomic composition and contribution by main size groups to total catch varied among gear types. However, the three gears exhibited similar taxonomic composition for macrozooplankton and micronekton ranging from 20 to 100 mm length (MM20-100). The HUFT and IKMT captured more mesozooplankton and small MM20-100, while the Cobb trawl selected towards larger MM20-100 and nekton. Taxonomic composition was described and inter-compared among gears. The relative efficacy of the three gears was assessed, and size dependent intercalibration coefficients were developed for MM20-100.

  11. CO2 hydrogenation to methanol on supported Au catalysts under moderate reaction conditions: support and particle size effects.

    PubMed

    Hartadi, Yeusy; Widmann, Daniel; Behm, R Jürgen

    2015-02-01

    The potential of metal oxide supported Au catalysts for the formation of methanol from CO2 and H2 under conditions favorable for decentralized and local conversion, which could be concepts for chemical energy storage, was investigated. Significant differences in the catalytic activity and selectivity of Au/Al2O3, Au/TiO2, Au/ZnO, and Au/ZrO2 catalysts for methanol formation under moderate reaction conditions at a pressure of 5 bar and temperatures between 220 and 240 °C demonstrate pronounced support effects. A high selectivity (>50 %) for methanol formation was obtained only for Au/ZnO. Furthermore, measurements on Au/ZnO samples with different Au particle sizes reveal distinct Au particle size effects: although the activity increases strongly with the decreasing particle size, the selectivity decreases. The consequences of these findings for the reaction mechanism and for the potential of Au/ZnO catalysts for chemical energy storage and a "green" methanol technology are discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Microcephaly genes evolved adaptively throughout the evolution of eutherian mammals

    PubMed Central

    2014-01-01

    Background Genes associated with the neurodevelopmental disorder microcephaly display a strong signature of adaptive evolution in primates. Comparative data suggest a link between selection on some of these loci and the evolution of primate brain size. Whether or not either positive selection or this phenotypic association are unique to primates is unclear, but recent studies in cetaceans suggest at least two microcephaly genes evolved adaptively in other large brained mammalian clades. Results Here we analyse the evolution of seven microcephaly loci, including three recently identified loci, across 33 eutherian mammals. We find extensive evidence for positive selection having acted on the majority of these loci not just in primates but also across non-primate mammals. Furthermore, the patterns of selection in major mammalian clades are not significantly different. Using phylogenetically corrected comparative analyses, we find that the evolution of two microcephaly loci, ASPM and CDK5RAP2, are correlated with neonatal brain size in Glires and Euungulata, the two most densely sampled non-primate clades. Conclusions Together with previous results, this suggests that ASPM and CDK5RAP2 may have had a consistent role in the evolution of brain size in mammals. Nevertheless, several limitations of currently available data and gene-phenotype tests are discussed, including sparse sampling across large evolutionary distances, averaging gene-wide rates of evolution, potential phenotypic variation and evolutionary reversals. We discuss the implications of our results for studies of the genetic basis of brain evolution, and explicit tests of gene-phenotype hypotheses. PMID:24898820

  13. Selection of forest canopy gaps by male Cerulean Warblers in West Virginia

    USGS Publications Warehouse

    Perkins, Kelly A.; Wood, Petra Bohall

    2014-01-01

    Forest openings, or canopy gaps, are an important resource for many forest songbirds, such as Cerulean Warblers (Setophaga cerulea). We examined canopy gap selection by this declining species to determine if male Cerulean Warblers selected particular sizes, vegetative heights, or types of gaps. We tested whether these parameters differed among territories, territory core areas, and randomly-placed sample plots. We used enhanced territory mapping techniques (burst sampling) to define habitat use within the territory. Canopy gap densities were higher within core areas of territories than within territories or random plots, indicating that Cerulean Warblers selected habitat within their territories with the highest gap densities. Selection of regenerating gaps with woody vegetation >12 m within the gap, and canopy heights >24 m surrounding the gap, occurred within territory core areas. These findings differed between two sites, indicating that gap selection may vary based on forest structure. Differences were also found regarding the placement of territories with respect to gaps. Larger gaps, such as wildlife food plots, were located on the periphery of territories more often than other types and sizes of gaps, while smaller gaps, such as treefalls, were located within territory boundaries more often than expected. The creation of smaller canopy gaps, <100 m2, within dense stands is likely compatible with forest management for this species.

  14. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, and chose our top-performing methods to form the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, even in low effect size datasets and across different genetic architectures.
Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Via simulation studies, we showed that selecting variables with a collective feature selection approach helps select true-positive epistatic variables more frequently than applying any single feature selection method, and we demonstrated the effectiveness of collective feature selection along with a comparison of many methods. We also applied our method to identify non-linear networks associated with obesity.
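The "union of top-ranked features" idea itself is straightforward to sketch. The method names and rankings below are hypothetical placeholders, not output from MDR, Ranger, or gradient boosting:

```python
def collective_select(rankings, top_frac=0.01):
    """Union of the top `top_frac` features from each method's ranking
    (each ranking is a list of feature names, best first); a sketch of the
    collective feature selection idea."""
    selected = set()
    for ranked in rankings:
        k = max(1, int(len(ranked) * top_frac))
        selected.update(ranked[:k])
    return selected

# Hypothetical rankings over features f0..f99 from three methods
mdr    = [f"f{i}" for i in [7, 3, 42] + [i for i in range(100) if i not in (7, 3, 42)]]
ranger = [f"f{i}" for i in [3, 55, 7] + [i for i in range(100) if i not in (3, 55, 7)]]
gbm    = [f"f{i}" for i in [42, 3, 8] + [i for i in range(100) if i not in (42, 3, 8)]]

print(sorted(collective_select([mdr, ranger, gbm], top_frac=0.02)))
```

Taking the union rather than the intersection keeps any variable that at least one method ranks highly, which is why the approach recovers epistatic variables that individual rankers miss.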

  15. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in the EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  16. Bed-material characteristics of the Sacramento–San Joaquin Delta, California, 2010–13

    USGS Publications Warehouse

    Marineau, Mathieu D.; Wright, Scott A.

    2017-02-10

    The characteristics of bed material at selected sites within the Sacramento–San Joaquin Delta, California, during 2010–13 are described in a study conducted by the U.S. Geological Survey in cooperation with the Bureau of Reclamation. During 2010‒13, six complete sets of samples were collected. Samples were initially collected at 30 sites; however, starting in 2012, samples were collected at 7 additional sites. These sites are generally collocated with an active streamgage. At all but one site, a separate bed-material sample was collected at three locations within the channel (left, right, and center). Bed-material samples were collected using either a US BMH–60 or a US BM–54 (for sites with higher stream velocity) cable-suspended, scoop sampler. Samples from each location were oven-dried and sieved. Bed material finer than 2 millimeters was subsampled using a sieving riffler and processed using a Beckman Coulter LS 13–320 laser diffraction particle-size analyzer. To determine the organic content of the bed material, the loss on ignition method was used for one subsample from each location. Particle-size distributions are presented as cumulative percent finer than a given size. Median and 90th-percentile particle size, and the percentage of subsample mass lost using the loss on ignition method for each sample are also presented in this report.
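The percentile statistics reported for such bed-material data (median and 90th-percentile particle sizes) follow directly from the cumulative percent-finer curve. A sketch with hypothetical sieve masses, interpolating between sieve openings on a log scale:

```python
import numpy as np

# Sieve analysis sketch: opening sizes and mass retained on each sieve
# (hypothetical grams, coarsest sieve first)
sieve_mm = np.array([8.0, 2.0, 0.5, 0.25, 0.125, 0.063, 0.045])
retained_g = np.array([0.0, 5.0, 20.0, 30.0, 25.0, 15.0, 5.0])

# Cumulative percent finer than each sieve opening
finer_pct = 100.0 * (1.0 - np.cumsum(retained_g) / retained_g.sum())

# D50 and D90: particle sizes at 50% and 90% finer, by log-linear
# interpolation (np.interp needs ascending x, hence the reversals)
log_d = np.log10(sieve_mm)
d50 = 10 ** np.interp(50.0, finer_pct[::-1], log_d[::-1])
d90 = 10 ** np.interp(90.0, finer_pct[::-1], log_d[::-1])
print(d50, d90)   # median and 90th-percentile sizes in mm
```

Material passing the finest sieve would in practice be sized separately (here by laser diffraction) and merged into the same percent-finer curve before computing percentiles.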

  17. Fossil shrews from Honduras and their significance for late glacial evolution in body size (Mammalia: Soricidae: Cryptotis)

    USGS Publications Warehouse

    Woodman, N.; Croft, D.A.

    2005-01-01

    Our study of mammalian remains excavated in the 1940s from McGrew Cave, north of Copan, Honduras, yielded an assemblage of 29 taxa that probably accumulated predominantly as the result of predation by owls. Among the taxa present are three species of small-eared shrews, genus Cryptotis. One species, Cryptotis merriami, is relatively rare among the fossil remains. The other two shrews, Cryptotis goodwini and Cryptotis orophila, are abundant and exhibit morphometrical variation distinguishing them from modern populations. Fossils of C. goodwini are distinctly and consistently smaller than modern members of the species. To quantify the size differences, we derived common measures of body size for fossil C. goodwini using regression models based on modern samples of shrews in the Cryptotis mexicana-group. Estimated mean length of head and body for the fossil sample is 72-79 mm, and estimated mean mass is 7.6-9.6 g. These numbers indicate that the fossil sample averaged 6-14% smaller in head and body length and 39-52% less in mass than the modern sample, and that increases of 6-17% in head and body length and 65-108% in mass would have been required to reach the mean body size of the modern sample. Conservative estimates of fresh (wet) food intake based on mass indicate that such a size increase would require a 37-58% increase in daily food consumption. In contrast to C. goodwini, fossil C. orophila from the cave is not different in mean body size from modern samples. The fossil sample does, however, show slightly greater variation in size than is currently present throughout the modern geographical distribution of the taxon. Moreover, variation in some other dental and mandibular characters is more constrained, exhibiting a more direct relationship to overall size. Our study of these species indicates that North American shrews have not all been static in size through time, as suggested by some previous work with fossil soricids.
Lack of stratigraphic control within the site and our failure to obtain reliable radiometric dates on remains restrict our opportunities to place the site in a firm temporal context. However, the morphometrical differences we document for fossil C. orophila and C. goodwini show them to be distinct from modern populations of these shrews. Some other species of fossil mammals from McGrew Cave exhibit distinct size changes of the magnitudes experienced by many northern North American and some Mexican mammals during the transition from late glacial to Holocene environmental conditions, and it is likely that at least some of the remains from the cave are late Pleistocene in age. One curious factor is that, whereas most mainland mammals that exhibit large-scale size shifts during the late glacial/postglacial transition experienced dwarfing, C. goodwini increased in size. The lack of clinal variation in modern C. goodwini supports the hypothesis that size evolution can result from local selection rather than from cline translocation. Models of size change in mammals indicate that increased size, such as that observed for C. goodwini, is a likely consequence of increased availability of resources and, thereby, a relaxation of selection during critical times of the year.
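    The paired shrinkage/recovery percentages quoted in the abstract approximately follow the identity that a fractional decrease p must be offset by an increase of p/(1 − p). A quick check:

```python
# The abstract pairs a 6-14% decrease in length with a required 6-17%
# increase, and a 39-52% decrease in mass with a 65-108% increase.
# These approximately follow the identity below.

def recovery_increase(pct_decrease):
    """Fractional increase needed to undo a fractional decrease."""
    return pct_decrease / (1.0 - pct_decrease)

for p in (0.06, 0.14, 0.39, 0.52):
    print(f"{p:.0%} smaller -> {recovery_increase(p):.1%} increase needed")
```

    For example, a 52% loss of mass requires roughly a 108% gain to recover, which matches the upper end of the quoted range.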

  18. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A: Research Plan

    DTIC Science & Technology

    1983-05-01

    occur. 4) It is also true that during a given time period, at a given base, not all of the people in the sample will actually be available for testing ... taken sample sizes into consideration, we currently estimate that, with few exceptions, we will have adequate samples to perform the analysis of simple ... Balanced Half Sample Replications (BHSR). His analyses of simple cases have shown that this method is substantially more efficient than the

  19. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that only a small number of individuals from a geographical/ethnic population are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  20. An automatic optimum kernel-size selection technique for edge enhancement

    USGS Publications Warehouse

    Chavez, Pat S.; Bauer, Brian P.

    1982-01-01

    Edge enhancement is a technique that can be considered, to a first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals so that high-frequency information cannot be recorded at the same precision as lower frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine detail information that does exist in the digital image, but the filter size is dependent on the type of area being processed. A technique has been developed by the authors that uses the horizontal first difference to automatically select the optimum kernel-size that should be used to enhance the edges that are contained in the image. 
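    The idea of steering kernel size by the horizontal first difference can be sketched as follows. The activity measure, the thresholds, and the candidate sizes here are illustrative assumptions, not the algorithm of Chavez and Bauer:

```python
import numpy as np

def suggest_kernel_size(image, sizes=(3, 5, 9, 15)):
    """Choose an edge-enhancement kernel size from the mean absolute
    horizontal first difference, a rough proxy for spatial frequency:
    busy areas get a small kernel, smooth areas a large one. The
    activity measure and thresholds are illustrative, not the paper's."""
    img = np.asarray(image, dtype=float)
    activity = np.abs(np.diff(img, axis=1)).mean() / max(float(np.ptp(img)), 1.0)
    for threshold, size in zip((0.15, 0.08, 0.03), sizes):
        if activity >= threshold:
            return size
    return sizes[-1]  # smoothest areas get the largest kernel

rng = np.random.default_rng(0)
noisy = rng.integers(0, 255, (64, 64))                  # high-frequency scene
smooth = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))  # gradual ramp
print(suggest_kernel_size(noisy), suggest_kernel_size(smooth))
```

    In practice the selection would be applied per region rather than per image, since the filter size depends on the type of area being processed.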

  1. Estimation of the bottleneck size in Florida panthers

    USGS Publications Warehouse

    Culver, M.; Hedrick, P.W.; Murphy, K.; O'Brien, S.; Hornocker, M.G.

    2008-01-01

    We have estimated the extent of genetic variation in museum (1890s) and contemporary (1980s) samples of Florida panthers Puma concolor coryi for both nuclear loci and mtDNA. The microsatellite heterozygosity in the contemporary sample was only 0.325 times that of the museum samples, although our sample size and number of loci are limited. Support for this estimate is provided by a sample of 84 microsatellite loci in contemporary Florida panthers and Idaho pumas Puma concolor hippolestes, in which the contemporary Florida panther sample had only 0.442 times the heterozygosity of Idaho pumas. The estimated diversities in mtDNA in the museum and contemporary samples were 0.600 and 0.000, respectively. Using a population genetics approach, we estimated that, to reduce either the microsatellite heterozygosity or the mtDNA diversity this much in a period of c. 80 years during the 20th century when the numbers were thought to be low, a very small bottleneck size of c. 2 for several generations and a small effective population size in the other generations are necessary. Using demographic data from Yellowstone pumas, we estimated the ratio of effective to census population size to be 0.315. Using this ratio, the census population size in the Florida panthers necessary to explain the loss of microsatellite variation was c. 41 for the non-bottleneck generations and 6.2 for the two bottleneck generations. These low bottleneck population sizes and the concomitant reduced effectiveness of selection are probably responsible for the high frequency of several detrimental traits in Florida panthers, namely undescended testicles and poor sperm quality. The recent intensive monitoring both before and after the introduction of Texas pumas in 1995 will make the recovery and genetic restoration of Florida panthers a classic study of an endangered species.
Our estimates of the bottleneck size responsible for the loss of genetic variation in the Florida panther complete an unknown aspect of this account. © 2008 The Authors. Journal compilation © 2008 The Zoological Society of London.
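    The bottleneck inference rests on the standard drift result that expected heterozygosity decays by a factor (1 − 1/(2Ne)) each generation. A sketch with illustrative generation counts and sizes chosen to echo the abstract's figures (Ne = 2 at the bottleneck, Ne ≈ 0.315 × 41 ≈ 13 otherwise), not the authors' exact fit:

```python
# Drift reduces expected heterozygosity by (1 - 1/(2*Ne)) per generation:
# H_t = H_0 * prod(1 - 1/(2*Ne_i)). The history below is illustrative.

def het_retained(ne_by_generation):
    """Fraction of initial heterozygosity expected to remain."""
    h = 1.0
    for ne in ne_by_generation:
        h *= 1.0 - 1.0 / (2.0 * ne)
    return h

history = [2, 2] + [13] * 14  # two bottleneck generations, 14 at Ne = 13
print(f"heterozygosity retained: {het_retained(history):.3f}")
```

    With these assumed values the retained fraction comes out near the 0.325 ratio reported for the contemporary sample, illustrating how severe a bottleneck must be to erase that much variation in roughly 16 generations.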

  2. Dry heat effects on survival of indigenous soil particle microflora and particle viability studies of Kennedy Space Center soil

    NASA Technical Reports Server (NTRS)

    Ruschmeyer, O. R.; Pflug, I. J.; Gove, R.; Heisserer, Y.

    1975-01-01

    Research efforts were concentrated on attempts to obtain data concerning the dry heat resistance of particle microflora in Kennedy Space Center soil samples. The in situ dry heat resistance profiles at selected temperatures for the aggregate microflora on soil particles of certain size ranges were determined. Viability profiles of older soil samples were compared with more recently stored soil samples. The effect of increased particle numbers on viability profiles after dry heat treatment was investigated. These soil particle viability data for various temperatures and times provide information on the soil microflora response to heat treatment and are useful in making selections for spacecraft sterilization cycles.

  3. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
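    The contrast between true random and convenience sampling can be reproduced in a small simulation. The population, the overnight ambulance skew, and all rates below are assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 21662  # population size, matching the study's visit count

# Hypothetical population: arrival hour, plus an arrival-mode variable
# that is assumed to skew toward ambulance arrivals overnight.
hour = rng.integers(0, 24, N)
p_ambulance = np.where((hour < 8) | (hour >= 20), 0.35, 0.15)
ambulance = rng.random(N) < p_ambulance
true_rate = ambulance.mean()

def sample_rate(mask=None, n=400):
    """Ambulance proportion in a sample of n visits drawn either from
    the whole population (true random) or from a restricted window."""
    idx = np.arange(N) if mask is None else np.flatnonzero(mask)
    picked = rng.choice(idx, size=n, replace=False)
    return ambulance[picked].mean()

random_rate = sample_rate()                             # true random
business_rate = sample_rate((hour >= 8) & (hour < 17))  # "business hours"
print(f"population: {true_rate:.3f}  random: {random_rate:.3f}  "
      f"business hours: {business_rate:.3f}")
```

    The true random sample tracks the population proportion, while the business-hours sample systematically misses the overnight ambulance load, mirroring the kind of arrival-mode bias the study reports.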

  4. Skills Acquisition in Plantain Flour Processing Enterprises: A Validation of Training Modules for Senior Secondary Schools

    ERIC Educational Resources Information Center

    Udofia, Nsikak-Abasi; Nlebem, Bernard S.

    2013-01-01

    This study was to validate training modules that can help provide requisite skills for Senior Secondary school students in plantain flour processing enterprises for self-employment and to enable them pass their examination. The study covered Rivers State. Purposive sampling technique was used to select a sample size of 205. Two sets of structured…

  5. Differential Item Functioning for Accommodated Students with Disabilities: Effect of Differences in Proficiency Distributions

    ERIC Educational Resources Information Center

    Quesen, Sarah

    2016-01-01

    When studying differential item functioning (DIF) with students with disabilities (SWD) focal groups typically suffer from small sample size, whereas the reference group population is usually large. This makes it possible for a researcher to select a sample from the reference population to be similar to the focal group on the ability scale. Doing…

  6. Reduction of Racial Disparities in Prostate Cancer

    DTIC Science & Technology

    2005-12-01

    erectile dysfunction, and female sexual dysfunction). Wherever possible, the questions and scales employed on BACH were selected from published ... Methods. A racially and ethnically diverse community-based survey of adults aged 30-79 years in Boston, Massachusetts. The BACH survey has ... recruited adults in three racial/ethnic groups: Latino, African American, and White using a stratified cluster sample. The target sample size is equally

  7. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results to show that the performance degradation follows a logistic function.

  8. Sexual selection and allometry: a critical reappraisal of the evidence and ideas.

    PubMed

    Bonduriansky, Russell

    2007-04-01

    One of the most pervasive ideas in the sexual selection literature is the belief that sexually selected traits almost universally exhibit positive static allometries (i.e., within a sample of conspecific adults, larger individuals have disproportionally larger traits). In this review, I show that this idea is contradicted by empirical evidence and theory. Although positive allometry is a typical attribute of some sexual traits in certain groups, the preponderance of positively allometric sexual traits in the empirical literature apparently results from a sampling bias reflecting a fascination with unusually exaggerated (bizarre) traits. I review empirical examples from a broad range of taxa illustrating the diversity of allometric patterns exhibited by signal, weapon, clasping and genital traits, as well as nonsexual traits. This evidence suggests that positive allometry may be the exception rather than the rule in sexual traits, that directional sexual selection does not necessarily lead to the evolution of positive allometry and, conversely, that positive allometry is not necessarily a consequence of sexual selection, and that many sexual traits exhibit sex differences in allometric intercept rather than slope. Such diversity in the allometries of secondary sexual traits is to be expected, given that optimal allometry should reflect resource allocation trade-offs, and patterns of sexual and viability selection on both trait size and body size. An unbiased empirical assessment of the relation between sexual selection and allometry is an essential step towards an understanding of this diversity.
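    Static allometry is conventionally estimated as the slope of log(trait size) on log(body size) among conspecific adults, with a slope above 1 indicating positive allometry and below 1 negative allometry. A simulated sketch (the data and the true slopes are illustrative, not drawn from the review):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated conspecific adults (illustrative data, not from the review).
log_body = rng.normal(0.0, 0.15, 200)                       # log body size
log_ornament = 2.0 * log_body + rng.normal(0.0, 0.05, 200)  # exaggerated trait
log_genital = 0.3 * log_body + rng.normal(0.0, 0.05, 200)   # near size-invariant

def allometric_slope(log_trait, log_size):
    """OLS slope of log(trait) on log(body size): > 1 means positive
    static allometry, < 1 negative allometry."""
    slope, _intercept = np.polyfit(log_size, log_trait, 1)
    return slope

print(allometric_slope(log_ornament, log_body))  # near 2: positive allometry
print(allometric_slope(log_genital, log_body))   # near 0.3: negative allometry
```

    The near-isometric or shallow slope typical of genital traits, alongside the steep slope of an exaggerated ornament, illustrates the diversity of patterns the review documents.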

  9. Investigating the unification of LOFAR-detected powerful AGN in the Boötes field

    NASA Astrophysics Data System (ADS)

    Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.

    2017-08-01

    Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.

  10. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.

  11. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    PubMed

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance over that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.

  12. How Many Fish Need to Be Measured to Effectively Evaluate Trawl Selectivity?

    PubMed Central

    Santos, Juan; Sala, Antonello

    2016-01-01

    The aim of this study was to provide practitioners working with trawl selectivity with general and easily understandable guidelines regarding the fish sampling effort necessary during sea trials. In particular, we focused on how many fish would need to be caught and length measured in a trawl haul in order to assess the selectivity parameters of the trawl at a designated uncertainty level. We also investigated the dependency of this uncertainty level on the experimental method used to collect data and on the potential effects of factors such as the size structure in the catch relative to the size selection of the gear. We based this study on simulated data created from two different fisheries: the Barents Sea cod (Gadus morhua) trawl fishery and the Mediterranean Sea multispecies trawl fishery represented by red mullet (Mullus barbatus). We used these two completely different fisheries to obtain results that can be used as general guidelines for other fisheries. We found that the uncertainty in the selection parameters decreased with increasing number of fish measured and that this relationship could be described by a power model. The sampling effort needed to achieve a specific uncertainty level for the selection parameters was always lower for the covered codend method compared to the paired-gear method. In many cases, the number of fish that would need to be measured to maintain a specific uncertainty level was around 10 times higher for the paired-gear method than for the covered codend method. The trends observed for the effect of sampling effort in the two fishery cases investigated were similar; therefore the guidelines presented herein should be applicable to other fisheries. PMID:27560696
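    The covered-codend setup and the decline of parameter uncertainty with the number of fish measured can be mimicked in a small simulation. The logistic retention curve is standard in trawl selectivity work, but the length distribution, the true L50, and the crude grid-search estimator below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def logistic_retention(length, l50=40.0, sr=8.0):
    """Probability the codend retains a fish of this length (cm); l50 is
    the 50%-retention length and sr the selection range (L75 - L25)."""
    return 1.0 / (1.0 + np.exp(-2.0 * np.log(3.0) * (length - l50) / sr))

def estimate_l50(n_fish):
    """One simulated covered-codend haul: all n_fish are measured, each
    is retained or escapes per the logistic curve, and L50 is recovered
    by a grid search over the Bernoulli log-likelihood (sr held fixed)."""
    lengths = rng.normal(42.0, 10.0, n_fish)
    retained = rng.random(n_fish) < logistic_retention(lengths)
    grid = np.linspace(30.0, 50.0, 201)
    ll = [np.log(np.clip(np.where(retained, logistic_retention(lengths, g),
                                  1.0 - logistic_retention(lengths, g)),
                         1e-12, None)).sum() for g in grid]
    return grid[int(np.argmax(ll))]

# The spread of the L50 estimate shrinks as more fish are measured.
for n in (100, 1000):
    estimates = [estimate_l50(n) for _ in range(30)]
    print(n, round(float(np.std(estimates)), 2))
```

    Repeating this over a range of sample sizes and fitting spread against n reproduces the power-law decline in uncertainty the study describes; the paired-gear method is noisier because escapement must be inferred rather than observed.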

  13. Beverage Cans Used for Sediment Collection.

    ERIC Educational Resources Information Center

    Studlick, Joseph R. J.; Trautman, Timothy A.

    1979-01-01

    Beverage cans are well suited for use as sediment collection and storage containers. Advantages include being free, readily available, and the correct size for many samples. Instructions for the selection, preparation, and use of cans in sediment collection and storage are provided. (RE)

  14. Habitat selection by tundra swans on Northern Alaska breeding grounds

    USGS Publications Warehouse

    Earnst, Susan L.; Rothe, T.

    2004-01-01

    Habitat selection by the Tundra Swan (Cygnus columbianus columbianus) was evaluated on the Colville River Delta prior to oil field development (1982-1989). Tundra Swan territories comprised a lake, used for refuge and foraging, and terrestrial habitats and ponds near the lake's perimeter used for foraging and nesting. Tundra Swan sightings from early and late summer aerial surveys were used to investigate habitat selection at the territory and within-territory scale. At the territory or lake scale, swan sightings/lake increased with lake size, and increased from discrete to tapped (i.e., connected to a river channel) to drained lakes within size categories. Overall, 49% of the variation in swan sightings/lake was explained by lake size and type, a size × type interaction term, and the proportion of lake perimeter comprised of Halophytic Ponds and Halophytic Wet Meadows. At the within-territory or within-lake scale, foraging swans significantly selected Halophytic Ponds, Halophytic Wet Meadows, and Fresh Ponds relative to Uplands; nesting swans significantly selected Halophytic Ponds and significantly avoided Fresh Wet Meadows relative to Uplands. Vegetation sampling indicated that sites used by Tundra Swans on river channels and tapped lakes were significantly more likely to have Sheathed Pondweed (Potamogeton vaginatus) than control sites. The three major components of Tundra Swan diet were Carex sedges, Sheathed Pondweed, and algae, together comprising 85% of identifiable plant fragments in feces.

  15. Development of size-selective sampling of Bacillus anthracis surrogate spores from simulated building air intake mixtures for analysis via laser-induced breakdown spectroscopy.

    PubMed

    Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane

    2006-08-01

    Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferential sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and finally a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m^3 urban outdoor air doped with approximately 1.1 × 10^5 spores resulted in a 0.04 proportion of false negatives. For that same sample volume of urban air without spores, the proportion of false positives was 0.08.

  16. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2 mm) and fine (≤2 mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  17. Molecular size-dependent abundance and composition of dissolved organic matter in river, lake and sea waters.

    PubMed

    Xu, Huacheng; Guo, Laodong

    2017-06-15

    Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to the chemical composition and size distribution. To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size fractionated through a series of micro- and ultra-filtrations with membranes of different pore sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. Abundance of dissolved organic carbon, total carbohydrates, and chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, but with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size-dependent DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacturer-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both sample types and membranes used. Our results here provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Method for analyzing soil structure according to the size of structural elements

    NASA Astrophysics Data System (ADS)

    Wieland, Ralf; Rogasik, Helmut

    2015-02-01

    The soil structure in situ is the result of cropping history and soil development over time. It can be assessed by the size distribution of soil structural elements such as air-filled macro-pores, aggregates and stones, which govern important water and solute transport processes, gas exchange, and the stability of the soil against the compacting and shearing forces exerted by agricultural machinery. A method was developed to detect structural elements in selected horizontal slices of soil core samples representing different soil structures. In a second step, a fitting tool (Eureqa), based on symbolic regression, was used to find a general function describing the ordered sets of detected structural elements. All samples were found to obey a hyperbolic function: Y(k) = A / (B + k), k ∈ {0, 1, 2, …}. This general behavior can be used to develop a classification method based on the parameters A and B. An open-source Python program was developed, which can be downloaded together with a selection of soil samples.
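    The hyperbolic fit described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, parameter values, and the use of `scipy.optimize.curve_fit` are all assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(k, A, B):
        """Ordered count of structural elements at rank k: Y(k) = A / (B + k)."""
        return A / (B + k)

    # Hypothetical ordered set of detected structural-element counts (synthetic).
    k = np.arange(20)
    true_A, true_B = 120.0, 2.5
    y = hyperbolic(k, true_A, true_B)

    # Fit the hyperbolic model; (A, B) would then serve as classification features.
    (A_hat, B_hat), _ = curve_fit(hyperbolic, k, y, p0=(1.0, 1.0))
    print(A_hat, B_hat)
    ```

    With noiseless synthetic data the fit recovers the generating parameters; on real counts the residuals would indicate how well a given sample follows the hyperbolic law.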

  19. A contemporary decennial global sample of changing agricultural field sizes

    NASA Astrophysics Data System (ADS)

    White, E.; Roy, D. P.

    2011-12-01

    In the last several hundred years, agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental descriptor of rural landscapes and provides insight into the drivers of rural LCLUC. Increasing field sizes cause a decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation of buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover and land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries, and the temporal changes in field size were quantified and their causes discussed.

  20. The Impact of Nutrition and Health Claims on Consumer Perceptions and Portion Size Selection: Results from a Nationally Representative Survey.

    PubMed

    Benson, Tony; Lavelle, Fiona; Bucher, Tamara; McCloat, Amanda; Mooney, Elaine; Egan, Bernadette; Collins, Clare E; Dean, Moira

    2018-05-22

    Nutrition and health claims on foods can help consumers make healthier food choices. However, claims may have a 'halo' effect, influencing consumer perceptions of foods and increasing consumption. Evidence for these effects is typically drawn from experiments with small samples, limiting generalisability. The current study aimed to overcome this limitation through the use of a nationally representative survey. In a cross-sectional survey of 1039 adults across the island of Ireland, respondents were presented with three different claims (nutrition claim = "Low in fat"; health claim = "With plant sterols. Proven to lower cholesterol"; satiety claim = "Fuller for longer") on four different foods (cereal, soup, lasagne, and yoghurt). Participants answered questions on the perceived healthiness, tastiness, and fillingness of the products with the different claims and also selected a portion size they would consume. Claims influenced fillingness perceptions of some of the foods. However, there was little influence of claims on tastiness or healthiness perceptions or on the portion size selected. Psychological factors such as consumers' familiarity with foods carrying claims and belief in the claims were the most consistent predictors of perceptions and portion size selection. Future research should identify additional consumer factors that may moderate the relationships between claims, perceptions, and consumption.

  1. The Impact of Nutrition and Health Claims on Consumer Perceptions and Portion Size Selection: Results from a Nationally Representative Survey

    PubMed Central

    Benson, Tony; Lavelle, Fiona; McCloat, Amanda; Mooney, Elaine; Egan, Bernadette; Collins, Clare E.; Dean, Moira

    2018-01-01

    Nutrition and health claims on foods can help consumers make healthier food choices. However, claims may have a ‘halo’ effect, influencing consumer perceptions of foods and increasing consumption. Evidence for these effects is typically drawn from experiments with small samples, limiting generalisability. The current study aimed to overcome this limitation through the use of a nationally representative survey. In a cross-sectional survey of 1039 adults across the island of Ireland, respondents were presented with three different claims (nutrition claim = “Low in fat”; health claim = “With plant sterols. Proven to lower cholesterol”; satiety claim = “Fuller for longer”) on four different foods (cereal, soup, lasagne, and yoghurt). Participants answered questions on the perceived healthiness, tastiness, and fillingness of the products with the different claims and also selected a portion size they would consume. Claims influenced fillingness perceptions of some of the foods. However, there was little influence of claims on tastiness or healthiness perceptions or on the portion size selected. Psychological factors such as consumers’ familiarity with foods carrying claims and belief in the claims were the most consistent predictors of perceptions and portion size selection. Future research should identify additional consumer factors that may moderate the relationships between claims, perceptions, and consumption. PMID:29789472

  2. Repopulation of calibrations with samples from the target site: effect of the size of the calibration.

    NASA Astrophysics Data System (ADS)

    Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.

    2009-04-01

    Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, sample pre-treatment is minimal, and the spectrum of a sample is obtained in less than a minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows several constituents or parameters to be analysed simultaneously from the same spectrum once it is obtained. A necessary step for this is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is to repopulate the library and subsequently recalibrate the models. In this work we studied the changes in prediction accuracy as samples were successively added during repopulation. In general, calibrations with a large number of samples and high diversity are desired, but we hypothesized that calibrations with fewer samples (smaller size) would more easily absorb the spectral characteristics of the target site, so the size of the calibration (model) being repopulated could matter. For this reason we also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of library samples were selected (representing 5, 10, 25, 50, 75 and 100% of the total library) and used to develop calibrations of different sizes.
    We used partial least squares regression and leave-one-out cross-validation as calibration methods. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods aimed to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those target-site samples not included in the repopulation. To measure prediction accuracy, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with BCS and BVS models. Repopulation increased the r2 of the predictions in sites 1 and 3 but produced little change in sites 2 and 4, perhaps due to the high initial values (r2 > 0.90 using non-repopulated models). As a consequence of repopulation, the RMSEP decreased in all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models.
    Predictions obtained with models of similar size (similar %) were averaged in order to describe the main patterns. Predictions from larger models were not more accurate than those from smaller models. After repopulation, the RMSEP of predictions using the smaller models (5, 10 and 25% of the library samples) was lower than that obtained with the larger ones (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that small calibrations could be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here run counter to the idea of global models. These results could encourage wider adoption of this technique, because very large databases appear not to be needed. Future studies with very different samples will help confirm the robustness of the observed patterns. The authors acknowledge "Bancaja-UMH" for financial support of the project "NIRPROS".
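    The calibration procedure described in this abstract, partial least squares regression validated by leave-one-out cross-validation, can be sketched minimally as follows. The synthetic spectra, noise level, and number of components are illustrative assumptions; the authors' actual data and software are not reproduced here.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)

    # Hypothetical NIR calibration set: 40 soil samples x 50 spectral bands,
    # with NKj content (g/kg) encoded linearly in the spectra plus noise.
    y = rng.uniform(0.5, 3.0, 40)
    X = np.outer(y, np.linspace(1.0, 2.0, 50)) + rng.normal(scale=0.05, size=(40, 50))

    # Leave-one-out cross-validation: each sample is predicted by a model
    # calibrated on the remaining 39, mirroring the recalibration step.
    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

    rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))   # cross-validated RMSE
    r2 = float(np.corrcoef(y, y_cv)[0, 1] ** 2)
    print(rmsecv, r2)
    ```

    Repopulation would amount to appending target-site rows to `X` and `y` and refitting, then comparing RMSEP on held-out target-site samples before and after.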

  3. Sediment concentrations, loads, and particle-size distributions in the Red River of the North and selected tributaries near Fargo, North Dakota, during the 2011 spring high-flow event

    USGS Publications Warehouse

    Galloway, Joel M.; Blanchard, Robert A.; Ellison, Christopher A.

    2011-01-01

    Most of the bedload samples from the Maple River, Wild Rice River, Rush River, Buffalo River, and Red River sites had particle sizes in the 0.25 to 0.5 millimeter and 0.5 to 1 millimeter ranges. The Rush and Lower Branch Rush Rivers also had a greater portion of larger particle sizes, in the 1 to 2 millimeter range. The Sheyenne River sites had a greater portion of smaller particle sizes in the bedload, in the 0.125 to 0.5 millimeter range, compared to the other sites. The bed material in samples collected during the 2011 spring high-flow event showed a wider distribution of particle sizes than was observed in the bedload; the coarsest material was found at the Red River near Christine and the Lower Branch Rush River, and the finest material at the Sheyenne River sites.

  4. Speckle size in optical Fourier domain imaging

    NASA Astrophysics Data System (ADS)

    Lamouche, G.; Vergnole, S.; Bisaillon, C.-E.; Dufour, M.; Maciejko, R.; Monchalin, J.-P.

    2007-06-01

    As in conventional time-domain optical coherence tomography (OCT), speckle is inherent to any Optical Fourier Domain Imaging (OFDI) of biological tissue. OFDI is also known as swept-source OCT (SS-OCT). The axial speckle size is mainly determined by the OCT resolution length, and the transverse speckle size by the focusing optics illuminating the sample; there is also a contribution from the sample related to the number of scatterers contained within the probed volume. In OFDI data processing, there is some freedom in selecting the range of wavelengths used, which allows the OCT resolution length, and consequently the probed volume, to be varied. By performing measurements on an optical phantom with a controlled density of discrete scatterers and by changing the probed volume with different wavelength ranges in the OFDI data processing, we find an obvious change in the axial speckle size, but we show that there is also a less obvious variation in the transverse speckle size. This work contributes to a better understanding of speckle in OCT.

  5. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probability sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on the study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation rather than by statistical power analysis.
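    The statistical power analysis mentioned above can be illustrated with a standard sample-size calculation using `statsmodels`; the effect size, alpha, and power targets below are illustrative assumptions, not values from the article.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Required per-group n for a two-sample t-test at a medium effect size
    # (Cohen's d = 0.5), alpha = 0.05, power = 0.80 (illustrative numbers).
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))  # ≈ 64 per group

    # Increasing the sample size raises achievable power for the same effect,
    # which is the sense in which a larger convenience sample gains power.
    power_at_100 = TTestIndPower().power(effect_size=0.5, nobs1=100, alpha=0.05)
    print(power_at_100)
    ```

    Note that higher power from a larger convenience sample does not remove the selection bias the article warns about; it only narrows the variance around a possibly biased estimate.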

  6. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
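    A toy version of the kind of simulation described above, contrasting a simple random sample with a convenience sample in a population whose disease prevalence varies by habitat stratum. All population sizes, prevalences, and sampling proportions are hypothetical, not the study's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical population: 10,000 deer in two habitat strata with very
    # different disease prevalence (5% vs 0.5%).
    n_a, n_b = 2_000, 8_000
    infected = np.concatenate([rng.random(n_a) < 0.05, rng.random(n_b) < 0.005])
    true_prev = infected.mean()

    # Probability sample: every animal has equal inclusion probability.
    srs = rng.choice(infected, size=500, replace=False)

    # Convenience sample: readily accessible animals come mostly from stratum A
    # (e.g. near roads), so the high-prevalence stratum is over-represented.
    idx = np.concatenate([rng.choice(n_a, 400, replace=False),
                          n_a + rng.choice(n_b, 100, replace=False)])
    conv = infected[idx]

    print(true_prev, srs.mean(), conv.mean())
    ```

    The simple random sample estimates prevalence without systematic error, while the convenience sample overstates it because inclusion probability is correlated with infection risk, which is the selection bias the authors demonstrate.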

  7. Surface-sediment grain-size distribution and sediment transport in the subaqueous Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stattegger, K.; Nittrouer, C.; Phung, P. V.; Liu, P.; DeMaster, D. J.; Bui, D. V.; Le, A. D.; Nguyen, T. N.

    2016-02-01

    Surface-sediment samples collected in coastal waters around the Mekong Delta (from the distributary channels to the Ca Mau Peninsula) were analyzed to determine the surface-sediment grain-size distribution and sediment-transport trends in the subaqueous Mekong Delta. The grain-size data set of 238 samples was obtained using a Mastersizer 2000 laser instrument and an LS Particle Size Analyzer. Fourteen samples were selected for geochemical analysis (total organic and carbonate content); these results were used to help interpret variations in granulometric parameters along the cross-shore transects. Nine transects were examined from the Cung Hau river mouth to the Ca Mau Peninsula, and six thematic maps of the whole study area were produced. The results indicate that: (1) in general, the sediment becomes finer from the delta front down to the prodelta and becomes coarser and more poorly sorted again on the adjacent inner shelf due to different sediment sources; (2) granulometric parameters vary among the sedimentary sub-environments of the subaqueous Mekong Delta, with distance from the sediment source and the hydrodynamic regime controlling each region; (3) the net sediment transport is southwestward toward the Ca Mau Peninsula.

  8. Advanced functional materials in solid phase extraction for ICP-MS determination of trace elements and their species - A review.

    PubMed

    He, Man; Huang, Lijin; Zhao, Bingshan; Chen, Beibei; Hu, Bin

    2017-06-22

    For the determination of trace elements and their species in various real samples by inductively coupled plasma mass spectrometry (ICP-MS), solid phase extraction (SPE) is a commonly used sample pretreatment technique to remove complex matrices, pre-concentrate target analytes and make the samples suitable for subsequent sample introduction and measurement. The sensitivity, selectivity/anti-interference ability, sample throughput and application potential of SPE-ICP-MS methodology depend greatly on the SPE adsorbents. This article presents a general overview of the use of advanced functional materials (AFMs) in SPE for ICP-MS determination of trace elements and their species over the past decade. Here, AFMs refer to materials featuring high adsorption capacity, good selectivity, fast adsorption/desorption kinetics and the ability to satisfy special requirements in real sample analysis, including nanometer-sized materials, porous materials, ion-imprinted polymers, restricted access materials and magnetic materials. Carbon/silica/metal/metal oxide nanometer-sized adsorbents with high surface area and abundant adsorption sites exhibit high adsorption capacity, and porous adsorbents provide more adsorption sites and faster adsorption kinetics. The selectivity of the materials for target elements/species can be improved by physical/chemical modification, ion imprinting and restricted access techniques. Magnetic adsorbents in conventional batch operation offer a unique magnetic response and a high surface area-to-volume ratio, which provide very easy phase separation and greater extraction capacity and efficiency than conventional adsorbents, and chip-based magnetic SPE provides a versatile platform for special requirements (e.g. cell analysis).
The performance of these adsorbents for the determination of trace elements and their species in different matrices by ICP-MS is discussed in detail, along with perspectives and possible challenges in the future development. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Mating flights select for symmetry in honeybee drones ( Apis mellifera)

    NASA Astrophysics Data System (ADS)

    Jaffé, Rodolfo; Moritz, Robin F. A.

    2010-03-01

    Males of the honeybee ( Apis mellifera) fly to specific drone congregation areas (DCAs), which virgin queens visit in order to mate. From the thousands of drones that are reared in a single colony, only very few succeed in copulating with a queen, and therefore, a strong selection is expected to act on adult drones during their mating flights. In consequence, the gathering of drones at DCAs may serve as an indirect mate selection mechanism, assuring that queens only mate with those individuals having a better flight ability and a higher responsiveness to the queen’s visual and chemical cues. Here, we tested this idea relying on wing fluctuating asymmetry (FA) as a measure of phenotypic quality. By recapturing marked drones at a natural DCA and comparing their size and FA with a control sample of drones collected at their maternal hives, we were able to detect any selection on wing size and wing FA occurring during the mating flights. Although we found no solid evidence for selection on wing size, wing FA was found to be significantly lower in the drones collected at the DCA than in those collected at the hives. Our results demonstrate the action of selection during drone mating flights for the first time, showing that developmental stability can influence the mating ability of honeybee drones. We therefore conclude that selection during honeybee drone mating flights may confer some fitness advantages to the queens.

  10. Mating flights select for symmetry in honeybee drones (Apis mellifera).

    PubMed

    Jaffé, Rodolfo; Moritz, Robin F A

    2010-03-01

    Males of the honeybee (Apis mellifera) fly to specific drone congregation areas (DCAs), which virgin queens visit in order to mate. From the thousands of drones that are reared in a single colony, only very few succeed in copulating with a queen, and therefore, a strong selection is expected to act on adult drones during their mating flights. In consequence, the gathering of drones at DCAs may serve as an indirect mate selection mechanism, assuring that queens only mate with those individuals having a better flight ability and a higher responsiveness to the queen's visual and chemical cues. Here, we tested this idea relying on wing fluctuating asymmetry (FA) as a measure of phenotypic quality. By recapturing marked drones at a natural DCA and comparing their size and FA with a control sample of drones collected at their maternal hives, we were able to detect any selection on wing size and wing FA occurring during the mating flights. Although we found no solid evidence for selection on wing size, wing FA was found to be significantly lower in the drones collected at the DCA than in those collected at the hives. Our results demonstrate the action of selection during drone mating flights for the first time, showing that developmental stability can influence the mating ability of honeybee drones. We therefore conclude that selection during honeybee drone mating flights may confer some fitness advantages to the queens.

  11. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation

    NASA Astrophysics Data System (ADS)

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-01

    We report on what are, to our knowledge, the first results obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of the multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ~550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection.

  12. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    PubMed

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures for older people, and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic reviews published by the National Evidence-based Healthcare Collaborating Agency in Korea. Methodological quality was assessed with the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of the number of items in each measure to the sample size for test-retest reliability ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 presented the models used for ICC calculation and 30 reported 95% confidence intervals for the ICCs. Additional analyses of the 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items.
Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability. Copyright © 2017 Elsevier Ltd. All rights reserved.
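    The ICC the review highlights can be computed from a subjects-by-occasions score matrix. Below is a minimal sketch of the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) in the Shrout-Fleiss taxonomy; the test-retest data are synthetic, and the choice of this particular ICC model is an assumption for illustration.

    ```python
    import numpy as np

    def icc_2_1(x):
        """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
        (Shrout & Fleiss) for an (n subjects x k occasions) score matrix."""
        n, k = x.shape
        grand = x.mean()
        msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
        msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # occasions
        sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical test-retest data: 30 respondents, 2 administrations about
    # 13 days apart; retest scores track test scores with small error.
    rng = np.random.default_rng(1)
    true = rng.normal(50, 10, 30)
    scores = np.column_stack([true + rng.normal(0, 2, 30),
                              true + rng.normal(0, 2, 30)])
    icc = icc_2_1(scores)
    print(icc)
    ```

    As the review recommends, a report based on such a computation should also state which ICC model was used and give a confidence interval, not just the point estimate.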

  13. PRIMUS/NAVCARE Cost-Effectiveness Analysis

    DTIC Science & Technology

    1991-04-08

    ICD-9-CM diagnosis codes that occurred most frequently in the medical record sample - 328.9 (otitis media, unspecified) and 465.9 (upper... when attention is focused upon a single diagnosis, the MTF CECs are no longer consistently above the PRIMUS CECs. For otitis media, the MTF CECs are... CHAMPUS-EQUIVALENT COSTS FOR SELECTED DIAGNOSES 328.9 OTITIS MEDIA, UNSPECIFIED Sample Size Mean 95% Confidence Interval Upper Limit Lower

  14. Resource Input, Service Process and Resident Activity Indicators in a Welsh National Random Sample of Staffed Housing Services for People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Felce, David; Perry, Jonathan

    2004-01-01

    Background: The aims were to: (i) explore the association between age and size of setting and staffing per resident; and (ii) report resident and setting characteristics, and indicators of service process and resident activity for a national random sample of staffed housing provision. Methods: Sixty settings were selected randomly from those…

  15. Factors Influencing Teachers' Competence in Developing Resilience in Vulnerable Children in Primary Schools in Uasin Gishu County, Kenya

    ERIC Educational Resources Information Center

    Silyvier, Tsindoli; Nyandusi, Charles

    2015-01-01

    The purpose of the study was to assess the effect of teacher characteristics on their competence in developing resilience in vulnerable primary school children. A descriptive survey research design was used. This study was based on resiliency theory as proposed by Krovetz (1998). Simple random sampling was used to select a sample size of 108…

  16. Lecturers and Postgraduates Perception of Libraries as Promoters of Teaching, Learning, and Research at the University of Ibadan, Nigeria

    ERIC Educational Resources Information Center

    Oyewole, Olawale; Adetimirin, Airen

    2015-01-01

    Lecturers and postgraduates are among the users of university libraries, and their perception of the libraries influences their utilization of information resources; hence the need for this study. A survey method was adopted for the study, and a simple random sampling method was used to select a sample size of 38 lecturers and 233 postgraduates.…

  17. Student Assessment of Quality of Access at the National Open University of Nigeria (NOUN)

    ERIC Educational Resources Information Center

    Inegbedion, Juliet O.; Adu, Folorunso I.; Ofulue, Christine Y.

    2016-01-01

    This paper presents a study conducted by Inegbedion, Adu and Ofulue from the National Open University of Nigeria. The study focused on the quality of access (admission and registration) at NOUN from a student perspective. A survey design was used for the study while a multi-stage sampling technique was used to select the sample size. All the…

  18. Trends in Selecting Undergraduate Business Majors & International Enrollment & Expected Salaries

    ERIC Educational Resources Information Center

    Ozgur, Ceyhun; Li, Yang; Rogers, Grace

    2015-01-01

    The paper begins with a brief review of the literature on how business students choose their major in the U.S., and we list the most popular majors at U.S. universities. We also discuss the factors that influenced students' choices. In our next research project, we will not only use a larger sample size, but the sample will also come from a…

  19. Particle size distribution of distillers dried grains with solubles (DDGS) and relationships to compositional and color properties.

    PubMed

    Liu, Keshun

    2008-11-01

    Eleven distillers dried grains with solubles (DDGS) samples, processed from yellow corn, were collected from different ethanol processing plants in the US Midwest. The particle size distribution (PSD) by mass of each sample was determined using a series of six selected US standard sieves (Nos. 8, 12, 18, 35, 60, and 100) and a pan. The original sample and sieve-sized fractions were measured for surface color and contents of moisture, protein, oil, ash, and starch. Total carbohydrate (CHO) and total non-starch CHO were also calculated. The results show that there was great variation in composition and color among DDGS from different plants. Surprisingly, a few DDGS samples contained unusually high amounts of residual starch (11.1-17.6%, dry matter basis, vs. about 5% for the rest), presumably resulting from modified processing methods. Particle size varied greatly within a sample, and PSD varied greatly among samples. The 11 samples had a mean geometric mean diameter (dgw) of particles of 0.660 mm and a mean geometric standard deviation (Sgw) of particle diameters by mass of 0.440 mm. The majority had a unimodal PSD, with a mode in the size class between 0.5 and 1.0 mm. Although PSD and color parameters correlated little with the composition of whole DDGS samples, the distribution of nutrients as well as color attributes correlated well with PSD. In sieved fractions, protein content and the L and a color values correlated negatively with particle size, while oil and total CHO contents correlated positively. It is highly feasible to fractionate DDGS for compositional enrichment based on particle size, and the spread of the PSD can serve as an index of the potential for DDGS fractionation. The above information should be a vital addition to quality and baseline data for DDGS.
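    The geometric mean diameter (dgw) and geometric standard deviation (Sgw) reported above are conventionally computed from sieve mass fractions roughly as follows. This is an ASAE S319-style sketch: the mass fractions, the pairing of the top sieve with the next standard size up, and the assumed pan particle size are all illustrative assumptions.

    ```python
    import numpy as np

    # US standard sieve openings (mm) for Nos. 8, 12, 18, 35, 60, 100.
    openings = np.array([2.36, 1.70, 1.00, 0.500, 0.250, 0.150])

    # Mass fractions retained on each sieve and on the pan (hypothetical sample).
    mass = np.array([0.02, 0.08, 0.25, 0.40, 0.15, 0.07, 0.03])

    # Nominal particle size on each sieve: geometric mean of the opening above
    # and the sieve's own opening; the top sieve is paired with the next size
    # up (3.35 mm, No. 6) and pan particles are assumed at half the finest
    # opening (both ends are assumptions).
    upper = np.concatenate([[3.35], openings])
    lower = np.concatenate([openings, [0.075]])
    d = np.sqrt(upper * lower)

    w = mass / mass.sum()
    log_dgw = (w * np.log(d)).sum()
    dgw = float(np.exp(log_dgw))                            # geometric mean diameter (mm)
    s_log = np.sqrt((w * (np.log(d) - log_dgw) ** 2).sum())
    sgw = float(dgw / 2 * (np.exp(s_log) - np.exp(-s_log))) # geometric std (mm)
    print(round(dgw, 3), round(sgw, 3))
    ```

    With these illustrative fractions the computed dgw lands near the 0.660 mm scale reported in the abstract, which is the point of the sketch rather than a reproduction of the authors' data.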

  20. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
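    The link the abstract draws between low power and low PPV can be illustrated with the standard formula relating PPV to power, the significance level, and the prior probability that a tested effect is real. This is the textbook relation, a sketch only; the paper's exact computation may differ, and the prior used below is a made-up value.

    ```python
    def positive_predictive_value(power, alpha, prior):
        """Probability that a significant finding reflects a true effect.

        Standard relation: PPV = power*prior / (power*prior + alpha*(1 - prior)),
        where `prior` is the assumed fraction of tested effects that are real.
        """
        true_pos = power * prior
        false_pos = alpha * (1.0 - prior)
        return true_pos / (true_pos + false_pos)

    # A severely underpowered test (sensitivity ~2%, as in the abstract's
    # small-sample scenario) vs. a conventionally powered one, with a
    # hypothetical prior of 0.3 that the tested effect is real.
    low = positive_predictive_value(power=0.02, alpha=0.05, prior=0.3)
    high = positive_predictive_value(power=0.80, alpha=0.05, prior=0.3)
    ```

    Under these assumed numbers the underpowered test's PPV falls below the 0.26 figure quoted in the abstract, while the well-powered test's PPV is far higher, which is the qualitative point the authors make about small samples.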

  1. Species and size selectivity of two midwater trawls used in an acoustic survey of the Alaska Arctic

    NASA Astrophysics Data System (ADS)

    De Robertis, Alex; Taylor, Kevin; Williams, Kresimir; Wilson, Christopher D.

    2017-01-01

    Acoustic-trawl (AT) survey methods are widely used to estimate the abundance and distribution of pelagic organisms. This technique relies on estimates of size and species composition from trawl catches along with estimates of the acoustic properties of these animals to convert measurements of acoustic backscatter into animal abundance. However, trawls are selective samplers, and if the catch does not represent the size and species composition of the animals in the acoustic beam the resulting abundance estimates will be biased. We conducted an experiment to quantify trawl selectivity for species encountered during an AT survey of the Alaska Arctic. The pelagic assemblage in this environment was dominated by small young-of-the-year (age-0) fishes and jellyfish, which may be poorly retained in trawls. A large midwater trawl (Cantrawl) and a smaller midwater trawl (modified Marinovich) were used during the survey. The Marinovich was equipped with 8 small-mesh recapture nets which were used to estimate the probability that an individual that enters the trawl is retained. In addition, paired hauls were made with the Cantrawl and Marinovich to estimate the difference in selectivity between the two trawls. A statistical model was developed to combine the catches of the recapture nets and the paired hauls to estimate the length-dependent selectivity of the trawls for the most abundant species (e.g., age-0 fishes and jellyfish). The analysis indicated that there was substantial size and species selectivity: although the modified Marinovich generally had a higher catch per unit effort, many of the animals encountered in this environment were poorly retained by both trawls. The observed size and species selectivity of the trawls can be used to select appropriate nets for sampling pelagic fishes, and correct survey estimates for the biases introduced in the trawl capture process.

  2. Evidence of a chimpanzee-sized ancestor of humans but a gibbon-sized ancestor of apes.

    PubMed

    Grabowski, Mark; Jungers, William L

    2017-10-12

    Body mass directly affects how an animal relates to its environment and has a wide range of biological implications. However, little is known about the mass of the last common ancestor (LCA) of humans and chimpanzees, hominids (great apes and humans), or hominoids (all apes and humans), which is needed to evaluate numerous paleobiological hypotheses at and prior to the root of our lineage. Here we use phylogenetic comparative methods and data from primates including humans, fossil hominins, and a wide sample of fossil primates including Miocene apes from Africa, Europe, and Asia to test alternative hypotheses of body mass evolution. Our results suggest, contrary to previous suggestions, that the LCA of all hominoids lived in an environment that favored a gibbon-like size, but a series of selective regime shifts, possibly due to resource availability, led to a decrease and then increase in body mass in early hominins from a chimpanzee-sized LCA. The pattern of body size evolution in hominids can provide insight into historical human ecology. Here, Grabowski and Jungers use comparative phylogenetic analysis to reconstruct the likely size of the ancestor of humans and chimpanzees and the evolutionary history of selection on body size in primates.

  3. Analysis of hard coal quality for narrow size fraction under 20 mm

    NASA Astrophysics Data System (ADS)

    Niedoba, Tomasz; Pięta, Paulina

    2018-01-01

    The paper presents the results of an analysis of the variation in hard coal quality across narrow size fractions, using taxonomic methods. Raw material samples were collected in selected mines of the Upper Silesian Industrial Region and classified, according to the Polish classification, as types 31, 34.2 and 35. Each size fraction was then characterized in terms of the following properties: density, ash content, calorific value, volatile matter content, total sulfur content and analytical moisture. The analysis showed that the 34.2 coking coal type had the best quality across the entire range of tested size fractions. At the same time, in terms of price parameters, high raw-material quality characterized the following size fractions: 0-6.3 mm of the type-31 energetic coal and 0-3.15 mm of the type-35 coking coal. The methods of grouping (Ward's method) and agglomeration (k-means method) showed that size fractions below 10 mm were of higher quality in all the analyzed hard coal types. However, the selected taxonomic methods do not make it possible to identify individual size fractions or hard coal types based on the chosen parameters.
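    The k-means grouping of size fractions by quality parameters can be sketched in a few lines. The fraction labels, the two quality variables, and all numeric values below are hypothetical stand-ins, not the study's measurements, and the implementation is a minimal Lloyd's algorithm rather than the authors' software.

    ```python
    import random

    # Hypothetical per-fraction quality vectors (ash %, sulfur %); the
    # study also used density, calorific value, volatiles and moisture.
    fractions = {
        "0-3.15 mm": (6.1, 0.6),
        "3.15-6.3 mm": (7.0, 0.7),
        "6.3-10 mm": (7.4, 0.8),
        "10-14 mm": (11.2, 1.1),
        "14-20 mm": (12.0, 1.2),
    }

    def kmeans(points, k=2, iters=50, seed=0):
        """Minimal k-means (Lloyd's algorithm) for small 2-D data."""
        rng = random.Random(seed)
        cents = rng.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                # assign each point to its nearest centroid
                j = min(range(k),
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(p, cents[c])))
                groups[j].append(p)
            # recompute centroids; keep the old one if a group empties
            cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[i]
                     for i, g in enumerate(groups)]
        return cents, groups

    cents, groups = kmeans(list(fractions.values()))
    ```

    With these made-up values the fine fractions (low ash, low sulfur) separate from the coarse ones, mirroring the abstract's finding that fractions below 10 mm formed the higher-quality group.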

  4. Re-evaluating the link between brain size and behavioural ecology in primates.

    PubMed

    Powell, Lauren E; Isler, Karin; Barton, Robert A

    2017-10-25

    Comparative studies have identified a wide range of behavioural and ecological correlates of relative brain size, with results differing between taxonomic groups, and even within them. In primates for example, recent studies contradict one another over whether social or ecological factors are critical. A basic assumption of such studies is that with sufficiently large samples and appropriate analysis, robust correlations indicative of selection pressures on cognition will emerge. We carried out a comprehensive re-examination of correlates of primate brain size using two large comparative datasets and phylogenetic comparative methods. We found evidence in both datasets for associations between brain size and ecological variables (home range size, diet and activity period), but little evidence for an effect of social group size, a correlation which has previously formed the empirical basis of the Social Brain Hypothesis. However, reflecting divergent results in the literature, our results exhibited instability across datasets, even when they were matched for species composition and predictor variables. We identify several potential empirical and theoretical difficulties underlying this instability and suggest that these issues raise doubts about inferring cognitive selection pressures from behavioural correlates of brain size. © 2017 The Author(s).

  5. CALIFA: a diameter-selected sample for an integral field spectroscopy galaxy survey

    NASA Astrophysics Data System (ADS)

    Walcher, C. J.; Wisotzki, L.; Bekeraité, S.; Husemann, B.; Iglesias-Páramo, J.; Backsmann, N.; Barrera Ballesteros, J.; Catalán-Torrecilla, C.; Cortijo, C.; del Olmo, A.; Garcia Lorenzo, B.; Falcón-Barroso, J.; Jilkova, L.; Kalinova, V.; Mast, D.; Marino, R. A.; Méndez-Abreu, J.; Pasquali, A.; Sánchez, S. F.; Trager, S.; Zibetti, S.; Aguerri, J. A. L.; Alves, J.; Bland-Hawthorn, J.; Boselli, A.; Castillo Morales, A.; Cid Fernandes, R.; Flores, H.; Galbany, L.; Gallazzi, A.; García-Benito, R.; Gil de Paz, A.; González-Delgado, R. M.; Jahnke, K.; Jungwiert, B.; Kehrig, C.; Lyubenova, M.; Márquez Perez, I.; Masegosa, J.; Monreal Ibero, A.; Pérez, E.; Quirrenbach, A.; Rosales-Ortega, F. F.; Roth, M. M.; Sanchez-Blazquez, P.; Spekkens, K.; Tundo, E.; van de Ven, G.; Verheijen, M. A. W.; Vilchez, J. V.; Ziegler, B.

    2014-09-01

    We describe and discuss the selection procedure and statistical properties of the galaxy sample used by the Calar Alto Legacy Integral Field Area (CALIFA) survey, a public legacy survey of 600 galaxies using integral field spectroscopy. The CALIFA "mother sample" was selected from the Sloan Digital Sky Survey (SDSS) DR7 photometric catalogue to include all galaxies with an r-band isophotal major axis between 45'' and 79.2'' and with a redshift 0.005 < z < 0.03. The mother sample contains 939 objects, 600 of which will be observed in the course of the CALIFA survey. The selection of targets for observations is based solely on visibility and thus keeps the statistical properties of the mother sample. By comparison with a large set of SDSS galaxies, we find that the CALIFA sample is representative of galaxies over a luminosity range of -19 > Mr > -23.1 and over a stellar mass range between 109.7 and 1011.4 M⊙. In particular, within these ranges, the diameter selection does not lead to any significant bias against - or in favour of - intrinsically large or small galaxies. Only below luminosities of Mr = -19 (or stellar masses <109.7 M⊙) is there a prevalence of galaxies with larger isophotal sizes, especially of nearly edge-on late-type galaxies, but such galaxies form <10% of the full sample. We estimate volume-corrected distribution functions in luminosities and sizes and show that these are statistically fully compatible with estimates from the full SDSS when accounting for large-scale structure. For full characterization of the sample, we also present a number of value-added quantities determined for the galaxies in the CALIFA sample. These include consistent multi-band photometry based on growth curve analyses; stellar masses; distances and quantities derived from these; morphological classifications; and an overview of available multi-wavelength photometric measurements. 
We also explore different ways of characterizing the environments of CALIFA galaxies, finding that the sample covers environmental conditions from the field to genuine clusters. We finally consider the expected incidence of active galactic nuclei among CALIFA galaxies given the existing pre-CALIFA data, finding that the final observed CALIFA sample will contain approximately 30 Sey2 galaxies. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max Planck Institute for Astronomy and the Instituto de Astrofísica de Andalucía (CSIC). Publicly released data products from CALIFA are made available on the webpage http://www.caha.es/CALIFA

  6. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Treesearch

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  7. A Multi-Week Behavioral Sampling Tag for Sound Effects Studies: Design Trade-Offs and Prototype Evaluation

    DTIC Science & Technology

    2013-09-30

    performance of algorithms detecting dives, strokes, clicks, respiration and gait changes. (ii) Calibration errors: Size and power constraints in... acceptance parameters used to detect and classify events. For example, swim stroke detection requires parameters defining the minimum magnitude and the min... and max duration of a stroke. Species-dependent parameters can be selected from existing DTAG data but other parameters depend on the size of the

  8. Evaluation of multiple-frequency, active and passive acoustics as surrogates for bedload transport

    USGS Publications Warehouse

    Wood, Molly S.; Fosness, Ryan L.; Pachman, Gregory; Lorang, Mark; Tonolla, Diego

    2015-01-01

    The use of multiple-frequency, active acoustics through deployment of acoustic Doppler current profilers (ADCPs) shows potential for estimating bedload in selected grain size categories. The U.S. Geological Survey (USGS), in cooperation with the University of Montana (UM), evaluated the use of multiple-frequency, active and passive acoustics as surrogates for bedload transport during a pilot study on the Kootenai River, Idaho, May 17-18, 2012. Four ADCPs with frequencies ranging from 600 to 2000 kHz were used to measure apparent moving bed velocities at 20 stations across the river in conjunction with physical bedload samples. Additionally, UM scientists measured the sound frequencies of moving particles with two hydrophones, considered passive acoustics, along longitudinal transects in the study reach. Some patterns emerged in the preliminary analysis which show promise for future studies. Statistically significant relations were successfully developed between apparent moving bed velocities measured by ADCPs with frequencies 1000 and 1200 kHz and bedload in 0.5 to 2.0 mm grain size categories. The 600 kHz ADCP seemed somewhat sensitive to the movement of gravel bedload in the size range 8.0 to 31.5 mm, but the relation was not statistically significant. The passive hydrophone surveys corroborated the sample results and could be used to map spatial variability in bedload transport and to select a measurement cross-section with moving bedload for active acoustic surveys and physical samples.

  9. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
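    Silverman's rule of thumb, one of the bandwidths compared in the study, has a closed form that is easy to sketch alongside a plain Gaussian KDE. The sample below is a simulated standard normal draw, one of the study's true density shapes; the grid range and sample size are illustrative choices, not the article's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(size=100)  # one hypothetical sample; the study used n from 15 to 2,000

    def silverman_bandwidth(data):
        """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n**(-1/5)."""
        n = data.size
        sd = data.std(ddof=1)
        iqr = np.subtract(*np.percentile(data, [75, 25]))
        return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

    def gaussian_kde(data, grid, h):
        """Evaluate a Gaussian-kernel density estimate on a grid."""
        z = (grid[:, None] - data[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

    h = silverman_bandwidth(x)
    grid = np.linspace(-4, 4, 401)
    dens = gaussian_kde(x, grid, h)
    # sanity check: the estimate should integrate to ~1 over a wide grid
    area = float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid)))
    ```

    Plug-in selectors such as Sheather-Jones, which the study found best overall, replace the rule-of-thumb constant with a data-driven estimate of the density's curvature; R's `density()` and `bw.SJ()` implement the variants the article recommends.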

  10. Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Cameron, E.; Driver, S. P.

    2009-01-01

    Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes. Namely, “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. 
However, in the interval 0.75 < z <1.0 where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible) we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec-2 and 1.65 ± 0.22 mag arcsec-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.

  11. RESIDENTIAL INDOOR EXPOSURES OF CHILDREN TO PESTICIDES FOLLOWING LAWN APPLICATIONS

    EPA Science Inventory

    Methods have been developed to estimate children's residential exposures to pesticide residues and applied in a small field study of indoor exposures resulting from the intrusion of lawn-applied herbicide into the home. Sampling methods included size-selective indoor air sampli...

  12. Chemical analyses of micrometre-sized solids by a miniature laser ablation/ionisation mass spectrometer (LMS)

    NASA Astrophysics Data System (ADS)

    Tulej, Marek; Wiesendanger, Reto; Neuland, Maike; Meyer, Stefan; Wurz, Peter; Neubeck, Anna; Ivarsson, Magnus; Riedo, Valentine; Moreno-Garcia, Pavel; Riedo, Andreas; Knopp, Gregor

    2017-04-01

    Investigations of the elemental and isotope compositions of planetary solids with high spatial resolution are of considerable interest to current space research. Planetary materials are typically highly heterogeneous, and such studies can deliver detailed chemical information on individual sample components with sizes down to a few micrometres. The results of such investigations can yield mineralogical surface context, including the mineralogy of individual grains or the elemental composition of other objects embedded in the sample surface, such as micro-sized fossils. Bio-relevant material can be identified through the detection of bio-relevant elements and their isotope fractionation effects [1, 2]. For the chemical analysis of heterogeneous solid surfaces we have combined a miniature laser ablation mass spectrometer (LMS) (mass resolution m/Δm 400-600; dynamic range 10^5-10^8) with an in situ microscope-camera system (spatial resolution ~2 µm, depth 10 µm). The microscope helps to locate micrometre-sized solids across the sample surface for direct mass spectrometric analysis by the LMS instrument. The LMS instrument combines a fs-laser ion source and a miniature reflectron-type time-of-flight mass spectrometer. Mass spectrometric analysis of the objects selected on the sample surface followed after ablation, atomisation and ionisation of the sample by focussed laser radiation (775 nm, 180 fs, 1 kHz; spot size ~20 µm) [4, 5, 6]. Mass spectra of almost all elements (isotopes) present in the investigated location are measured instantaneously. A number of heterogeneous rock samples containing micrometre-sized fossils and mineralogical grains were investigated with high selectivity and sensitivity. Filamentous structures observed in carbonate veins (in harzburgite) and amygdales in pillow basalt lava were well characterised chemically, yielding the elemental and isotope composition of these objects [7, 8]. 
The investigation can be performed with high selectivity, since the host composition typically differs clearly from that of the analysed objects. In-depth chemical analysis (chemical profiling) is particularly helpful, allowing relatively easy separation of the chemical composition of the host from that of the investigated objects [6]. Hence, the chemical analysis of both the environment and the microstructures can be derived. Isotope compositions can be measured with a high level of confidence; nevertheless, the presence of clusters of similar masses can sometimes make this analysis difficult. Based on this work, we are confident that similar studies can be conducted in situ on planetary surfaces, delivering important chemical context and evidence on bio-relevant processes. [1] Summons et al., Astrobiology, 11, 157, 2011. [2] Wurz et al., Sol. Sys. Res. 46, 408, 2012. [3] Riedo et al., J. Anal. Atom. Spectrom. 28, 1256, 2013. [4] Riedo et al., J. Mass Spectrom. 48, 1, 2013. [5] Tulej et al., Geostand. Geoanal. Res., 38, 423, 2014. [6] Grimaudo et al., Anal. Chem. 87, 2041, 2015. [7] Tulej et al., Astrobiology, 15, 1, 2015. [8] Neubeck et al., Int. J. Astrobiology, 15, 133, 2016.

  13. Natural Selection in the Great Apes

    PubMed Central

    Cagan, Alexander; Theunert, Christoph; Laayouni, Hafid; Santpere, Gabriel; Pybus, Marc; Casals, Ferran; Prüfer, Kay; Navarro, Arcadi; Marques-Bonet, Tomas; Bertranpetit, Jaume; Andrés, Aida M.

    2016-01-01

    Natural selection is crucial for the adaptation of populations to their environments. Here, we present the first global study of natural selection in the Hominidae (humans and great apes) based on genome-wide information from population samples representing all extant species (including most subspecies). Combining several neutrality tests we create a multi-species map of signatures of natural selection covering all major types of natural selection. We find that the estimated efficiency of both purifying and positive selection varies between species and is significantly correlated with their long-term effective population size. Thus, even the modest differences in population size among the closely related Hominidae lineages have resulted in differences in their ability to remove deleterious alleles and to adapt to changing environments. Most signatures of balancing and positive selection are species-specific, with signatures of balancing selection more often being shared among species. We also identify loci with evidence of positive selection across several lineages. Notably, we detect signatures of positive selection in several genes related to brain function, anatomy, diet and immune processes. Our results contribute to a better understanding of human evolution by putting the evidence of natural selection in humans within its larger evolutionary context. The global map of natural selection in our closest living relatives is available as an interactive browser at http://tinyurl.com/nf8qmzh. PMID:27795229

  14. Socioeconomic Factors Influence Physical Activity and Sport in Quebec Schools.

    PubMed

    Morin, Pascale; Lebel, Alexandre; Robitaille, Éric; Bisset, Sherri

    2016-11-01

    School environments providing a wide selection of physical activities and sufficient facilities are both essential and formative to ensure young people adopt active lifestyles. We describe the association between school opportunities for physical activity and socioeconomic factors measured by low-income cutoff index, school size (number of students), and neighborhood population density. A cross-sectional survey using a 2-stage stratified sampling method built a representative sample of 143 French-speaking public schools in Quebec, Canada. Self-administered questionnaires collected data describing the physical activities offered and schools' sports facilities. Descriptive and bivariate analyses were performed separately for primary and secondary schools. In primary schools, school size was positively associated with more intramural and extracurricular activities, more diverse interior facilities, and activities promoting active transportation. Low-income primary schools were more likely to offer a single gym. Low-income secondary schools offered lower diversity of intramural activities and fewer exterior sporting facilities. High-income secondary schools with a large school size provided a greater number of opportunities, larger infrastructures, and a wider selection of physical activities than smaller low-income schools. Results reveal an overall positive association between school availability of physical and sport activity and socioeconomic factors. © 2016, American School Health Association.

  15. Size-selective mortality of steelhead during freshwater and marine life stages related to freshwater growth in the Skagit River, Washington

    USGS Publications Warehouse

    Thompson, Jamie N.; Beauchamp, David A.

    2014-01-01

    We evaluated freshwater growth and survival from juvenile (ages 0–3) to smolt (ages 1–5) and adult stages in wild steelhead Oncorhynchus mykiss sampled in different precipitation zones of the Skagit River basin, Washington. Our objectives were to determine whether significant size-selective mortality (SSM) in steelhead could be detected between early and later freshwater stages and between each of these freshwater stages and returning adults and, if so, how SSM varied between these life stages and mixed and snow precipitation zones. Scale-based size-at-annulus comparisons indicated that steelhead in the snow zone were significantly larger at annulus 1 than those in the mixed rain–snow zone. Size at annuli 2 and 3 did not differ between precipitation zones, and we found no precipitation zone × life stage interaction effect on size at annulus. Significant freshwater and marine SSM was evident between the juvenile and adult samples at annulus 1 and between each life stage at annuli 2 and 3. Rapid growth between the final freshwater annulus and the smolt migration did not improve survival to adulthood; rather, it appears that survival in the marine environment may be driven by an overall higher growth rate set earlier in life, which results in a larger size at smolt migration. Efforts for recovery of threatened Puget Sound steelhead could benefit by considering that SSM between freshwater and marine life stages can be partially attributed to growth attained in freshwater habitats and by identifying those factors that limit growth during early life stages.

  16. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  17. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  18. Field substitution of nonresponders can maintain sample size and structure without altering survey estimates-the experience of the Italian behavioral risk factors surveillance system (PASSI).

    PubMed

    Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania

    2014-04-01

    Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
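The preselected-substitute design described above can be sketched in a few lines. This is a hedged illustration of the general mechanism only, not the PASSI implementation; the function names and sample sizes are hypothetical.

```python
import random

def select_with_substitutes(stratum, n_original, n_reserve, seed=0):
    """Randomly preselect an original sample plus an ordered reserve list
    of substitutes drawn from the same stratum (illustrative sketch)."""
    rng = random.Random(seed)
    drawn = rng.sample(stratum, n_original + n_reserve)
    return drawn[:n_original], drawn[n_original:]

def field_substitute(originals, reserve, responded):
    """Replace each nonrespondent with the next unused substitute in order,
    so the planned sample size and stratum structure are preserved."""
    reserve_iter = iter(reserve)
    final = []
    for person in originals:
        if responded(person):
            final.append(person)
        else:
            final.append(next(reserve_iter))  # same-stratum replacement
    return final
```

Because substitutes come from the same stratum as the units they replace, the final sample keeps the planned size and demographic structure, which is the property the study verifies empirically.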

  19. Recent Structural Evolution of Early-Type Galaxies: Size Growth from z = 1 to z = 0

    NASA Astrophysics Data System (ADS)

    van der Wel, Arjen; Holden, Bradford P.; Zirm, Andrew W.; Franx, Marijn; Rettura, Alessandro; Illingworth, Garth D.; Ford, Holland C.

    2008-11-01

Strong size and internal density evolution of early-type galaxies between z ~ 2 and the present has been reported by several authors. Here we analyze samples of nearby and distant (z ~ 1) galaxies with dynamically measured masses in order to confirm the previous, model-dependent results and constrain the uncertainties that may play a role. Velocity dispersion (σ) measurements are taken from the literature for 50 morphologically selected 0.8 < z < 1.2 field and cluster early-type galaxies with typical masses Mdyn = 2 × 10¹¹ M⊙. Sizes (Reff) are determined with Advanced Camera for Surveys imaging. We compare the distant sample with a large sample of nearby (0.04 < z < 0.08) early-type galaxies extracted from the Sloan Digital Sky Survey for which we determine sizes, masses, and densities in a consistent manner, using simulations to quantify systematic differences between the size measurements of nearby and distant galaxies. We find a highly significant difference between the σ - Reff distributions of the nearby and distant samples, regardless of sample selection effects. The implied evolution in Reff at fixed mass between z = 1 and the present is a factor of 1.97 ± 0.15. This is in qualitative agreement with semianalytic models; however, the observed evolution is much faster than the predicted evolution. Our results reinforce and are quantitatively consistent with previous, photometric studies that found size evolution of up to a factor of 5 since z ~ 2. A combination of structural evolution of individual galaxies through the accretion of companions and the continuous formation of early-type galaxies through increasingly gas-poor mergers is one plausible explanation of the observations.
Based on observations with the Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, and observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. Based on observations collected at the European Southern Observatory, Chile (169.A-0458). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

  20. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification

    PubMed Central

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.

    2013-01-01

Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
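Soft-thresholding type penalties such as those mentioned above act through the standard soft-threshold operator, which shrinks coefficients toward zero and sets small ones exactly to zero. A minimal sketch of that generic operator (the operator itself, not the authors' full multi-class SVM solver):

```python
import math

def soft_threshold(weights, lam):
    """Shrink each coefficient toward zero by lam; coefficients with
    magnitude below lam become exactly zero, which is what induces
    sparsity (i.e., variable/gene selection) in the fitted classifier."""
    return [math.copysign(max(abs(w) - lam, 0.0), w) for w in weights]
```

For example, `soft_threshold([3.0, -0.5, 1.2], 1.0)` keeps the two large coefficients (shrunk by 1.0) and zeroes out the small one.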

  1. Computed Tomography to Estimate the Representative Elementary Area for Soil Porosity Measurements

    PubMed Central

    Borges, Jaqueline Aparecida Ribaski; Pires, Luiz Fernando; Belmont Pereira, André

    2012-01-01

Computed tomography (CT) is a technique that provides images of different solid and porous materials. CT could be an ideal tool to study representative sizes of soil samples because of the noninvasive characteristic of this technique. The scrutiny of such representative elementary sizes (RESs) has attracted the attention of many researchers in the soil physics field owing to the strong relationship between physical properties and the size of the soil sample. In the current work, data from gamma-ray CT were used to assess RES in measurements of soil porosity (ϕ). For statistical analysis, a study of the full width at half maximum (FWHM) of the fitted distribution of ϕ over different areas (1.2 to 1162.8 mm²) selected inside the tomographic images was proposed herein. The results indicate that samples with a section area of at least 882.1 mm² provided representative values of ϕ for the studied Brazilian tropical soil. PMID:22666133

  2. Nutrition labeling and value size pricing at fast-food restaurants: a consumer perspective.

    PubMed

    O'Dougherty, Maureen; Harnack, Lisa J; French, Simone A; Story, Mary; Oakes, J Michael; Jeffery, Robert W

    2006-01-01

This pilot study examined nutrition-related attitudes that may affect food choices at fast-food restaurants, including consumer attitudes toward nutrition labeling of fast foods and elimination of value size pricing. A convenience sample of 79 fast-food restaurant patrons aged 16 and above (78.5% white, 55% female, mean age 41.2 [17.1]) selected meals from fast-food restaurant menus that varied in whether nutrition information was provided and whether value size pricing was included, and completed a survey and interview on nutrition-related attitudes. Only 57.9% of participants rated nutrition as important when buying fast food. Almost two thirds (62%) supported a law requiring nutrition labeling on restaurant menus. One third (34%) supported a law requiring restaurants to offer lower prices on smaller instead of bigger-sized portions. This convenience sample of fast-food patrons supported nutrition labels on menus. More research is needed with larger samples on whether point-of-purchase nutrition labeling at fast-food restaurants raises the perceived importance of nutrition when eating out.

  3. Characterization of fish assemblages and population structure of freshwater fish in two Tunisian reservoirs: implications for fishery management.

    PubMed

    Mili, Sami; Ennouri, Rym; Dhib, Amel; Laouar, Houcine; Missaoui, Hechmi; Aleya, Lotfi

    2016-06-01

To monitor and assess the state of Tunisian freshwater fisheries, two surveys were undertaken at Ghezala and Lahjar reservoirs. Samples were taken in April and May 2013, a period when fish catchability is high. The selected reservoirs have different surface areas and bathymetries. Using multi-mesh gill nets (EN 14575 amended) designed for sampling fish in lakes, standard fishing methods were applied to estimate species composition, abundance, biomass, and size distribution. Four species were caught in the two reservoirs: barbel, mullet, pike-perch, and roach. Fish abundance showed significant change according to sampling sites, depth strata, and the different mesh sizes used. From the reservoir to the tributary, fish biomass was governed by depth and was most abundant in the upper water layers. Species size distribution differed significantly between the two reservoirs, exceeding the length at first maturity. Species composition and abundance were greater in Lahjar reservoir than in Ghezala. Both reservoirs require support actions to improve fish productivity.

  4. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    USGS Publications Warehouse

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
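A simplified version of the kind of simulation described above can be sketched as follows: a spatially clustered population is placed on a grid, simple random quadrat samples are drawn, and the precision (CV) of the density estimate and the probability of detecting the species are estimated. All parameters are illustrative assumptions, and only a non-adaptive single-stage design is shown, not the adaptive and two-stage designs evaluated in the paper.

```python
import random

def simulate_quadrat_survey(grid=50, n_parents=5, per_parent=40, spread=2,
                            n_quadrats=100, n_reps=500, seed=2):
    """Monte Carlo sketch: place a clustered population on a grid of
    quadrats, draw simple random samples of quadrats, and report the CV
    of the density estimate and the species-detection probability."""
    rng = random.Random(seed)
    # Build one clustered population: parent points with offspring
    # scattered around them (counts per grid cell).
    counts = [[0] * grid for _ in range(grid)]
    for _ in range(n_parents):
        px, py = rng.randrange(grid), rng.randrange(grid)
        for _ in range(per_parent):
            x = min(grid - 1, max(0, px + round(rng.gauss(0, spread))))
            y = min(grid - 1, max(0, py + round(rng.gauss(0, spread))))
            counts[x][y] += 1
    cells = [counts[i][j] for i in range(grid) for j in range(grid)]
    ests, detections = [], 0
    for _ in range(n_reps):
        sample = rng.sample(cells, n_quadrats)   # quadrats without replacement
        ests.append(sum(sample) / n_quadrats)    # mean count per quadrat
        if any(sample):
            detections += 1
    mean = sum(ests) / n_reps
    sd = (sum((e - mean) ** 2 for e in ests) / (n_reps - 1)) ** 0.5
    return {"cv": sd / mean, "p_detect": detections / n_reps}
```

As in the study, both CV and detection probability in this sketch are driven largely by density, degree of clustering, and sample size.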

  5. Fish assemblages

    USGS Publications Warehouse

    McGarvey, Daniel J.; Falke, Jeffrey A.; Li, Hiram W.; Li, Judith; Hauer, F. Richard; Lamberti, G.A.

    2017-01-01

    Methods to sample fishes in stream ecosystems and to analyze the raw data, focusing primarily on assemblage-level (all fish species combined) analyses, are presented in this chapter. We begin with guidance on sample site selection, permitting for fish collection, and information-gathering steps to be completed prior to conducting fieldwork. Basic sampling methods (visual surveying, electrofishing, and seining) are presented with specific instructions for estimating population sizes via visual, capture-recapture, and depletion surveys, in addition to new guidance on environmental DNA (eDNA) methods. Steps to process fish specimens in the field including the use of anesthesia and preservation of whole specimens or tissue samples (for genetic or stable isotope analysis) are also presented. Data analysis methods include characterization of size-structure within populations, estimation of species richness and diversity, and application of fish functional traits. We conclude with three advanced topics in assemblage-level analysis: multidimensional scaling (MDS), ecological networks, and loop analysis.

  6. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  8. A predictive approach to selecting the size of a clinical trial, based on subjective clinical opinion.

    PubMed

    Spiegelhalter, D J; Freedman, L S

    1986-01-01

    The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
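The predictive calculation can be sketched by Monte Carlo: draw the true treatment difference from the prior, simulate the end-of-trial estimate, and classify the resulting decision. This is a hedged sketch assuming normally distributed outcomes with known variance; the decision labels and parameter values are illustrative, not taken from the paper.

```python
import random

def predicted_decision_probs(prior_mean, prior_sd, sigma, n_per_arm,
                             equivalence_point=0.0, z=1.96,
                             n_sims=20000, seed=0):
    """Sketch of the predictive approach: draw the true treatment
    difference from the prior, simulate the trial estimate, and classify
    the decision by whether the interval estimate excludes the point of
    clinical equivalence."""
    rng = random.Random(seed)
    se = sigma * (2.0 / n_per_arm) ** 0.5  # SE of a difference in means
    counts = {"use_new": 0, "use_standard": 0, "reserve_judgement": 0}
    for _ in range(n_sims):
        delta = rng.gauss(prior_mean, prior_sd)  # draw from the prior
        est = rng.gauss(delta, se)               # simulated trial estimate
        lo, hi = est - z * se, est + z * se
        if lo > equivalence_point:
            counts["use_new"] += 1
        elif hi < equivalence_point:
            counts["use_standard"] += 1
        else:
            counts["reserve_judgement"] += 1
    return {k: v / n_sims for k, v in counts.items()}
```

In the spirit of the paper's recommendation, the sample size would be chosen as the smallest `n_per_arm` for which the predicted probability of reserving final judgement falls below a chosen threshold.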

  9. EFFECT OF ENVIRONMENT ON GALAXIES' MASS-SIZE DISTRIBUTION: UNVEILING THE TRANSITION FROM OUTSIDE-IN TO INSIDE-OUT EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellari, Michele

    2013-11-20

The distribution of galaxies on the mass-size plane as a function of redshift or environment is a powerful test for galaxy formation models. Here we use integral-field stellar kinematics to interpret the variation of the mass-size distribution in two galaxy samples spanning extreme environmental densities. The samples are both identically and nearly mass-selected (stellar mass M* ≳ 6 × 10⁹ M☉) and volume-limited. The first consists of nearby field galaxies from the ATLAS³ᴰ parent sample. The second consists of galaxies in the Coma Cluster (Abell 1656), one of the densest environments for which good, resolved spectroscopy can be obtained. The mass-size distribution in the dense environment differs from the field one in two ways: (1) spiral galaxies are replaced by bulge-dominated disk-like fast-rotator early-type galaxies (ETGs), which follow the same mass-size relation and have the same mass distribution as in the field sample; (2) the slow-rotator ETGs are segregated in mass from the fast rotators, with their size increasing proportionally to their mass. A transition between the two processes appears around the stellar mass Mcrit ≈ 2 × 10¹¹ M☉. We interpret this as evidence for bulge growth (outside-in evolution) and bulge-related environmental quenching dominating at low masses, with little influence from merging. In contrast, significant dry mergers (inside-out evolution) and halo-related quenching drive the mass and size growth at the high-mass end. The existence of these two processes naturally explains the diverse size evolution of galaxies of different masses and the separability of mass and environmental quenching.

  10. QUANTIFYING HAZARDOUS SPECIES IN PARTICULATE MATTER DERIVED FROM FOSSIL-FUEL COMBUSTION

    EPA Science Inventory

    An analysis protocol that combines X-ray absorption near-edge structure spectroscopy with selective leaching has been developed to examine hazardous species in size- segregated particulate matter (PM) samples derived from the combustion of fossil fuels. The protocol has been used...

  11. Modeling end-use quality in U. S. soft wheat germplasm

    USDA-ARS?s Scientific Manuscript database

    End-use quality in soft wheat (Triticum aestivum L.) can be assessed by a wide array of measurements, generally categorized into grain, milling, and baking characteristics. Samples were obtained from four regional nurseries. Selected parameters included: test weight, kernel hardness, kernel size, ke...

  12. Integrative Analysis of Cancer Diagnosis Studies with Composite Penalization

    PubMed Central

    Liu, Jin; Huang, Jian; Ma, Shuangge

    2013-01-01

    Summary In cancer diagnosis studies, high-throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the “large d, small n” feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single datasets can be unsatisfactory. A cost-effective remedy is to conduct integrative analysis of multiple heterogeneous datasets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty. Under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer datasets, show satisfactory performance of the proposed methods. PMID:24578589

  13. A discrimination index for selecting markers of tumor growth dynamic across multiple cancer studies with a cure fraction.

    PubMed

    Rouam, Sigrid; Broët, Philippe

    2013-08-01

To identify genomic markers with a consistent effect on tumor dynamics across multiple cancer series, discrimination indices based on proportional hazards models can be used since they do not depend heavily on the sample size. However, the underlying assumption of proportionality of the hazards does not always hold, especially when the studied population is a mixture of cured and uncured patients, as in early-stage cancers. We propose a novel index that quantifies the capability of a genomic marker to separate uncured patients according to their time-to-event outcomes. It makes it possible to identify genomic markers characterizing tumor growth dynamics across multiple studies. Simulation results show that our index performs better than classical indices based on the Cox model. It is affected by neither the sample size nor the cure fraction. In a cross-study of early-stage breast cancers, the index allows the selection of genomic markers with a potentially consistent effect on tumor growth dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Prevalence of HIV among Aboriginal and Torres Strait Islander Australians: a systematic review and meta-analysis.

    PubMed

    Graham, Simon; O'Connor, Catherine C; Morgan, Stephen; Chamberlain, Catherine; Hocking, Jane

    2017-06-01

Background Aboriginal and Torres Strait Islanders (Aboriginal) are Australia's first peoples. Between 2006 and 2015, HIV notifications increased among Aboriginal people; however, among non-Aboriginal people, notifications remained relatively stable. This systematic review and meta-analysis aims to examine the prevalence of HIV among Aboriginal people overall and by subgroups. In November 2015, a search of PubMed and Web of Science, grey literature and abstracts from conferences was conducted. A study was included if it reported the number of Aboriginal people tested and those who tested positive for HIV. The following variables were extracted: gender; Aboriginal status; population group (men who have sex with men, people who inject drugs, adults, youth in detention and pregnant females) and geographical location. An assessment of between-study heterogeneity (I² test) and within-study bias (selection, measurement and sample size) was also conducted. Seven studies were included; all were cross-sectional study designs. The overall sample size was 3772 and the prevalence of HIV was 0.1% (I² = 38.3%, P = 0.136). Five studies included convenience samples of people attending Australian Needle and Syringe Program Centres, clinics, hospitals and a youth detention centre, increasing the potential for selection bias. Four studies had a small sample size, thus decreasing the ability to report pooled estimates. The prevalence of HIV among Aboriginal people in Australia is low. Community-based programs that include both prevention messages for those at risk of infection and culturally appropriate clinical management and support for Aboriginal people living with HIV are needed to prevent HIV increasing among Aboriginal people.

  15. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the required sample size to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g., a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to the sample sizes required, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
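The power simulation described above can be sketched as follows, assuming lognormal measurement error and a normal approximation for the test on the log-linear trend slope; the noise level (`sd_log`) and other parameter values are illustrative assumptions, not values from the study.

```python
import math
import random

def trend_detected(n_per_year, years, slope, sd_log, rng, z=1.96):
    """One simulated monitoring series: regress log-concentration on year
    and test the slope (normal approximation to the t-test)."""
    xs, ys = [], []
    for t in range(years):
        for _ in range(n_per_year):
            xs.append(float(t))
            ys.append(slope * t + rng.gauss(0.0, sd_log))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    return abs(b) / math.sqrt(s2 / sxx) > z

def power(n_per_year, years=10, annual_change=0.05, sd_log=0.3,
          n_sims=400, seed=3):
    """Estimated probability of detecting an `annual_change` decline
    with `n_per_year` individual analyses per year."""
    rng = random.Random(seed)
    slope = math.log(1.0 - annual_change)  # 5%/yr decline on the log scale
    hits = sum(trend_detected(n_per_year, years, slope, sd_log, rng)
               for _ in range(n_sims))
    return hits / n_sims
```

The required sample size is then the smallest `n_per_year` whose estimated power reaches the target (80% or 90%); comparing strategies amounts to rerunning this with site-to-site variance components added.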

  16. Estimating the breeding population of long-billed curlew in the United States

    USGS Publications Warehouse

    Stanley, T.R.; Skagen, S.K.

    2007-01-01

    Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.

  17. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

In many community-based surveys, multi-level sampling is inherent in the design. In designing these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random-effects analysis of variance model. Estimates of ICCs and variance components were obtained at the village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
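The clustering adjustment motivating these ICC estimates follows the standard design-effect formula, DEFF = 1 + (m - 1) × ICC. A minimal sketch (the example values below, 16 units per cluster and an ICC of 0.07, echo the study's setting but are purely illustrative):

```python
def design_effect(icc, cluster_size):
    """Variance inflation from sampling `cluster_size` units per cluster
    instead of independently: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def inflated_sample_size(n_srs, icc, cluster_size):
    """Sample size needed under clustering to match the precision of a
    simple random sample of size n_srs (returned as a float; round up
    in practice)."""
    return n_srs * design_effect(icc, cluster_size)
```

For example, with 16 households per village and an ICC of 0.07, DEFF is 2.05, so a clustered survey needs roughly twice the simple-random-sample size; with ICC = 0 there is no inflation.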

  18. Water-quality and biological data for selected streams, lakes, and wells in the High Point Lake watershed, Guilford County, North Carolina, 1988-89

    USGS Publications Warehouse

    Davenport, M.S.

    1993-01-01

    Water and bottom-sediment samples were collected at 26 sites in the 65-square-mile High Point Lake watershed area of Guilford County, North Carolina, from December 1988 through December 1989. Sampling locations included 10 stream sites, 8 lake sites, and 8 ground-water sites. Generally, six steady-flow samples were collected at each stream site and three storm samples were collected at five sites. Four lake samples and eight ground-water samples also were collected. Chemical analyses of stream and lake sediments and particle-size analyses of lake sediments were performed once during the study. Most stream and lake samples were analyzed for field characteristics, nutrients, major ions, trace elements, total organic carbon, and chemical-oxygen demand. Analyses were performed to detect concentrations of 149 selected organic compounds, including acid and base/neutral extractable and volatile constituents and carbamate, chlorophenoxy acid, triazine, organochlorine, and organophosphorus pesticides and herbicides. Selected lake samples were analyzed for all constituents listed in the Safe Drinking Water Act of 1986, including Giardia, Legionella, radiochemicals, asbestos, and viruses. Various chromatograms from organic analyses were submitted to computerized library searches. The results of these and all other analyses presented in this report are in tabular form.

  19. Radar Measurements of Small Debris from HUSIR and HAX

    NASA Technical Reports Server (NTRS)

    Hamilton, J.; Blackwell, C.; McSheehy, R.; Juarez, Q.; Anz-Meador, P.

    2017-01-01

    For many years, the NASA Orbital Debris Program Office has been collecting measurements of the orbital debris environment from the Haystack Ultra-wideband Satellite Imaging Radar (HUSIR) and its auxiliary (HAX). These measurements sample the small debris population in low earth orbit (LEO). This paper will provide an overview of recent observations and highlight trends in selected debris populations. Using the NASA size estimation model, objects with a characteristic size of 1 cm and larger observed from HUSIR will be presented. Also, objects with a characteristic size of 2 cm and larger observed from HAX will be presented.

  20. The AzTEC/SMA Interferometric Imaging Survey of Submillimeter-selected High-redshift Galaxies

    NASA Astrophysics Data System (ADS)

    Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Hughes, David H.; Aretxaga, Itziar; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.

    2009-10-01

    We present results from a continuing interferometric survey of high-redshift submillimeter galaxies (SMGs) with the Submillimeter Array, including high-resolution (beam size ~2 arcsec) imaging of eight additional AzTEC 1.1 mm selected sources in the COSMOS field, for which we obtain six reliable (peak signal-to-noise ratio (S/N) >5 or peak S/N >4 with multiwavelength counterparts within the beam) and two moderate significance (peak S/N >4) detections. When combined with previous detections, this yields an unbiased sample of millimeter-selected SMGs with complete interferometric follow up. With this sample in hand, we (1) empirically confirm the radio-submillimeter association, (2) examine the submillimeter morphology—including the nature of SMGs with multiple radio counterparts and constraints on the physical scale of the far infrared—of the sample, and (3) find additional evidence for a population of extremely luminous, radio-dim SMGs that peaks at higher redshift than previous, radio-selected samples. In particular, the presence of such a population of high-redshift sources has important consequences for models of galaxy formation—which struggle to account for such objects even under liberal assumptions—and dust production models given the limited time since the big bang.

  1. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrectly locating points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that either had not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. 
The selection of sample point locations was done using ESRI software (ArcGIS), extended first by Hawth's Tools and later by its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached an explained variance level of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of considering readily available continuous information on soil forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
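The two-stage selection described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the polygon records, stratum labels, and the bounding-box point draw (a placeholder for a proper point-in-polygon sampler) are all invented for the example and are not the study's actual data or GIS workflow.

```python
import random

def select_polygons(polygons, per_stratum=6, min_area_ha=1.0, seed=42):
    """Stage 1: randomly pick up to `per_stratum` polygons per stratum,
    excluding polygons smaller than `min_area_ha` (the 1 ha rule)."""
    rng = random.Random(seed)
    by_stratum = {}
    for p in polygons:
        if p["area_ha"] >= min_area_ha:
            by_stratum.setdefault(p["stratum"], []).append(p)
    chosen = []
    for stratum in sorted(by_stratum):
        members = by_stratum[stratum]
        chosen.extend(rng.sample(members, min(per_stratum, len(members))))
    return chosen

def sample_point(polygon, rng):
    """Stage 2: uniform point in the polygon's bounding box (a stand-in
    for a proper point-in-polygon rejection sampler)."""
    xmin, ymin, xmax, ymax = polygon["bbox"]
    return (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))

# Hypothetical strata: geology class x wetness/radiation quantile label.
polys = [
    {"stratum": "G1-wet", "area_ha": 2.5, "bbox": (0, 0, 100, 100)},
    {"stratum": "G1-wet", "area_ha": 0.4, "bbox": (0, 0, 10, 10)},   # < 1 ha, excluded
    {"stratum": "G2-dry", "area_ha": 5.0, "bbox": (200, 0, 400, 150)},
]
rng = random.Random(0)
sites = [sample_point(p, rng) for p in select_polygons(polys)]
print(len(sites))  # 2: one polygon per stratum survives the area filter
```

Repeating stage 1 with different seeds and keeping the most geographically spread set mirrors the paper's ten-repetition procedure.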

  2. The Role of Body Size in Mate Selection among African American Young Adults

    PubMed Central

    Simons, Leslie G.; Simons, Ronald L.

    2016-01-01

    A profusion of studies have demonstrated that body size is a major factor in mate selection for both men and women. The particular role played by weight, however, has been subject to some debate, particularly with respect to the types of body sizes deemed most attractive, and scholars have questioned the degree to which body size preferences are constant across groups. In this paper, we drew from two perspectives on this issue, Sexual Strategies Theory and what we termed the cultural variability perspective, and used survey data to examine how body size was associated with both casual dating and serious romantic relationships. We used a United States sample of 386 African American adolescents and young adults between ages 16 and 21, living in the Midwest and Southeast, and who were enrolled in either high school or college. Results showed that overweight women were more likely to report casually dating than women in the thinnest weight category. Body size was not related to dating status among men. Among women, the results suggest stronger support for the cultural variability argument than for Sexual Strategies Theory. Potential explanations for these findings are discussed. PMID:26973377

  3. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, keq of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine keq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant keq REV with increasing sample size, keq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for keq anisotropy is larger than that for keq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of keq with sample size as postulated in some earlier studies, our results highlight a strong keq anisotropy that influences scale dependence.

  4. Improving the quality of biomarker discovery research: the right samples and enough of them.

    PubMed

    Pepe, Margaret S; Li, Christopher I; Feng, Ziding

    2015-06-01

    Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes that enable markers meeting prespecified performance characteristics for well-defined clinical applications to be identified. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
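The expected-counts logic behind False Leads Expected and Discovery Power reduces to simple arithmetic. A back-of-envelope sketch, in which the split of the 9,000 candidates into truly useful and useless markers (here, 10 useful) and the per-marker rates are hypothetical illustrations, not the paper's calculation:

```python
def expected_leads(n_candidates, n_useful, alpha, power):
    """Expected false leads (useless markers passing the filter) and
    expected useful markers identified, given a per-marker pass rate
    `alpha` for useless markers and `power` for useful ones."""
    n_useless = n_candidates - n_useful
    return n_useless * alpha, n_useful * power

# Hypothetical split: of 9,000 candidates, suppose 10 are truly useful,
# with the filter passing 2% of useless markers at 95% discovery power.
fle, found = expected_leads(9000, 10, alpha=0.02, power=0.95)
print(f"expected false leads: {fle:.1f}, useful markers found: {found:.1f}")
```

The sketch makes the trade-off concrete: with thousands of candidates, even a 2% pass rate for useless markers yields many false leads, so downstream validation capacity should inform the tolerated FLE.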

  5. Estimation of the Human Extrathoracic Deposition Fraction of Inhaled Particles Using a Polyurethane Foam Collection Substrate in an IOM Sampler.

    PubMed

    Sleeth, Darrah K; Balthaser, Susan A; Collingwood, Scott; Larson, Rodney R

    2016-03-07

    Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET₁) and the posterior nasal and oral passages (ET₂). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm-44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device.

  6. Estimation of the Human Extrathoracic Deposition Fraction of Inhaled Particles Using a Polyurethane Foam Collection Substrate in an IOM Sampler

    PubMed Central

    Sleeth, Darrah K.; Balthaser, Susan A.; Collingwood, Scott; Larson, Rodney R.

    2016-01-01

    Extrathoracic deposition of inhaled particles (i.e., in the head and throat) is an important exposure route for many hazardous materials. Current best practices for exposure assessment of aerosols in the workplace involve particle size selective sampling methods based on particle penetration into the human respiratory tract (i.e., inhalable or respirable sampling). However, the International Organization for Standardization (ISO) has recently adopted particle deposition sampling conventions (ISO 13138), including conventions for extrathoracic (ET) deposition into the anterior nasal passage (ET1) and the posterior nasal and oral passages (ET2). For this study, polyurethane foam was used as a collection substrate inside an inhalable aerosol sampler to provide an estimate of extrathoracic particle deposition. Aerosols of fused aluminum oxide (five sizes, 4.9 µm–44.3 µm) were used as a test dust in a low speed (0.2 m/s) wind tunnel. Samplers were placed on a rotating mannequin inside the wind tunnel to simulate orientation-averaged personal sampling. Collection efficiency data for the foam insert matched well to the extrathoracic deposition convention for the particle sizes tested. The concept of using a foam insert to match a particle deposition sampling convention was explored in this study and shows promise for future use as a sampling device. PMID:26959046

  7. Practical guidance on characterizing availability in resource selection functions under a use-availability design

    USGS Publications Warehouse

    Northrup, Joseph M.; Hooten, Mevin B.; Anderson, Charles R.; Wittemyer, George

    2013-01-01

    Habitat selection is a fundamental aspect of animal ecology, the understanding of which is critical to management and conservation. Global positioning system data from animals allow fine-scale assessments of habitat selection and typically are analyzed in a use-availability framework, whereby animal locations are contrasted with random locations (the availability sample). Although most use-availability methods are in fact spatial point process models, they often are fit using logistic regression. This framework offers numerous methodological challenges, for which the literature provides little guidance. Specifically, the size and spatial extent of the availability sample influences coefficient estimates potentially causing interpretational bias. We examined the influence of availability on statistical inference through simulations and analysis of serially correlated mule deer GPS data. Bias in estimates arose from incorrectly assessing and sampling the spatial extent of availability. Spatial autocorrelation in covariates, which is common for landscape characteristics, exacerbated the error in availability sampling leading to increased bias. These results have strong implications for habitat selection analyses using GPS data, which are increasingly prevalent in the literature. We recommend researchers assess the sensitivity of their results to their availability sample and, where bias is likely, take care with interpretations and use cross validation to assess robustness.
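How a mis-specified availability extent biases inference can be shown with a Manly-type selection ratio (proportional use over proportional availability). The counts below are made up for illustration; this is not the authors' simulation, just the core arithmetic of the bias they describe:

```python
def selection_ratio(used_in_habitat, used_total, avail_in_habitat, avail_total):
    """Manly-type selection ratio: proportional use / proportional availability."""
    return (used_in_habitat / used_total) / (avail_in_habitat / avail_total)

# Hypothetical counts: 60 of 100 animal locations fall in one habitat type.
correct = selection_ratio(60, 100, 20, 100)  # habitat is 20% of the true extent
too_big = selection_ratio(60, 100, 5, 100)   # over-broad extent makes it look rarer
print(f"{correct:.1f} vs {too_big:.1f}")     # apparent selection inflated 4-fold
```

Sampling availability over too large (or too small) an extent changes the denominator, not the animal's behaviour, which is why sensitivity checks on the availability sample are recommended.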

  8. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation.

    PubMed

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-15

    We report on the first results to our knowledge obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ∼550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection. © 2011 Optical Society of America

  9. A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.

    PubMed

    Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês

    2015-12-01

    Diseases co-occur in individuals more often than expected by chance, which may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
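The "BFs may be approximated using summary statistics" point can be illustrated with a Wakefield-style approximate Bayes factor. This sketch is not the paper's exact method; it assumes the effect estimate is asymptotically normal with variance V = SE², and the N(0, W) prior standard deviation (0.2 here) is an arbitrary choice:

```python
import math

def approx_bayes_factor(beta_hat, se, prior_sd=0.2):
    """Wakefield-style approximate Bayes factor in favour of association,
    from summary statistics alone: beta_hat ~ N(beta, V) with V = se^2,
    and a N(0, W) prior on beta with W = prior_sd^2."""
    V, W = se ** 2, prior_sd ** 2
    z2 = (beta_hat / se) ** 2
    return math.sqrt(V / (V + W)) * math.exp(z2 * W / (2 * (V + W)))

# Same z-score (hence same P-value), but different study sizes:
big_study   = approx_bayes_factor(beta_hat=0.2, se=0.05)  # z = 4, small SE
small_study = approx_bayes_factor(beta_hat=0.8, se=0.20)  # z = 4, large SE
print(f"BF (large study) {big_study:.0f} vs BF (small study) {small_study:.0f}")
```

Two studies with identical z-scores (identical P-values) yield different BFs, with the larger study giving stronger evidence: precisely the power-sensitivity that makes BFs attractive for cross-study overlap analyses.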

  10. Parallel particle impactor - novel size-selective particle sampler for accurate fractioning of inhalable particles

    NASA Astrophysics Data System (ADS)

    Trakumas, S.; Salter, E.

    2009-02-01

    Adverse health effects due to exposure to airborne particles are associated with particle deposition within the human respiratory tract. Particle size, shape, chemical composition, and the individual physiological characteristics of each person determine to what depth inhaled particles may penetrate and deposit within the respiratory tract. Various particle inertial classification devices are available to fractionate airborne particles according to their aerodynamic size to approximate particle penetration through the human respiratory tract. Cyclones are most often used to sample thoracic or respirable fractions of inhaled particles. Extensive studies of different cyclonic samplers have shown, however, that the sampling characteristics of cyclones do not follow the entire selected convention accurately. In the search for a more accurate way to assess worker exposure to different fractions of inhaled dust, a novel sampler comprising several inertial impactors arranged in parallel was designed and tested. The new design includes a number of separated impactors arranged in parallel. Prototypes of respirable and thoracic samplers each comprising four impactors arranged in parallel were manufactured and tested. Results indicated that the prototype samplers followed closely the penetration characteristics for which they were designed. The new samplers were found to perform similarly for liquid and solid test particles; penetration characteristics remained unchanged even after prolonged exposure to coal mine dust at high concentration. The new parallel impactor design can be applied to approximate any monotonically decreasing penetration curve at a selected flow rate. Personal-size samplers that operate at a few L/min as well as area samplers that operate at higher flow rates can be made based on the suggested design. Performance of such samplers can be predicted with high accuracy employing well-established impaction theory.

  11. Patterns of MHC-dependent mate selection in humans and nonhuman primates: a meta-analysis.

    PubMed

    Winternitz, J; Abbate, J L; Huchard, E; Havlíček, J; Garamszegi, L Z

    2017-01-01

    Genes of the major histocompatibility complex (MHC) in vertebrates are integral for effective adaptive immune response and are associated with sexual selection. Evidence from a range of vertebrates supports MHC-based preference for diverse and dissimilar mating partners, but evidence from human mate choice studies has been disparate and controversial. Methodologies and sampling peculiarities specific to human studies make it difficult to know whether wide discrepancies in results among human populations are real or artefact. To better understand what processes may affect MHC-mediated mate choice across humans and nonhuman primates, we performed phylogenetically controlled meta-analyses using 58 effect sizes from 30 studies across seven primate species. Primates showed a general trend favouring more MHC-diverse mates, which was statistically significant for humans. In contrast, there was no tendency for MHC-dissimilar mate choice, and for humans, we observed effect sizes indicating selection of both MHC-dissimilar and MHC-similar mates. Focusing on MHC-similar effect sizes only, we found evidence that preference for MHC similarity was an artefact of population ethnic heterogeneity in observational studies but not among experimental studies with more control over sociocultural biases. This suggests that human assortative mating biases may be responsible for some patterns of MHC-based mate choice. Additionally, the overall effect sizes of primate MHC-based mating preferences are relatively weak (Fisher's Z correlation coefficient for dissimilarity Zr = 0.044, diversity Zr = 0.153), calling for careful sampling design in future studies. Overall, our results indicate that preference for more MHC-diverse mates is significant for humans and likely conserved across primates. © 2016 John Wiley & Sons Ltd.
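The Fisher's Z effect sizes quoted above (Zr = 0.044 and 0.153) can be put on a familiar scale with the standard transform and its approximate standard error. A small sketch; the sample size of 50 is hypothetical, chosen only to show how wide the interval is for a weak effect:

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilising transform of a correlation coefficient."""
    return math.atanh(r)

def fisher_z_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for r via the Fisher z scale; SE(z) = 1/sqrt(n - 3)."""
    z, se = math.atanh(r), 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_z_ci(0.153, n=50)  # diversity effect size from the abstract
print(f"r = 0.153, 95% CI [{lo:.3f}, {hi:.3f}]")
```

For an effect this small, the interval comfortably includes zero at n = 50, which is the quantitative content of the authors' call for careful sampling design.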

  12. Alteration of histological gastritis after cure of Helicobacter pylori infection.

    PubMed

    Hojo, M; Miwa, H; Ohkusa, T; Ohkura, R; Kurosawa, A; Sato, N

    2002-11-01

    It is still disputed whether gastric atrophy or intestinal metaplasia improves after the cure of Helicobacter pylori infection. Our aim was to clarify the histological changes after the cure of H. pylori infection through a literature survey. Fifty-one selected reports from 1066 relevant articles were reviewed. The extracted data were pooled according to histological parameters of gastritis based on the (updated) Sydney system. Activity improved more rapidly than inflammation. Eleven of 25 reports described significant improvement of atrophy. Atrophy was not improved in one of four studies with a large sample size (> 100 samples) and in two of five studies with a long follow-up period (> 12 months), suggesting that disagreement between the studies was not totally due to sample size or follow-up period. Methodological flaws, such as patient selection, and statistical analysis based on the assumption that atrophy improves continuously and generally in all patients might be responsible for the inconsistent results. Four of 28 studies described significant improvement of intestinal metaplasia [corrected]. Activity and inflammation were improved after the cure of H. pylori infection. Atrophy did not improve generally among all patients, but improved in certain patients. Improvement of intestinal metaplasia was difficult to analyse due to methodological problems including statistical power.

  13. Hindlimb muscle architecture in non-human great apes and a comparison of methods for analysing inter-species variation

    PubMed Central

    Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S

    2011-01-01

    By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000

  14. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives: To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting: We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results: For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion: Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
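The "required cluster size for a given number of clusters" idea can be sketched for the simplest case, the parallel CRT with equal clusters and design effect 1 + (m − 1) × ICC. This is a generic textbook rearrangement, not the paper's SW-CRT formulas, and the target of 300 participants is hypothetical:

```python
import math

def cluster_size_for_k_clusters(n_srs, k, icc):
    """Cluster size m such that k equal clusters match the power of an
    individually randomised trial of size n_srs, via the parallel-CRT
    design effect: solves k * m = n_srs * (1 + (m - 1) * icc).
    Infeasible when k <= n_srs * icc (clustering can never be overcome)."""
    denom = k - n_srs * icc
    if denom <= 0:
        raise ValueError("infeasible: need k > n_srs * icc clusters")
    m = n_srs * (1 - icc) / denom
    return math.ceil(round(m, 6))  # round() absorbs floating-point noise

# Hypothetical target: 300 participants under individual randomisation.
print(cluster_size_for_k_clusters(n_srs=300, k=20, icc=0.05))  # 57 per cluster
```

The infeasibility branch captures a real planning pitfall: below a minimum number of clusters, no cluster size, however large, recovers the required power.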

  15. Why it is hard to find genes associated with social science traits: theoretical and empirical considerations.

    PubMed

    Chabris, Christopher F; Lee, James J; Benjamin, Daniel J; Beauchamp, Jonathan P; Glaeser, Edward L; Borst, Gregoire; Pinker, Steven; Laibson, David I

    2013-10-01

    We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher's geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies.

  16. Single-Nucleotide-Polymorphism-Based Association Mapping of Dog Stereotypes

    PubMed Central

    Jones, Paul; Chase, Kevin; Martin, Alan; Davern, Pluis; Ostrander, Elaine A.; Lark, Karl G.

    2008-01-01

    Phenotypic stereotypes are traits, often polygenic, that have been stringently selected to conform to specific criteria. In dogs, Canis familiaris, stereotypes result from breed standards set for conformation, performance (behaviors), etc. As a consequence, phenotypic values measured on a few individuals are representative of the breed stereotype. We used DNA samples isolated from 148 dog breeds to associate SNP markers with breed stereotypes. Using size as a trait to test the method, we identified six significant quantitative trait loci (QTL) on five chromosomes that include candidate genes appropriate to regulation of size (e.g., IGF1, IGF2BP2, SMAD2). Analysis of other morphological stereotypes, also under extreme selection, identified many additional significant loci. Less well-documented data for behavioral stereotypes tentatively identified loci for herding, pointing, boldness, and trainability. Four significant loci were identified for longevity, a breed characteristic not under direct selection, but inversely correlated with breed size. The strengths and limitations of the approach are discussed as well as its potential to identify loci regulating the within-breed incidence of specific polygenic diseases. PMID:18505865

  17. Assessing readability formula differences with written health information materials: application, results, and recommendations.

    PubMed

    Wang, Lih-Wern; Miller, Michael J; Schmitt, Michael R; Wen, Frances K

    2013-01-01

    Readability formulas are often used to guide the development and evaluation of literacy-sensitive written health information. However, readability formula results may vary considerably as a result of differences in software processing algorithms and how each formula is applied. These variations complicate interpretations of reading grade level estimates, particularly without a uniform guideline for applying and interpreting readability formulas. This research sought to (1) identify commonly used readability formulas reported in the health care literature, (2) demonstrate the use of the most commonly used readability formulas on written health information, (3) compare and contrast the differences when applying common readability formulas to identical selections of written health information, and (4) provide recommendations for choosing an appropriate readability formula for written health-related materials to optimize their use. A literature search was conducted to identify the most commonly used readability formulas in health care literature. Each of the identified formulas was subsequently applied to word samples from 15 unique examples of written health information about the topic of depression and its treatment. Readability estimates from common readability formulas were compared based on text sample size, selection, formatting, software type, and/or hand calculations. Recommendations for their use were provided. The Flesch-Kincaid formula was most commonly used (57.42%). Readability formulas demonstrated variability of up to 5 reading grade levels on the same text. The Simple Measure of Gobbledygook (SMOG) readability formula performed most consistently. Depending on the text sample size, selection, formatting, software, and/or hand calculations, an individual readability formula's estimates varied by up to 6 reading grade levels. 
The SMOG formula appears best suited for health care applications because of its consistency of results, higher level of expected comprehension, use of more recent validation criteria for determining reading grade level estimates, and simplicity of use. To improve interpretation of readability results, reporting reading grade level estimates from any formula should be accompanied with information about word sample size, location of word sampling in the text, formatting, and method of calculation. Copyright © 2013 Elsevier Inc. All rights reserved.
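The divergence between formulas is easy to reproduce from raw counts. The sketch below applies two of the formulas named above; the constants are the published ones (McLaughlin's SMOG and the Flesch-Kincaid grade level), but the word, sentence, polysyllable, and syllable counts are invented for illustration.

```python
import math

def smog_grade(polysyllables: int, sentences: int) -> float:
    """SMOG reading grade (McLaughlin, 1969): based on the count of 3+
    syllable words, scaled to a 30-sentence sample."""
    return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30.0 / sentences)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from word, sentence, and syllable counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical counts for one 30-sentence sample of a patient leaflet:
poly, sent = 42, 30
words, syll = 480, 720
print(round(smog_grade(poly, sent), 1))
print(round(flesch_kincaid_grade(words, sent, syll), 1))
```

Even on identical counts from the same passage, the two formulas disagree by more than a grade level, which is the behavior the study quantifies.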

  18. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    NASA Astrophysics Data System (ADS)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

A typical selective plane illumination microscopy (SPIM) image size is limited by the field of view, a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed; it quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of sample rotation that occurs during translational motion in the sample mount is also discussed.

  19. Size-selective sampling performance of six low-volume “total” suspended particulate (TSP) inlets

    EPA Science Inventory

    Several low-volume inlets (flow rates ≤ 16.7 liters per minute (Lpm)) are commercially available as components of low-cost, portable ambient particulate matter samplers. Because the inlets themselves do not contain internal fractionators, they are often assumed to representati...

  20. Measuring coverage in MNCH: design, implementation, and interpretation challenges associated with tracking vaccination coverage using household surveys.

    PubMed

    Cutts, Felicity T; Izurieta, Hector S; Rhoda, Dale A

    2013-01-01

    Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis.
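As the abstract notes, neither selection bias nor information bias can be reduced by a larger sample, but the precision of the coverage estimate can be budgeted in advance. Below is a minimal sketch of the standard sample-size calculation for an estimated proportion, inflated by a design effect for cluster sampling and by an expected response rate; all numeric inputs are illustrative assumptions, not values from the article.

```python
import math

def coverage_sample_size(p: float, margin: float, deff: float = 2.0,
                         z: float = 1.96, response: float = 0.9) -> int:
    """Children needed to estimate vaccination coverage p within +/- margin
    at 95% confidence, with a cluster-sampling design effect `deff` and an
    expected response rate. Illustrative inputs only."""
    n_srs = z ** 2 * p * (1.0 - p) / margin ** 2   # simple random sample
    return math.ceil(n_srs * deff / response)

# e.g. expected coverage 80%, +/-5 percentage points, deff 2.0, 90% response:
print(coverage_sample_size(0.80, 0.05))
```

A wider margin or smaller design effect shrinks the required sample quickly, which is why the sampling method matters as much as the raw count of households.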

  1. Novel Technology for Enrichment of Biomolecules from Cell-Free Body Fluids and Subsequent DNA Sizing.

    PubMed

    Patel, Vipulkumar; Celec, Peter; Grunt, Magdalena; Schwarzenbach, Heidi; Jenneckens, Ingo; Hillebrand, Timo

    2016-01-01

Circulating cell-free DNA (ccfDNA) is a promising diagnostic tool and its size fractionation is of interest. However, kits for isolation of ccfDNA available on the market are designed for small volumes; hence, processing large sample volumes is laborious. We have tested a new method that enables enrichment of ccfDNA from large volumes of plasma and subsequently allows size fractionation of the isolated ccfDNA into two fractions with individually established cut-off levels of ccfDNA length. This method allows isolation of low-abundance DNA as well as separation of long and short DNA molecules. The procedure may be important, e.g., in prenatal diagnostics and cancer research, as our preliminary experiments have already confirmed. Here, we report the results of selective separation of 200- and 500-bp synthetic DNA fragments spiked into plasma samples. Furthermore, we size-fractionated ccfDNA from the plasma of pregnant women and verified the prevalence of fetal ccfDNA in all fractions.

  2. Quality control considerations for size exclusion chromatography with online ICP-MS: a powerful tool for evaluating the size dependence of metal-organic matter complexation.

    PubMed

    McKenzie, Erica R; Young, Thomas M

    2013-01-01

    Size exclusion chromatography (SEC), which separates molecules based on molecular volume, can be coupled with online inductively coupled plasma mass spectrometry (ICP-MS) to explore size-dependent metal-natural organic matter (NOM) complexation. To make effective use of this analytical dual detector system, the operator should be mindful of quality control measures. Al, Cr, Fe, Se, and Sn all exhibited columnless attenuation, which indicated unintended interactions with system components. Based on signal-to-noise ratio and peak reproducibility between duplicate analyses of environmental samples, consistent peak time and height were observed for Mg, Cl, Mn, Cu, Br, and Pb. Al, V, Fe, Co, Ni, Zn, Se, Cd, Sn, and Sb were less consistent overall, but produced consistent measurements in select samples. Ultrafiltering and centrifuging produced similar peak distributions, but glass fiber filtration produced more high molecular weight (MW) peaks. Storage in glass also produced more high MW peaks than did plastic bottles.

  3. Particle Size Distribution of Heavy Metals and Magnetic Susceptibility in an Industrial Site.

    PubMed

    Ayoubi, Shamsollah; Soltani, Zeynab; Khademi, Hossein

    2018-05-01

This study was conducted to explore the relationships between magnetic susceptibility and the concentrations of selected heavy metals across particle sizes at an industrial site in central Iran. Soils were partitioned into five fractions (< 28, 28-75, 75-150, 150-300, and 300-2000 µm). Concentrations of the heavy metals Zn, Pb, Fe, Cu, Ni, and Mn, as well as magnetic susceptibility, were determined in bulk soil and in all fractions of 60 soil samples collected from a depth of 0-5 cm. The studied heavy metals, except for Pb and Fe, displayed substantial enrichment in the < 28 µm fraction; these two elements appeared to be independent of the selected size fractions. Magnetic minerals were especially associated with the medium-size fractions (28-75, 75-150, and 150-300 µm). The highest correlations with heavy metals were found for the < 28 µm fraction, followed by the 150-300 µm fraction, both of which are susceptible to wind erosion in an arid environment.

  4. A thermal emission spectral library of rock-forming minerals

    NASA Astrophysics Data System (ADS)

    Christensen, Philip R.; Bandfield, Joshua L.; Hamilton, Victoria E.; Howard, Douglas A.; Lane, Melissa D.; Piatek, Jennifer L.; Ruff, Steven W.; Stefanov, William L.

    2000-04-01

    A library of thermal infrared spectra of silicate, carbonate, sulfate, phosphate, halide, and oxide minerals has been prepared for comparison to spectra obtained from planetary and Earth-orbiting spacecraft, airborne instruments, and laboratory measurements. The emphasis in developing this library has been to obtain pure samples of specific minerals. All samples were hand processed and analyzed for composition and purity. The majority are 710-1000 μm particle size fractions, chosen to minimize particle size effects. Spectral acquisition follows a method described previously, and emissivity is determined to within 2% in most cases. Each mineral spectrum is accompanied by descriptive information in database form including compositional information, sample quality, and a comments field to describe special circumstances and unique conditions. More than 150 samples were selected to include the common rock-forming minerals with an emphasis on igneous and sedimentary minerals. This library is available in digital form and will be expanded as new, well-characterized samples are acquired.

  5. Minimal-assumption inference from population-genomic data

    NASA Astrophysics Data System (ADS)

    Weissman, Daniel; Hallatschek, Oskar

    Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.

  6. Nanoparticle formation of deposited Agn-clusters on free-standing graphene

    NASA Astrophysics Data System (ADS)

    Al-Hada, M.; Peters, S.; Gregoratti, L.; Amati, M.; Sezen, H.; Parisse, P.; Selve, S.; Niermann, T.; Berger, D.; Neeb, M.; Eberhardt, W.

    2017-11-01

    Size-selected Agn-clusters on unsupported graphene of a commercial Quantifoil sample have been investigated by surface and element-specific techniques such as transmission electron microscopy (TEM), spatially-resolved inner-shell X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES). An agglomeration of the highly mobile clusters into nm-sized Ag-nanodots of 2-3 nm is observed. Moreover, crystalline as well as non-periodic fivefold symmetric structures of the Ag-nanoparticles are evident by high-resolution TEM. Using a lognormal size-distribution as revealed by TEM, the measured positive binding energy shift of the air-exposed Ag-nanodots can be explained by the size-dependent dynamical liquid-drop model.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

Under contract with the US Department of Energy (DE-AC22-92PCO0367), Pittsburgh Energy Technology Center, Radian Corporation has conducted a test program to collect and analyze size-fractionated stack gas particulate samples for selected inorganic hazardous air pollutants (HAPS). Specific goals of the program are (1) the collection of one-gram quantities of size-fractionated stack gas particulate matter for bulk (total) and surface chemical characterization, and (2) the determination of the relationship between particle size, bulk and surface (leachable) composition, and unit load. The information obtained from this program identifies the effects of unit load, particle size, and wet FGD system operation on the relative toxicological effects of exposure to particulate emissions.

  8. Persistent directional selection on body size and a resolution to the paradox of stasis.

    PubMed

    Rollinson, Njal; Rowe, Locke

    2015-09-01

    Directional selection on size is common but often fails to result in microevolution in the wild. Similarly, macroevolutionary rates in size are low relative to the observed strength of selection in nature. We show that many estimates of selection on size have been measured on juveniles, not adults. Further, parents influence juvenile size by adjusting investment per offspring. In light of these observations, we help resolve this paradox by suggesting that the observed upward selection on size is balanced by selection against investment per offspring, resulting in little or no net selection gradient on size. We find that trade-offs between fecundity and juvenile size are common, consistent with the notion of selection against investment per offspring. We also find that median directional selection on size is positive for juveniles but no net directional selection exists for adult size. This is expected because parent-offspring conflict exists over size, and juvenile size is more strongly affected by investment per offspring than adult size. These findings provide qualitative support for the hypothesis that upward selection on size is balanced by selection against investment per offspring, where parent-offspring conflict over size is embodied in the opposing signs of the two selection gradients. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  9. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.

  10. Variable criteria sequential stopping rule: Validity and power with repeated measures ANOVA, multiple correlation, MANOVA and relation to Chi-square distribution.

    PubMed

    Fitts, Douglas A

    2017-09-21

The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
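The stopping logic itself is simple to sketch. The example below uses hypothetical (lower, upper) stopping criteria and a large-sample z approximation in place of an exact t/F test; real vcSSR criteria must be taken from the published tables for the chosen power, Type I error rate, and beginning sample size.

```python
import random
from statistics import NormalDist, mean, stdev

def p_value_two_sample(a, b):
    """Two-sided large-sample z approximation; stands in for the exact
    t/F-test p value that the vcSSR would normally use."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = abs(mean(a) - mean(b)) / se
    return 2.0 * (1.0 - NormalDist().cdf(z))

def vc_sequential(xs, ys, n_start=10, n_add=5, n_max=40,
                  lower=0.01, upper=0.36):
    """Sequential stopping sketch: test after each added batch; reject if
    p <= lower, retain H0 if p >= upper or the sample budget is exhausted.
    The (lower, upper) criteria here are PLACEHOLDERS, not values from the
    published vcSSR tables."""
    n = n_start
    while True:
        p = p_value_two_sample(xs[:n], ys[:n])
        if p <= lower:
            return "reject H0", n
        if p >= upper or n >= n_max:
            return "retain H0", n
        n += n_add

random.seed(1)
a = [random.gauss(0.0, 1.0) for _ in range(40)]
b = [random.gauss(1.0, 1.0) for _ in range(40)]   # true effect: 1 SD
print(vc_sequential(a, b))
```

Because data are only added when the p value falls between the two criteria, the expected sample size is smaller than a fixed-n design of equal power, which is the efficiency the rule trades on.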

  11. Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan

    PubMed Central

    Pong, Suet-ling; Johnston, Jamie; Chen, Vivien

    2014-01-01

    Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting–school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US. PMID:24850978

  12. Authoritarian Parenting and Asian Adolescent School Performance: Insights from the US and Taiwan.

    PubMed

    Pong, Suet-Ling; Johnston, Jamie; Chen, Vivien

    2010-01-01

    Our study re-examines the relationship between parenting and school performance among Asian students. We use two sources of data: wave I of the Adolescent Health Longitudinal Survey (Add Health), and waves I and II of the Taiwan Educational Panel Survey (TEPS). Analysis using Add Health reveals that the Asian-American/European-American difference in the parenting-school performance relationship is due largely to differential sample sizes. When we select a random sample of European-American students comparable to the sample size of Asian-American students, authoritarian parenting also shows no effect for European-American students. Furthermore, analysis of TEPS shows that authoritarian parenting is negatively associated with children's school achievement, while authoritative parenting is positively associated. This result for Taiwanese Chinese students is similar to previous results for European-American students in the US.

  13. Computer re-sampling for demographically representative user populations in anthropometry: a case of doorway and clear floor space widths.

    PubMed

    Paquet, Victor; Joseph, Caroline; D'Souza, Clive

    2012-01-01

    Anthropometric studies typically require a large number of individuals that are selected in a manner so that demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.

  14. Stochastic theory of size exclusion chromatography by the characteristic function approach.

    PubMed

    Dondi, Francesco; Cavazzini, Alberto; Remelli, Maurizio; Felinger, Attila; Martin, Michel

    2002-01-18

A general stochastic theory of size exclusion chromatography (SEC) able to account for size dependence of both pore ingress and egress processes, moving zone dispersion, and pore size distribution was developed. The relationship between stochastic-chromatographic and batch equilibrium conditions is discussed and the fundamental role of the 'ergodic' hypothesis in establishing a link between them is emphasized. SEC models are solved by means of the characteristic function method and chromatographic parameters such as plate height, peak skewness and excess are derived. The peak shapes are obtained by numerical inversion of the characteristic function under the most general conditions of the exploited models. Separate size effects on pore ingress and pore egress processes are investigated and their effects on both retention selectivity and efficiency are clearly shown. The peak splitting phenomenon and peak tailing due to incomplete sample sorption near the exclusion limit are discussed. An SEC model for columns with two types of pores is discussed and several effects on retention selectivity and efficiency arising from pore size differences and their relative abundance are singled out. The relevance of moving zone dispersion to separation is investigated. The present approach proves to be general and able to account for more complex SEC conditions such as continuous pore size distributions and mixed retention mechanisms.

  15. Predicting and Tracking Short Term Disease Progression in Amnestic Mild Cognitive Impairment Patients with Prodromal Alzheimer's Disease: Structural Brain Biomarkers.

    PubMed

    Marizzoni, Moira; Ferrari, Clarissa; Jovicich, Jorge; Albani, Diego; Babiloni, Claudio; Cavaliere, Libera; Didic, Mira; Forloni, Gianluigi; Galluzzi, Samantha; Hoffmann, Karl-Titus; Molinuevo, José Luis; Nobili, Flavio; Parnetti, Lucilla; Payoux, Pierre; Ribaldi, Federica; Rossini, Paolo Maria; Schönknecht, Peter; Soricelli, Andrea; Hensch, Tilman; Tsolaki, Magda; Visser, Pieter Jelle; Wiltfang, Jens; Richardson, Jill C; Bordet, Régis; Blin, Olivier; Frisoni, Giovanni B

    2018-06-09

Early Alzheimer's disease (AD) detection using cerebrospinal fluid (CSF) biomarkers has been recommended as an enrichment strategy for trials involving mild cognitive impairment (MCI) patients. The aim was to model a prodromal AD trial in order to identify MRI structural biomarkers that improve subject selection and can serve as surrogate outcomes of disease progression. APOE ɛ4 specific CSF Aβ42/P-tau cut-offs were used to identify MCI with prodromal AD (Aβ42/P-tau positive) in the WP5-PharmaCog (E-ADNI) cohort. Linear mixed models were performed 1) with baseline structural biomarker, time, and biomarker×time interaction as factors to predict longitudinal changes in ADAS-cog13, 2) with Aβ42/P-tau status, time, and Aβ42/P-tau status×time interaction as factors to explain the longitudinal changes in MRI measures, and 3) to compute sample size estimates for a trial implemented with the selected biomarkers. Only baseline lateral ventricle volume was able to identify a subgroup of prodromal AD patients who declined faster (interaction, p = 0.003). Lateral ventricle volume and medial temporal lobe measures were the biomarkers most sensitive to disease progression (interaction, p≤0.042). Enrichment through ventricular volume reduced the sample size that a clinical trial would require by 13% to 76%, depending on the structural outcome variable. The biomarker needing the lowest sample size was the hippocampal subfield GC-ML-DG (granule cells of molecular layer of the dentate gyrus) (n = 82 per arm to demonstrate a 20% atrophy reduction). MRI structural biomarkers can enrich prodromal AD trials with fast progressors and significantly decrease group size in clinical trials of disease modifying drugs.
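The kind of calculation behind an "n per arm" figure like the one above can be sketched with the standard normal-approximation formula for a two-arm comparison of means. The atrophy rate, standard deviation, and effect values below are illustrative assumptions, not the paper's actual inputs.

```python
import math
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a mean difference `effect` with outcome
    SD `sd`, two-sided alpha, via the normal approximation."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sd / effect) ** 2 * (z(1 - alpha / 2) + z(power)) ** 2)

# Illustrative numbers (NOT the paper's): control atrophy 4 %/yr with SD
# 3 %/yr; a drug assumed to slow atrophy by 20% gives a detectable
# difference of 0.8 %/yr.
print(n_per_arm(effect=0.20 * 4.0, sd=3.0))
```

The quadratic dependence on sd/effect is why enrichment with fast progressors (larger effect, for a similar SD) shrinks the required group size so sharply.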

  16. Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit

    NASA Astrophysics Data System (ADS)

    Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.

    2017-12-01

Results of the structural and elemental analysis of beryllium samples obtained via different technologies, using a Microzond-EGP-10 unit with the PIXE and RBS methods, are presented. The overall chemical composition and the nature of inclusions were determined. The mapping method made it possible to reveal the structural features of the beryllium samples: to distinguish grains of the main substance differing in size and chemical composition, to visualize the interfaces between regions of different composition, and to describe the distribution of impurities in the samples.

  17. Electrochemical synthesis of a surface-porous Mg70.5Al29.5 eutectic alloy in a neutral aqueous NaCl solution

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Li, Yong-gang; Wei, Ying-hui; Wei, Huan; Yan, Ze-ying; Hou, Li-feng

    2018-03-01

    A surface-porous Mg-Al eutectic alloy was fabricated at room temperature via electrochemical dealloying in a neutral, aqueous 0.6 M NaCl solution by controlling the applied potential and processing duration. Selective dissolution occurred on the alloy surface. The surface-porous formation mechanism is governed by the selective dissolution of the α-Mg phase, which leaves the Mg17Al12 phase as the porous layer framework. The pore characteristics (morphology, size, and distribution) of the dealloyed samples are inherited from the α-Mg phases of the precursor Mg70.5Al29.5 (at.%) alloy. Size control in the porous layer can be achieved by regulating the synthesis parameters.

  18. Characterization of hydrotreated Mayan and Wilmington vacuum tower bottoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, C.D.; Green, J.B.; Bhan, O.K.

    1989-04-01

Mayan and Wilmington vacuum tower bottoms were hydrotreated at various severity levels in a batch autoclave with and without catalyst. Each of the feeds and the hydrotreated products was separated into acid, base, and neutral (ABN) fractions using a unique non-aqueous ion-exchange technique. The feeds, hydrotreated whole products, and the ABN fractions were characterized by determining their elemental and metal content. Selected samples were analyzed by a size exclusion chromatography/inductively coupled plasma technique to determine the molecular size distribution of various species.

  19. Early lexical characteristics of toddlers with cleft lip and palate.

    PubMed

    Hardin-Jones, Mary; Chapman, Kathy L

    2014-11-01

Objective: To examine development of early expressive lexicons in toddlers with cleft palate to determine whether they differ from those of noncleft toddlers in terms of size and lexical selectivity. Design: Retrospective. Patients: A total of 37 toddlers with cleft palate and 22 noncleft toddlers. Main Outcome Measures: The groups were compared for size of expressive lexicon reported on the MacArthur Communicative Development Inventory and the percentage of words beginning with obstruents and sonorants produced in a language sample. Differences between groups in the percentage of word initial consonants correct on the language sample were also examined. Results: Although expressive vocabulary was comparable at 13 months of age for both groups, size of the lexicon for the cleft group was significantly smaller than that for the noncleft group at 21 and 27 months of age. Toddlers with cleft palate produced significantly more words beginning with sonorants and fewer words beginning with obstruents in their spontaneous speech samples. They were also less accurate when producing word initial obstruents compared with the noncleft group. Conclusions: Toddlers with cleft palate demonstrate a slower rate of lexical development compared with their noncleft peers. The preference that toddlers with cleft palate demonstrate for words beginning with sonorants could suggest they are selecting words that begin with consonants that are easier for them to produce. An alternative explanation might be that because these children are less accurate in the production of obstruent consonants, listeners may not always identify obstruents when they occur.

  20. Anti-Depressants, Suicide, and Drug Regulation

    ERIC Educational Resources Information Center

    Ludwig, Jens; Marcotte, Dave E.

    2005-01-01

    Policymakers are increasingly concerned that a relatively new class of anti-depressant drugs, selective serotonin re-uptake inhibitors (SSRI), may increase the risk of suicide for at least some patients, particularly children. Prior randomized trials are not informative on this question because of small sample sizes and other limitations. Using…

  1. Survey of Employers.

    ERIC Educational Resources Information Center

    European Social Fund, Dublin (Ireland).

    A study examined attitudes of Irish employers toward vocational training (VT) activities, state agencies responsible for administering VT, and the skills that employees would need in the future. Of a sample of 500 firms that were selected as being representative from the standpoints of size, sector, location, and form of ownership, 219 were…

  2. WITHIN AND BETWEEN-PERSON VARIATION IN ENVIRONMENTAL CONCENTRATIONS OF METALS, PAHS AND PESTICIDES MEASURED IN NHEXAS -MD

    EPA Science Inventory

    Results suggest that where information on variance components for a specific chemical in a specific media is not available, a chemical's compound class may provide guidance in selecting sample size and in apportioning resources between numbers of subjects and numbers of repeated ...

  3. Catholic High Schools and Their Finances, 1980.

    ERIC Educational Resources Information Center

    Bredeweg, Frank H.

    The information contained in this report was drawn from data provided by a national sample of 200 Catholic high schools. The schools were selected to reflect types (private, Catholic, diocesan, and parish schools), enrollment sizes, and geographic location. The report addresses these areas. First, information is provided to point out the financial…

  4. SHRM Work & Family Survey Report, 1992.

    ERIC Educational Resources Information Center

    Society for Human Resource Management, Alexandria, VA.

    In March 1992, a random sample of 5,600 human resource professionals was selected from the membership of the Society for Human Resource Management (SHRM) and surveyed regarding family issues in the workplace. Respondents were asked to provide information on the size and other characteristics of their organization and workplace practices, and were…

  5. Laser Surface Modification of H13 Die Steel using Different Laser Spot Sizes

    NASA Astrophysics Data System (ADS)

    Aqida, S. N.; Naher, S.; Brabazon, D.

    2011-05-01

This paper presents a laser surface modification process of AISI H13 tool steel using three sizes of laser spot, with the aim of achieving reduced grain size and surface roughness. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). A metallographic study and image analysis were performed to measure the grain size, and the roughness of the modified surface was measured using a two-dimensional surface profilometer. From the metallographic study, the smallest grain sizes measured on the laser-modified surface were between 0.51 μm and 2.54 μm. The minimum surface roughness, Ra, recorded was 3.0 μm. This surface roughness of the modified die steel is similar to the surface quality of cast products. The correlation between grain size and hardness followed the Hall-Petch relationship. The potential for increased surface hardness represents an important means of sustaining tooling life.

  6. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain-producing countries of the world were estimated. LANDSAT-1 and -2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was analyzed by computer to obtain its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, yielding a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resulting predictors of field size are discussed.
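The log transformation described above can be sketched with the standard library alone. The field areas here are hypothetical and serve only to show how moments of the log-transformed (approximately Gaussian) data summarize a skewed distribution.

```python
import math
import statistics

def lognormal_summary(areas):
    """Log-transform right-skewed field areas; the mean and standard
    deviation of the logs then summarize the (roughly Gaussian)
    transformed distribution, as the abstract describes."""
    logs = [math.log(a) for a in areas]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    geometric_mean = math.exp(mu)                   # typical field size
    lognormal_mean = math.exp(mu + sigma ** 2 / 2)  # implied arithmetic mean
    return geometric_mean, lognormal_mean

# Hypothetical field areas (hectares), skewed towards small values:
gm, lm = lognormal_summary([1, 2, 2, 3, 5, 8, 40])
```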

  7. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
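As a one-dimensional analogue of the smoothing-selection problem above, the sketch below scores candidate bandwidths with least-squares cross-validation (LSCV) and picks the minimizer. The location data and bandwidth grid are hypothetical, and real home-range estimation is two-dimensional.

```python
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def lscv_score(h, data):
    """LSCV(h) = int f_hat^2 - (2/n) * sum_i f_hat_{-i}(x_i) for a
    Gaussian kernel; the integral term uses the closed form in which
    the kernel convolved with itself is a N(0, 2) density."""
    n = len(data)
    term1 = 0.0
    for xi in data:
        for xj in data:
            u = (xi - xj) / h
            term1 += math.exp(-u * u / 4.0) / (2.0 * math.sqrt(math.pi))
    term1 /= n * n * h
    term2 = 0.0
    for i, xi in enumerate(data):
        loo = sum(gauss((xi - xj) / h) for j, xj in enumerate(data) if j != i)
        term2 += loo / ((n - 1) * h)
    term2 *= 2.0 / n
    return term1 - term2

def select_bandwidth(data, grid):
    """Return the candidate bandwidth minimizing the LSCV score."""
    return min(grid, key=lambda h: lscv_score(h, data))

# Hypothetical 1-D animal locations and candidate bandwidths:
locations = [1.0, 1.1, 1.3, 1.6, 2.0, 2.5, 3.1, 3.8]
h_star = select_bandwidth(locations, [0.1, 0.2, 0.4, 0.8])
```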

  8. An indirect estimation of the population size of students with high-risk behaviors in select universities of medical sciences: A network scale-up study.

    PubMed

    Sajjadi, Homeira; Jorjoran Shushtari, Zahra; Shati, Mohsen; Salimi, Yahya; Dejman, Masoomeh; Vameghi, Meroe; Karimi, Salahedin; Mahmoodi, Zohreh

    2018-01-01

    Network scale-up is one of the most important indirect methods of estimating the size of clandestine populations and people with high-risk behaviors. The present study is an indirect estimation of the population size of students with high-risk behaviors in select universities of medical sciences. A total of 801 students from two universities of medical sciences, Tehran University of Medical Sciences and Alborz University of Medical Sciences, were selected through convenience sampling. Six subgroups of high-risk behaviors were examined in the study, including tramadol use, cannabis use, opium use, alcohol consumption, extramarital heterosexual intercourse, and heterosexual intercourse in return for money. To estimate the social network size in the study population, each participant was asked to name their close student friends from the two selected universities. Data were collected using a checklist designed for this purpose. The participants' mean number of close friends from the selected medical universities was C = 8.14 (CI: 7.54-8.75). Within these social networks, friends with extramarital heterosexual intercourse (5.53%) and friends who consumed alcohol (4.92%) had the highest frequency, and friends who used opium (0.33%) had the lowest frequency. The variables of age, gender, marital status, type of residence, and academic degree were significantly related to the likelihood of having close friends with certain high-risk behaviors (P<0.001). According to the results obtained, alcohol consumption and extramarital heterosexual intercourse are very common among students. Special HIV prevention programs are therefore necessary for this age group.
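The basic network scale-up estimator underlying this kind of study can be sketched as follows. The mean network size of 8.14 is taken from the abstract, but the student-body size and alcohol-use share are hypothetical, and the sketch ignores the barrier and transmission biases that real applications must correct for.

```python
def nsum_estimate(mentions, total_network_size, population_size):
    """Network scale-up: assume the hidden group's share of respondents'
    social networks mirrors its share of the whole population, so
    N_hidden ~= (m / c) * N."""
    return mentions / total_network_size * population_size

total_alters = 801 * 8.14          # respondents x mean network size
mentions = 0.0492 * total_alters   # 4.92% of alters reportedly drink
estimate = nsum_estimate(mentions, total_alters, 20000)  # hypothetical N
```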

  9. A Bayesian nonparametric method for prediction in EST analysis

    PubMed Central

    Lijoi, Antonio; Mena, Ramsés H; Prünster, Igor

    2007-01-01

    Background Expressed sequence tags (ESTs) analyses are a fundamental tool for gene identification in organisms. Given a preliminary EST sample from a certain library, several statistical prediction problems arise. In particular, it is of interest to estimate how many new genes can be detected in a future EST sample of given size and also to determine the gene discovery rate: these estimates represent the basis for deciding whether to proceed sequencing the library and, in case of a positive decision, a guideline for selecting the size of the new sample. Such information is also useful for establishing sequencing efficiency in experimental design and for measuring the degree of redundancy of an EST library. Results In this work we propose a Bayesian nonparametric approach for tackling statistical problems related to EST surveys. In particular, we provide estimates for: a) the coverage, defined as the proportion of unique genes in the library represented in the given sample of reads; b) the number of new unique genes to be observed in a future sample; c) the discovery rate of new genes as a function of the future sample size. The Bayesian nonparametric model we adopt conveys, in a statistically rigorous way, the available information into prediction. Our proposal has appealing properties over frequentist nonparametric methods, which become unstable when prediction is required for large future samples. EST libraries, previously studied with frequentist methods, are analyzed in detail. Conclusion The Bayesian nonparametric approach we undertake yields valuable tools for gene capture and prediction in EST libraries. The estimators we obtain do not feature the kind of drawbacks associated with frequentist estimators and are reliable for any size of the additional sample. PMID:17868445
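For context, the classical frequentist counterpart of the coverage estimate in (a) is the Good-Turing estimator, which the Bayesian nonparametric approach is designed to improve on for large extrapolations. A minimal sketch with hypothetical gene labels:

```python
from collections import Counter

def good_turing_coverage(gene_labels):
    """Good-Turing sample coverage: 1 - (genes seen exactly once) / (reads).
    A frequentist baseline for quantity (a) in the abstract; it becomes
    unstable exactly where the Bayesian estimators remain reliable."""
    counts = Counter(gene_labels)
    singletons = sum(1 for c in counts.values() if c == 1)
    return 1.0 - singletons / len(gene_labels)

# Hypothetical EST reads labelled by the gene they came from:
coverage = good_turing_coverage(["g1", "g1", "g1", "g2", "g2", "g3", "g4", "g5"])
```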

  10. Accounting for Incomplete Species Detection in Fish Community Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta

    2013-01-01

    Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detection. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).

  11. Mechanisms of Laser-Induced Dissection and Transport of Histologic Specimens

    PubMed Central

    Vogel, Alfred; Lorenz, Kathrin; Horneffer, Verena; Hüttmann, Gereon; von Smolinski, Dorthe; Gebert, Andreas

    2007-01-01

    Rapid contact- and contamination-free procurement of histologic material for proteomic and genomic analysis can be achieved by laser microdissection of the sample of interest followed by laser-induced transport (laser pressure catapulting). The dynamics of laser microdissection and laser pressure catapulting of histologic samples of 80 μm diameter was investigated by means of time-resolved photography. The working mechanism of microdissection was found to be plasma-mediated ablation initiated by linear absorption. Catapulting was driven by plasma formation when tightly focused pulses were used, and by photothermal ablation at the bottom of the sample when defocused pulses producing laser spot diameters larger than 35 μm were used. With focused pulses, driving pressures of several hundred MPa accelerated the specimen to initial velocities of 100–300 m/s before they were rapidly slowed down by air friction. When the laser spot was increased to a size comparable to or larger than the sample diameter, both driving pressure and flight velocity decreased considerably. Based on a characterization of the thermal and optical properties of the histologic specimens and supporting materials used, we calculated the evolution of the heat distribution in the sample. Selected catapulted samples were examined by scanning electron microscopy or analyzed by real-time reverse-transcriptase polymerase chain reaction. We found that catapulting of dissected samples results in little collateral damage when the laser pulses are either tightly focused or when the laser spot size is comparable to the specimen size. By contrast, moderate defocusing with spot sizes up to one-third of the specimen diameter may involve significant heat and ultraviolet exposure. Potential side effects are maximal when samples are catapulted directly from a glass slide without a supporting polymer foil. PMID:17766336

  12. [Effects of soil trituration size on adsorption of oxytetracycline on soils].

    PubMed

    Qi, Rui-Huan; Li, Zhao-Jun; Long, Jian; Fan, Fei-Fei; Liang, Yong-Chao

    2011-02-01

    In order to understand the effects of soil trituration size on the adsorption of oxytetracycline (OTC) on soils, two contrasting soils, a moisture soil and a purplish soil, were selected, and adsorption of OTC on these soils was investigated at trituration sizes of no more than 0.20 mm, 0.84 mm, 0.25 mm and 0.15 mm, using batch equilibrium experiments. The results were as follows: (1) The amount of OTC adsorbed on the moisture soil and the purplish soil increased with sampling time and reached equilibrium at 24 h. First-order, second-order, parabolic-diffusion, Elovich, and two-constant kinetic models could be used to fit the changes in adsorption with sampling time. Adsorption of OTC on the two soils consisted of two processes: quick adsorption and slow adsorption. The quick adsorption process occurred during the period of 0-0.5 h. The adsorption rates of OTC on soils were higher at the small trituration sizes than at the large trituration sizes, and at the same trituration size the k(f) of the purplish soil was about two times higher than that of the moisture soil. (2) Adsorption isotherms of OTC on the two soils with different trituration sizes deviated from the linear model. The data were fitted well by the Freundlich and Langmuir models, with correlation coefficients between 0.956 and 0.999. The values of k(f) and q(m) for the purplish soil were higher than those for the moisture soil. For a given soil, the amount of OTC adsorbed increased as soil trituration size decreased. The results suggest that it is important to select an appropriate trituration size, based on physical and chemical properties such as soil particle composition, when investigating the fate of antibiotics in soils.
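The Freundlich fitting step mentioned above can be sketched as a least-squares fit on the linearized form log q = log k_f + (1/n) log C. The concentration data below are synthetic, generated from a known isotherm so the fit can be checked, and are unrelated to the paper's measurements.

```python
import math

def fit_freundlich(conc, sorbed):
    """Ordinary least squares on the linearized Freundlich isotherm
    log q = log k_f + (1/n) log C. Returns (k_f, 1/n)."""
    xs = [math.log(c) for c in conc]
    ys = [math.log(q) for q in sorbed]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    kf = math.exp(my - slope * mx)
    return kf, slope

# Synthetic data from q = 2 * C^0.5; the fit should recover both constants:
conc = [1.0, 4.0, 9.0, 16.0]
sorbed = [2.0 * c ** 0.5 for c in conc]
kf, inv_n = fit_freundlich(conc, sorbed)
```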

  13. Detecting negative selection on recurrent mutations using gene genealogy

    PubMed Central

    2013-01-01

    Background Whether or not a mutant allele in a population is under selection is an important issue in population genetics, and various neutrality tests have been invented so far to detect selection. However, detection of negative selection has been notoriously difficult, partly because negatively selected alleles are usually rare in the population and have little impact on either population dynamics or the shape of the gene genealogy. Recently, through studies of genetic disorders and genome-wide analyses, many structural variations were shown to occur recurrently in the population. Such “recurrent mutations” might be revealed as deleterious by exploiting the signal of negative selection in the gene genealogy enhanced by their recurrence. Results Motivated by the above idea, we devised two new test statistics. One is the total number of mutants at a recurrently mutating locus among sampled sequences, which is tested conditionally on the number of forward mutations mapped on the sequence genealogy. The other is the size of the most common class of identical-by-descent mutants in the sample, again tested conditionally on the number of forward mutations mapped on the sequence genealogy. To examine the performance of these two tests, we simulated recurrently mutated loci each flanked by sites with neutral single nucleotide polymorphisms (SNPs), with no recombination. Using neutral recurrent mutations as null models, we attempted to detect deleterious recurrent mutations. Our analyses demonstrated high powers of our new tests under constant population size, as well as their moderate power to detect selection in expanding populations. We also devised a new maximum parsimony algorithm that, given the states of the sampled sequences at a recurrently mutating locus and an incompletely resolved genealogy, enumerates mutation histories with a minimum number of mutations while partially resolving genealogical relationships when necessary. 
    Conclusions With their considerably high power to detect negative selection, our new neutrality tests may open new avenues for dealing with the population genetics of recurrent mutations, as well as help identify some types of genetic disorders that may have escaped identification by currently existing methods. PMID:23651527

  14. Methods of analysis by the U. S. Geological Survey National Water Quality Laboratory - determination of organonitrogen herbicides in water by solid-phase extraction and capillary-column gas chromatography/mass spectrometry with selected-ion monitoring

    USGS Publications Warehouse

    Sandstrom, Mark W.; Wydoski, Duane S.; Schroeder, Michael P.; Zamboni, Jana L.; Foreman, William T.

    1992-01-01

    A method for the isolation of organonitrogen herbicides from natural water samples using solid-phase extraction and analysis by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase extraction cartridges containing octadecyl-bonded porous silica to remove the herbicides. The cartridges are dried using carbon dioxide, and adsorbed herbicides are removed from the cartridges by elution with 1.8 milliliters of hexane-isopropanol (3:1). Extracts of the eluants are analyzed by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of at least three characteristic ions. The method detection limits are dependent on sample matrix and each particular herbicide. The method detection limits, based on a 100-milliliter sample size, range from 0.02 to 0.25 microgram per liter. Recoveries averaged 80 to 115 percent for the 23 herbicides and 2 metabolites in 1 reagent-water and 2 natural-water samples fortified at levels of 0.2 and 2.0 micrograms per liter.

  15. The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release

    NASA Astrophysics Data System (ADS)

    Carter, John; OMEGA and CRISM Teams

    2016-10-01

    Aqueous minerals have been identified from orbit at a number of localities, and their analysis has helped refine the water story of early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks, such as limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and a lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address these limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data are provided, including deposit sizes, morphology, and detailed composition when available. Sampling biases are also addressed. The catalog will be openly distributed in GIS-ready format and will be participative: for example, researchers will be able to submit requests for specific mapping of regions of interest, or add/refine mineral detections. An initial release is scheduled for Fall 2016 and will feature a two-orders-of-magnitude increase in sample size compared to previous studies.

  16. Variation in aluminum, iron, and particle concentrations in oxic ground-water samples collected by use of tangential-flow ultrafiltration with low-flow sampling

    USGS Publications Warehouse

    Szabo, Z.; Oden, J.H.; Gibs, J.; Rice, D.E.; Ding, Y.; ,

    2001-01-01

    Particulates that move with ground water and those that are artificially mobilized during well purging could be incorporated into water samples during collection and could cause trace-element concentrations to vary in unfiltered samples, and possibly in filtered samples (typically 0.45-um (micron) pore size) as well, depending on the particle-size fractions present. Therefore, measured concentrations may not be representative of those in the aquifer. Ground water may contain particles of various sizes and shapes that are broadly classified as colloids, which do not settle from water, and particulates, which do. In order to investigate variations in trace-element concentrations in ground-water samples as a function of particle concentrations and particle-size fractions, the U.S. Geological Survey, in cooperation with the U.S. Air Force, collected samples from five wells completed in the unconfined, oxic Kirkwood-Cohansey aquifer system of the New Jersey Coastal Plain. Samples were collected by purging with a portable pump at low flow (0.2-0.5 liters per minute and minimal drawdown, ideally less than 0.5 foot). Unfiltered samples were collected in the following sequence: (1) within the first few minutes of pumping, (2) after initial turbidity declined and about one to two casing volumes of water had been purged, and (3) after turbidity values had stabilized at less than 1 to 5 Nephelometric Turbidity Units. Filtered samples were split concurrently through (1) a 0.45-um pore size capsule filter, (2) a 0.45-um pore size capsule filter and a 0.0029-um pore size tangential-flow filter in sequence, and (3), in selected cases, a 0.45-um and a 0.05-um pore size capsule filter in sequence. Filtered samples were collected concurrently with the unfiltered sample that was collected when turbidity values stabilized. Quality-assurance samples consisted of sequential duplicates (about 25 percent) and equipment blanks. 
    Concentrations of particles were determined by light scattering. Variations in concentrations of aluminum and iron (1-74 and 1-199 μg/L (micrograms per liter), respectively), common indicators of the presence of particulate-borne trace elements, were greatest in sample sets from individual wells with the greatest variations in turbidity and particle concentration. Differences in trace-element concentrations in sequentially collected unfiltered samples with variable turbidity were 5 to 10 times as great as those in concurrently collected samples that were passed through various filters. These results indicate that turbidity must be both reduced and stabilized, even when low-flow sample-collection techniques are used, in order to obtain water samples that do not contain considerable particulate artifacts. Currently (2001) available techniques need to be refined to ensure that the measured trace-element concentrations are representative of those that are mobile in the aquifer water.

  17. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for such sample design decisions and are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, number of noise events, time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
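A simple planning quantity consistent with the multi-stage designs discussed above is the design effect, which converts a clustered sample size into an effective simple-random-sample size. This is a textbook two-stage approximation, not the paper's full random-coefficients model, and the numbers are hypothetical.

```python
def design_effect(cluster_size, icc):
    """Variance inflation from interviewing `cluster_size` residents per
    compact study area when responses within an area share intraclass
    correlation `icc`: deff = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total, cluster_size, icc):
    """Equivalent simple-random-sample size for a clustered design."""
    return n_total / design_effect(cluster_size, icc)

# Hypothetical survey: 2000 interviews, 20 per study area, ICC = 0.05.
deff = design_effect(20, 0.05)
n_eff = effective_sample_size(2000, 20, 0.05)
```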

  18. Measuring size evolution of distant, faint galaxies in the radio regime

    NASA Astrophysics Data System (ADS)

    Lindroos, L.; Knudsen, K. K.; Stanley, F.; Muxlow, T. W. B.; Beswick, R. J.; Conway, J.; Radcliffe, J. F.; Wrigley, N.

    2018-05-01

    We measure the evolution of sizes for star-forming galaxies as seen in 1.4 GHz continuum radio for z = 0-3. The measurements are based on combined VLA+MERLIN data of the Hubble Deep Field, using a uv-stacking algorithm combined with model fitting to estimate the average sizes of galaxies. A sample of ~1000 star-forming galaxies is selected from optical and near-infrared catalogues, with stellar masses M* ≈ 10^10-10^11 M⊙ and photometric redshifts 0-3. The median sizes are parametrized for stellar mass M* = 5 × 10^10 M⊙ as R_e = A × (H(z)/H(1.5))^(α_z). We find that the median radio sizes evolve towards larger sizes at later times, with α_z = -1.1 ± 0.6, and A (the median size at z ≈ 1.5) is found to be 0.26″ ± 0.07″, or 2.3 ± 0.6 kpc. The measured radio sizes are typically a factor of 2 smaller than those measured in the optical, and are also smaller than typical Hα sizes in the literature. This indicates that star formation, as traced by the radio continuum, is typically concentrated towards the centres of galaxies over the sampled redshift range. Furthermore, the discrepancy between sizes measured with different tracers of star formation indicates the need for models of size evolution to adopt a multiwavelength approach to measuring the sizes of star-forming regions.
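The paper's parametrization can be evaluated directly once a cosmology is fixed. The flat ΛCDM parameters below (H0 = 70 km/s/Mpc, Ωm = 0.3) are assumptions, since the abstract does not state them; A = 2.3 kpc and α_z = -1.1 are the abstract's fitted values.

```python
import math

def hubble(z, h0=70.0, om=0.3):
    """H(z) for an assumed flat LambdaCDM cosmology (h0 in km/s/Mpc)."""
    return h0 * math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def median_radio_size_kpc(z, a=2.3, alpha=-1.1):
    """Median effective radius from the paper's parametrization
    R_e = A * (H(z)/H(1.5))^alpha, with A and alpha from the abstract."""
    return a * (hubble(z) / hubble(1.5)) ** alpha

r_early = median_radio_size_kpc(2.5)  # smaller median sizes at earlier times
r_late = median_radio_size_kpc(0.5)   # larger median sizes at later times
```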

  19. Early-type galaxies: mass-size relation at z ˜ 1.3 for different environments

    NASA Astrophysics Data System (ADS)

    Raichoor, A.; Mei, S.; Stanford, S. A.; Holden, B. P.; Nakata, F.; Rosati, P.; Shankar, F.; Tanaka, M.; Ford, H.; Huertas-Company, M.; Illingworth, G.; Kodama, T.; Postman, M.; Rettura, A.; Blakeslee, J. P.; Demarco, R.; Jee, M. J.; White, R. L.

    2011-12-01

    We combine multi-wavelength data of the Lynx superstructure and GOODS/CDF-S to build a sample of 75 visually selected early-type galaxies (ETGs), spanning different environments (cluster/group/field) at z ˜ 1.3. By estimating their mass, age (from SED fitting, with careful attention to the stellar population model used) and size, we are able to probe the dependence of the mass-size relation on environment. We find that, for ETGs with 10^{10} < M / M_⊙ < 10^{11.5}, (1) the mass-size relation in the field did not evolve overall from z ˜ 1.3 to the present; (2) the mass-size relation in cluster/group environments at z ˜ 1.3 lies at smaller sizes than the local mass-size relation (R_{e,z ˜ 1.3}/R_{e,z = 0} ˜ 0.6-0.8).

  20. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
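The bootstrap construction of a pseudo-predator signature can be sketched as below. The prey libraries, diet proportions, and three-fatty-acid signatures are hypothetical, and the sketch omits the calibration coefficients and the objective bootstrap-size selection that the paper actually develops.

```python
import random

def pseudo_predator_signature(prey_sigs, diet, boot_n, rng):
    """Construct one pseudo-predator fatty acid signature: bootstrap
    `boot_n` signatures from each prey type's library, average them,
    and mix the prey means by the known diet proportions."""
    n_fa = len(next(iter(prey_sigs.values()))[0])
    mixed = [0.0] * n_fa
    for prey, proportion in diet.items():
        sample = [rng.choice(prey_sigs[prey]) for _ in range(boot_n)]
        means = [sum(sig[i] for sig in sample) / boot_n for i in range(n_fa)]
        for i in range(n_fa):
            mixed[i] += proportion * means[i]
    return mixed

rng = random.Random(42)
prey_sigs = {  # hypothetical 3-fatty-acid signatures (proportions sum to 1)
    "seal": [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2]],
    "fish": [[0.1, 0.4, 0.5], [0.2, 0.4, 0.4]],
}
sig = pseudo_predator_signature(prey_sigs, {"seal": 0.7, "fish": 0.3},
                                boot_n=50, rng=rng)
```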

  1. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample-collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four of the six source areas was silt and clay particles less than 32 μm in size. Distributions of particles ranging from 500 μm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample-collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  2. The influence of maximum running speed on eye size: a test of Leuckart's Law in mammals.

    PubMed

    Heard-Booth, Amber N; Kirk, E Christopher

    2012-06-01

    Vertebrate eye size is influenced by many factors, including body or head size, diet, and activity pattern. Locomotor speed has also been suggested to influence eye size in a relationship known as Leuckart's Law. Leuckart's Law proposes that animals capable of achieving fast locomotor speeds require large eyes to enhance visual acuity and avoid collisions with environmental obstacles. The selective influence of rapid flight has been invoked to explain the relatively large eyes of birds, but Leuckart's Law remains untested in nonavian vertebrates. This study investigates the relationship between eye size and maximum running speed in a diverse sample of mammals. Measures of axial eye diameter, maximum running speed, and body mass were collected from the published literature for 50 species from 10 mammalian orders. This analysis reveals that absolute eye size is significantly positively correlated with maximum running speed in mammals. Moreover, the relationship between eye size and running speed remains significant when the potentially confounding effects of body mass and phylogeny are statistically controlled. The results of this analysis are therefore consistent with the expectations of Leuckart's Law and demonstrate that faster-moving mammals have larger eyes than their slower-moving close relatives. Accordingly, we conclude that maximum running speed is one of several key selective factors that have influenced the evolution of eye size in mammals. Copyright © 2012 Wiley Periodicals, Inc.

  3. Marine sources of ice nucleating particles: results from phytoplankton cultures and samples collected at sea

    NASA Astrophysics Data System (ADS)

    Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.

    2016-12-01

    The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. 
Results will be presented and atmospheric implications discussed.

  4. The effects of sample size on population genomic analyses--implications for the tests of neutrality.

    PubMed

    Subramanian, Sankar

    2016-02-20

    One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population size. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating this bias was not well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much stronger for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5-fold for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2-fold for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4-fold higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
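    The estimator discussed above can be sketched in a few lines; the segregating-site counts below are hypothetical and only illustrate how θ_W is normalized by a harmonic number that grows slowly with sample size:

```python
def watterson_theta(num_segregating: int, num_sequences: int) -> float:
    """Watterson's estimator: theta_W = S / a_n, where a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, num_sequences))
    return num_segregating / a_n

# Hypothetical counts: because a_n grows only logarithmically with n, the extra
# rare segregating sites contributed by a larger sample push theta_W upward.
theta_small = watterson_theta(100, 16)   # S = 100 sites among 16 sequences
theta_large = watterson_theta(250, 512)  # S = 250 sites among 512 sequences
```

    Holding S fixed while increasing n would shrink θ_W slightly; the upward drift reported in the study comes from S growing faster than a_n in larger samples, especially under purifying selection.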

  5. Mineralogy and grain size of surficial sediment from the Big Lost River drainage and vicinity, with chemical and physical characteristics of geologic materials from selected sites at the Idaho National Engineering Laboratory, Idaho

    USGS Publications Warehouse

    Bartholomay, R.C.; Knobel, L.L.; Davis, L.C.

    1989-01-01

    The U.S. Geological Survey's Idaho National Engineering Laboratory project office, in cooperation with the U.S. Department of Energy, collected 35 samples of surficial sediments from the Big Lost River drainage and vicinity from July 1987 through August 1988 for analysis of grain-size distribution, bulk mineralogy, and clay mineralogy. Samples were collected from 11 sites in the channel and 5 sites in overbank deposits of the Big Lost River, 6 sites in the spreading areas that receive excess flow from the Big Lost River during peak flow conditions, 7 sites in the natural sinks and playas of the Big Lost River, 1 site in the Little Lost River Sink, and 5 sites in other small, isolated closed basins. Eleven samples from the Big Lost River channel deposits had a mean of 1.9 and a median of 0.8 weight percent in the less than 0.062 mm fraction. The other 24 samples had a mean of 63.3 and a median of 63.7 weight percent for the same size fraction. The mineralogy data are consistent with the grain-size data. The Big Lost River channel deposits had mean and median abundances of total clays and detrital mica of 10% and 10%, respectively, whereas the remaining 24 samples had mean and median values of 24% and 22.5%, respectively. (USGS)

  6. Structure and mechanical properties of parts obtained by selective laser melting of metal powder based on intermetallic compounds Ni3Al

    NASA Astrophysics Data System (ADS)

    Smelov, V. G.; Sotov, A. V.; Agapovichev, A. V.; Nosova, E. A.

    2018-03-01

    The structure and mechanical properties of samples obtained by selective laser melting of a metal powder based on the intermetallic compound Ni3Al were studied. Chemical analysis of the raw material and static tensile tests of specimens were performed. Changes in the samples' structure and mechanical properties after homogenization for 4 and 24 hours were investigated. A small-sized combustion chamber of a gas turbine engine was produced by the selective laser melting method. The printed combustion chamber was subjected to gas-dynamic testing over a defined temperature and time range.

  7. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
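    For context, standard temperature REX (which the method above modifies by coupling only the essential subspace to elevated temperature) accepts a swap between neighboring replicas i and j with the usual Metropolis criterion; this is textbook background rather than the paper's exact expression:

```latex
p_{\mathrm{acc}}(i \leftrightarrow j)
  = \min\!\left\{\,1,\; \exp\!\left[(\beta_i - \beta_j)\,(E_i - E_j)\right]\right\},
\qquad \beta = \frac{1}{k_B T}.
```

    Because the acceptance probability decays with the energy gap between replicas, heating only a small essential subspace reduces that gap and thus permits the much larger temperature spacings the abstract describes.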

  8. Responses to two-way selection on growth in mass-spawned F1 progeny of Argopecten irradians concentricus (Say)

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Liu, Jin; Li, Yanhong; Zhu, Xiaowen; Liu, Zhigang

    2014-03-01

    In the present study, the effect of one-generation divergent selection on the growth and survival of the bay scallop (Argopecten irradians concentricus) was examined to evaluate the efficacy of a selection program currently being carried out in Beibu Bay in the South China Sea. A total of 146 adult scallops were randomly selected from the same cultured population of A. i. concentricus and divided into two groups by shell length (anterior-posterior measurement): large (4.91-6.02 cm, n=74) and small (3.31-4.18 cm, n=72). At the same time, a control group was also randomly sampled (4.21-4.88 cm, n=80). Mass-spawned F1 progenies from the three size groups were obtained and reared under identical conditions at all growth phases. The effects of two-way (upward-downward) selection on fertilization rate, hatching rate, survival rate, and daily growth in shell length and body weight were assessed in the three size groups. Significant differences (P<0.01) were found in hatching rate, survival rate, and daily growth of F1 progenies, but not in fertilization rate (P>0.05), among the three groups. The hatching rate, survival rate, and daily growth of the progeny of large-sized parents were greater than those of the control group (P<0.05), which in turn were greater than those of the small-sized group (P<0.05). Responses to selection in shell length and body weight were 0.32 ± 0.04 cm and 2.18 ± 0.05 g for the upward selection, and -0.14 ± 0.03 cm and -2.77 ± 0.06 g for the downward selection. The realized heritability estimates for shell length and body weight were 0.38 ± 0.06 and 0.22 ± 0.07 for the upward selection, and 0.24 ± 0.06 and 0.37 ± 0.09 for the downward selection, respectively. The change in growth under bidirectional selection suggests that high genetic variation may be present in the cultured bay scallop population in China.

  9. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In addition, it is consistent as the sample size increases to infinity, which makes maximum likelihood estimation an asymptotically unbiased estimator. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance among competing statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
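    The two-component fitting step can be sketched with a plain EM iteration for a one-dimensional Gaussian mixture; this is a generic illustration on synthetic data, not the authors' model of rubber prices and exchange rates:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_component(data, iters=200):
    """EM for a 1-D two-component Gaussian mixture; returns (w, mu1, mu2, s1, s2)."""
    w = 0.5
    mu1, mu2 = min(data), max(data)          # crude but adequate initialization
    s1 = s2 = (max(data) - min(data)) / 4
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1 - w) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and standard deviations
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1), 1e-6)
        s2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2), 1e-6)
    return w, mu1, mu2, s1, s2

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)] +
        [random.gauss(8.0, 1.0) for _ in range(300)])
w, mu1, mu2, s1, s2 = em_two_component(data)
```

    With two well-separated components the E-step responsibilities quickly become near-binary and the estimates stabilize; each EM iteration is guaranteed not to decrease the mixture log-likelihood.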

  10. The K-selected Butcher-Oemler Effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanford, S A; De Propris, R; Dickinson, M

    2004-03-02

    We investigate the Butcher-Oemler effect using samples of galaxies brighter than observed-frame K* + 1.5 in 33 clusters at 0.1 ≲ z ≲ 0.9. We attempt to duplicate as closely as possible the methodology of Butcher & Oemler. Apart from selecting in the K-band, the most important difference is that we use a brightness limit fixed at 1.5 magnitudes below an observed-frame K* rather than the nominal limit of rest-frame M(V) = -20 used by Butcher & Oemler. For an early-type galaxy at z = 0.1 our sample cutoff is 0.2 magnitudes brighter than rest-frame M(V) = -20, while at z = 0.9 our cutoff is 0.9 magnitudes brighter. If the blue galaxies tend to be faint, then the difference in magnitude limits should result in our measuring lower blue fractions. A more minor difference from the Butcher & Oemler methodology is that the area covered by our galaxy samples has a radius of 0.5 or 0.7 Mpc at all redshifts rather than R_30, the radius containing 30% of the cluster population. In practice our field sizes are generally similar to those used by Butcher & Oemler. We find the fraction of blue galaxies in our K-selected samples to be lower on average than that derived from several optically selected samples, and that it shows little trend with redshift. However, at the redshifts z < 0.6 where our sample overlaps with that of Butcher & Oemler, the difference in f_B as determined from our K-selected samples and those of Butcher & Oemler is much reduced. The large scatter in the measured f_B, even in small redshift ranges, indicates that determining f_B for a much larger sample of clusters from K-selected galaxy samples is important. As a test of our methods, our data allow us to construct optically selected samples down to rest-frame M(V) = -20, as used by Butcher & Oemler, for four clusters that are common between our sample and that of Butcher & Oemler.
    For these rest-V selected samples, we find similar fractions of blue galaxies to Butcher & Oemler, while the K-selected samples for the same four clusters yield blue fractions which are typically half as large. This comparison indicates that selecting in the K-band is the primary difference between our study and previous optically based studies of the Butcher-Oemler effect. Selecting in the observed K-band is more nearly a process of selecting galaxies by their mass than is the case for optically selected samples. Our results suggest that the Butcher-Oemler effect is at least partly due to low-mass galaxies whose optical luminosities are boosted. These lower-mass galaxies could evolve into the rich dwarf population observed in nearby clusters.

  11. Choosing the Allometric Exponent in Covariate Model Building.

    PubMed

    Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B

    2018-04-27

    Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select its exponent value. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) was assessed using stochastic simulation and estimation. A priori exponent values used for the simulations were 0.67, 0.75, and 1. For normal- to high-BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67, in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to estimate the exponent appropriately. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Study designs with a limited sample size and narrow range of WT (e.g. < 100 adult participants) risk either selection of a false value or a biased estimate of the allometric exponent; however, such bias is only relevant when extrapolating CL outside the studied population, e.g. an analysis of adults used to extrapolate to children.
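    The covariate model in question is the power relation CL_i = CL_typ · (WT_i / 70)^b. A minimal sketch, with a hypothetical drug whose typical clearance is 10 L/h at the 70 kg reference, shows how close the predictions of the three candidate exponents are over an adult weight range:

```python
def allometric_cl(cl_typical: float, weight_kg: float, exponent: float,
                  ref_weight_kg: float = 70.0) -> float:
    """Allometric covariate model: CL_i = CL_typ * (WT_i / WT_ref) ** b."""
    return cl_typical * (weight_kg / ref_weight_kg) ** exponent

# Hypothetical drug: typical clearance 10 L/h at the 70 kg reference weight.
for b in (0.67, 0.75, 1.0):
    print(b, round(allometric_cl(10.0, 35.0, b), 2))
```

    Even at half the reference weight the three exponents predict clearances of only roughly 6.3, 5.9, and 5.0 L/h, which is why discriminating among them statistically demands large samples and a wide WT range, exactly as the study reports.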

  12. Variable number of tandem repeat polymorphisms of DRD4: re-evaluation of selection hypothesis and analysis of association with schizophrenia

    PubMed Central

    Hattori, Eiji; Nakajima, Mizuho; Yamada, Kazuo; Iwayama, Yoshimi; Toyota, Tomoko; Saitou, Naruya; Yoshikawa, Takeo

    2009-01-01

    Associations have been reported between the variable number of tandem repeat (VNTR) polymorphism in exon 3 of the dopamine D4 receptor gene (DRD4) and multiple psychiatric illnesses/traits. We examined the distribution of VNTR alleles of different lengths in a Japanese cohort and found that, as reported earlier, the 7-repeat (7R) allele was much rarer in Japanese (0.5%) than in Caucasian populations (∼20%). This presents a challenge to an earlier proposed hypothesis that positive selection favoring the 7R allele has contributed to its high frequency. To further address the issue of selection, we sequenced the VNTR region not only in human but also in chimpanzee samples, and inferred the ancestral repeat motif and haplotype using a phylogenetic analysis program. The most common 4R variant was considered to be the ancestral haplotype, as earlier proposed. However, in a gene tree of the VNTR constructed on the basis of this inferred ancestral haplotype, the 7R allele had five descendant haplotypes in a relatively long lineage, where genetic drift can have a major influence. We also tested this length polymorphism for association with schizophrenia in two Japanese sample sets (one with 570 cases and 570 controls, the other with 124 pedigrees). No evidence of association between the 7R allele and schizophrenia was found in either data set. Collectively, this study suggests that the VNTR variation does not have an effect large enough to cause either selection or a detectable association with schizophrenia in a study of samples of moderate size. PMID:19092778

  13. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
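    The beta-binomial class discussed above has a closed-form marginal likelihood: if the capture probability p varies across individuals as Beta(a, b), the number of captures k in T occasions has P(k) = C(T, k) B(k+a, T-k+b) / B(a, b). A minimal sketch with illustrative (not paper-specific) parameter values:

```python
from math import comb, exp, lgamma

def log_beta(a: float, b: float) -> float:
    """log B(a, b) via log-gamma, to avoid overflow for large arguments."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k: int, trials: int, a: float, b: float) -> float:
    """P(k captures in `trials` occasions) when capture probability p ~ Beta(a, b)."""
    return comb(trials, k) * exp(log_beta(k + a, trials - k + b) - log_beta(a, b))

# Illustrative heterogeneity: a=1, b=3 gives mean capture probability 0.25
# with substantial spread across individuals.
probs = [beta_binomial_pmf(k, 5, 1.0, 3.0) for k in range(6)]
```

    Summing the pmf over k = 0..T returns 1, and a = b = 1 recovers the uniform-p case; both identities make convenient sanity checks when embedding this kernel in a likelihood for the unknown population size.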

  14. The ovenbird (Seiurus aurocapilla) as a model for testing food-value theory

    USGS Publications Warehouse

    Streby, Henry M.; Peterson, Sean M.; Scholtens, Brian; Monroe, Adrian; Andersen, David

    2013-01-01

    Food-value theory states that territorial animals space themselves such that each territory contains adequate food for rearing young. The ovenbird (Seiurus aurocapilla) is often cited as a species for which this hypothesis is supported because ovenbird territory size is inversely related to ground-invertebrate abundance within territories. However, little is known about juvenile ovenbird diet and whether food availability is accurately assessed using ground-sampling methods. We examined the relationship between ground-litter food availability and juvenile ovenbird diet in mixed northern hardwood-coniferous forests of north-central Minnesota. We sampled food availability with pitfall traps and litter samples, and concurrently sampled diet of juvenile ovenbirds from stomach samples. We found that juvenile ovenbirds were fed selectively from available food resources. In addition, we found that both ground-sampling methods greatly under-sampled forest caterpillars and snails, which together comprised 63% of juvenile ovenbird diet by mass. Combined with recent radio-telemetry findings that spot-mapping methods can poorly estimate territory size for forest songbirds, our results suggest that comparisons of spot-mapped ovenbird territories with ground-sampled invertebrate availability may not be reliable tests of food-value theory.

  15. Does the choice of nucleotide substitution models matter topologically?

    PubMed

    Hoff, Michael; Orf, Stefan; Riehm, Benedikt; Darriba, Diego; Stamatakis, Alexandros

    2016-03-24

    In the context of a Master's-level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available open-source code for testing all 203 possible nucleotide substitution models in the Maximum Likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question of whether model selection matters topologically, that is, whether conducting ML inferences under the optimal model, instead of a standard General Time Reversible (GTR) model, yields different tree topologies. We also assess to what degree the models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. We find that all three factors (by order of impact: nucleotide model selection, information criterion used, sample size definition) can yield substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared to an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences.
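    The three criteria compared in the study are simple functions of the maximized log-likelihood; the sketch below (with made-up numbers) shows why the sample-size definition, which enters only AICc and BIC, can shift a model ranking:

```python
import math

def aic(log_l: float, k: int) -> float:
    """Akaike information criterion for k free parameters."""
    return 2 * k - 2 * log_l

def aicc(log_l: float, k: int, n: int) -> float:
    """Small-sample corrected AIC; the correction diverges as n approaches k + 1."""
    return aic(log_l, k) + (2 * k * (k + 1)) / (n - k - 1)

def bic(log_l: float, k: int, n: int) -> float:
    """Bayesian information criterion; the penalty grows with ln(n)."""
    return k * math.log(n) - 2 * log_l

# Same hypothetical fit scored under the two sample-size definitions:
# n = #sites (1000) versus n = #sites * #taxa (1000 * 20).
print(bic(-5000.0, 10, 1000), bic(-5000.0, 10, 1000 * 20))
```

    Because the BIC penalty per parameter is ln(n), counting #sites × #taxa penalizes parameter-rich substitution models more heavily than counting #sites alone, so the two definitions can select different models.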

  16. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
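    At its core, the two-stage framework above extends the classical instrumental-variables estimator; in the single-instrument, single-covariate case the IV slope reduces to cov(z, y) / cov(z, x). A toy simulation (synthetic data, not the paper's mouse obesity analysis) shows this estimator recovering a causal slope that ordinary regression would miss due to confounding:

```python
import random

def iv_estimate(z, x, y):
    """Classical single-instrument IV slope: cov(z, y) / cov(z, x)."""
    zbar = sum(z) / len(z)
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    num = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
    den = sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x))
    return num / den

random.seed(1)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument (e.g. a genetic variant)
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
x = [zi + ui for zi, ui in zip(z, u)]        # exposure (e.g. gene expression)
y = [2 * xi + ui for xi, ui in zip(x, u)]    # outcome; true causal slope = 2

print(round(iv_estimate(z, x, y), 2))        # near 2; naive OLS would be biased upward
```

    The paper's contribution is making both stages of this idea work when thousands of candidate instruments and covariates must be screened simultaneously via penalized regression.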

  17. Portion Sizes from 24-Hour Dietary Recalls Differed by Sex among Those Who Selected the Same Portion Size Category on a Food Frequency Questionnaire.

    PubMed

    Kang, Minji; Park, Song-Yi; Boushey, Carol J; Wilkens, Lynne R; Monroe, Kristine R; Le Marchand, Loïc; Kolonel, Laurence N; Murphy, Suzanne P; Paik, Hee-Young

    2018-05-08

    Accounting for sex differences in food portions may improve dietary measurement; however, this factor has not been well examined. The aim of this study was to examine sex differences in reported food portions from 24-hour dietary recalls (24HDRs) among those who selected the same portion size category on a quantitative food frequency questionnaire (QFFQ). This study had a cross-sectional design. Participants (n=319) were members of the Hawaii-Los Angeles Multiethnic Cohort who completed three 24HDRs and a QFFQ in a calibration study conducted in 2010 and 2011. Portions of individual foods reported in the 24HDRs served as the outcome measures. Mean food portions from 24HDRs were compared between men and women who reported the same portion size on the QFFQ, after adjustment for race/ethnicity using a linear regression model. The actual amounts and the assigned amounts of the selected portion sizes in the QFFQ were compared using a one-sample t test for men and women separately. Of 163 food items with portion size options listed in the QFFQ, 32 were reported in 24HDRs by ≥20 men and ≥20 women who selected the same portion size in the QFFQ. Although they chose the same portion size on the QFFQ, mean intake amounts from 24HDRs were significantly higher for men than for women for "beef/lamb/veal," "white rice," "brown/wild rice," "lettuce/tossed salad," "eggs cooked/raw," "whole wheat/rye bread," "buns/rolls," and "mayonnaise in sandwiches." For men, mean portions of 14 items from the 24HDRs were significantly different from the assigned amounts for QFFQ items (seven higher and seven lower), whereas for women, mean portions of 14 items differed significantly from the assigned amounts (five of them higher).
    These sex differences in reported 24HDR food portions, even among participants who selected the same portion size on the QFFQ, suggest that methods accounting for differences in the portions consumed by men and women when QFFQs are quantified may provide more accurate absolute dietary intakes.

  18. Novel hybrid cryo-radial method: an emerging alternative to CT-guided biopsy in suspected lung cancer. A prospective case series and description of technique.

    PubMed

    Herath, Samantha; Yap, Elaine

    2018-02-01

    In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R-EBUS) is emerging as a safer method than CT-guided biopsy. Despite the better safety profile, the yield of R-EBUS remains lower (73%) than that of CT-guided biopsy (90%) due to the smaller size of the samples. We adopted a hybrid method by adding cryobiopsy via the R-EBUS guide sheath (GS) to produce larger, non-crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure at our institution. R-EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned biopsy area in advance and inflated post-biopsy to minimize the risk of bleeding in all patients. A chest X-ray was performed 1 h post-procedure. All the PPLs were visualized with R-EBUS. The mean diameter of cryobiopsy samples was twice that of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in diagnostic yield and a reduction in the need for repeat procedures, without compromising the marked safety profile of R-EBUS. Using an endobronchial blocker improves the safety of this procedure.

  19. THE AzTEC/SMA INTERFEROMETRIC IMAGING SURVEY OF SUBMILLIMETER-SELECTED HIGH-REDSHIFT GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younger, Joshua D.; Fazio, Giovanni G.; Huang Jiasheng

    We present results from a continuing interferometric survey of high-redshift submillimeter galaxies (SMGs) with the Submillimeter Array, including high-resolution (beam size ≈2 arcsec) imaging of eight additional AzTEC 1.1 mm selected sources in the COSMOS field, for which we obtain six reliable (peak signal-to-noise ratio (S/N) > 5, or peak S/N > 4 with multiwavelength counterparts within the beam) and two moderate-significance (peak S/N > 4) detections. When combined with previous detections, this yields an unbiased sample of millimeter-selected SMGs with complete interferometric follow-up. With this sample in hand, we (1) empirically confirm the radio-submillimeter association, (2) examine the submillimeter morphology of the sample, including the nature of SMGs with multiple radio counterparts and constraints on the physical scale of the far infrared, and (3) find additional evidence for a population of extremely luminous, radio-dim SMGs that peaks at higher redshift than previous, radio-selected samples. In particular, the presence of such a population of high-redshift sources has important consequences for models of galaxy formation, which struggle to account for such objects even under liberal assumptions, and for dust production models given the limited time since the big bang.

  20. Development of a spatial sampling protocol using GIS to measure health disparities in Bobo-Dioulasso, Burkina Faso, a medium-sized African city.

    PubMed

    Kassié, Daouda; Roudot, Anna; Dessay, Nadine; Piermay, Jean-Luc; Salem, Gérard; Fournet, Florence

    2017-04-18

    Many cities in developing countries experience unplanned and rapid growth. Several studies have shown that irregular urbanization and uneven provision of urban facilities produce different health risks and uneven exposure to specific diseases. Consequently, health surveys within cities should be carried out at the micro-local scale, and sampling methods should try to capture this urban diversity. This article describes the methodology used to develop a multi-stage sampling protocol to select a population for a demographic survey investigating health disparities in the medium-sized city of Bobo-Dioulasso, Burkina Faso. It is based on a typology of Bobo-Dioulasso that takes the city's heterogeneity into account, as determined by analysis of the built environment and of the distribution of urban infrastructures, such as healthcare structures or water fountains, through photo-interpretation of aerial photographs and satellite images. Principal component analysis and hierarchical ascendant classification were then used to generate the city typology. Five groups of spaces with specific profiles were identified according to a set of variables that can be considered proxy indicators of health status. Within these five groups, four sub-spaces were randomly selected for the study. We were then able to survey 1045 households across the selected sub-spaces. The pertinence of this approach is discussed relative to classical sampling techniques, such as the random walk method. This urban space typology allowed us to select a population living in areas representative of the uneven urbanization process, and to characterize its health status with regard to several indicators (nutritional status, communicable and non-communicable diseases, and anaemia). Although this method should be validated and compared with more established methods, it appears to be an alternative for developing countries where geographic and population data are scarce.

  1. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolution neural network framework, this paper breaks the constraints of the original convolution neural network framework, which requires large training samples of identical size. The input images are moved and cropped to generate sub-graphs of equal size. Dropout is then applied to the generated sub-graphs, increasing the diversity of the samples and preventing overfitting. Proper subsets are randomly selected from the sub-graph set such that every subset contains the same number of elements and no two subsets are identical. These proper subsets are used as input layers for the convolution neural network. Through the convolution layers, pooling, the fully connected layer, and the output layer, the classification loss rates of the test set and training set were obtained. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.
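    The move-and-crop step described above can be sketched as a sliding-window helper; `crop_subgraphs` is a hypothetical name, and this is a data-preparation sketch rather than the authors' network code:

```python
def crop_subgraphs(image, size, stride):
    """Slide a size x size window over a 2-D image (list of lists) with the
    given stride, returning equally sized sub-graphs."""
    h, w = len(image), len(image[0])
    crops = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            crops.append([row[left:left + size] for row in image[top:top + size]])
    return crops

# A 6x6 toy "image" cropped into 3x3 sub-graphs with stride 3 yields 4 crops.
img = [[r * 6 + c for c in range(6)] for r in range(6)]
subs = crop_subgraphs(img, 3, 3)
```

    Random proper subsets of `subs` (all of the same cardinality, no two identical) would then feed the network's input layer, as the abstract describes.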

  2. Data on microscale atmospheric pollution of Bolshoy Kamen town (Primorsky region, Russia)

    NASA Astrophysics Data System (ADS)

    Kholodov, Aleksei; Ugay, Sergey; Drozd, Vladimir; Maiss, Natalia; Golokhvast, Kirill

    2017-10-01

    The paper discusses the study of atmospheric particulate matter in the town of Bolshoy Kamen by means of laser granulometry of snow water samples. Snow sampling points were selected close to major enterprises, along the main streets and roads of the town, and in the residential area. The near-ground layer of the town's atmosphere contains particulate matter of three main size classes: under 10 microns, 10-50 microns, and over 700 microns. It is shown that the atmosphere of this town is only lightly polluted with particles under 10 μm (PM10). In only 5 of 11 sampling points did we find microparticles potentially hazardous to human health in significant quantities, from 16.2% to 34.6%. Over most of the town's territory, large particles (over 400 μm) dominate, reaching 79.2%. Judging by the particle size analysis of snow water samples, Bolshoy Kamen can be considered safe in terms of the presence of particles under 10 μm (PM10) in the atmosphere.

  3. Application of ultra-high energy hollow cathode helium-silver laser (224.3 nm) as Jc's, grain size surface's promoter for Ir-optimally doped-Mg0.94Ir0.06B2 superconductors

    NASA Astrophysics Data System (ADS)

    Elsabawy, Khaled M.; Fallatah, Ahmed M.; Alharthi, Salman S.

    2018-07-01

    For the first time, a high-energy helium-silver laser, which belongs to the category of metal-vapor lasers, was applied as a microstructure promoter for an optimally Ir-doped MgB2 sample. The Ir-optimally-doped Mg0.94Ir0.06B2 superconducting sample was selected from an article previously published by one of the authors. The samples were irradiated with three different doses (1, 2 and 3 h) from an ultrahigh-energy He-Ag laser with an average power of 103 W/cm2 at a distance of 3 cm. Superconducting measurements and microstructural features were investigated as a function of He-Ag laser irradiation dose. The results indicated that irradiation with the ultrahigh-energy He-Ag laser reduced grain sizes and consequently enhanced the measured Jc values. Furthermore, the Tc-offsets of all irradiated samples are better than those of the non-irradiated Mg0.94Ir0.06B2.

  4. A low-volume cavity ring-down spectrometer for sample-limited applications

    NASA Astrophysics Data System (ADS)

    Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.

    2014-08-01

    In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.
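
    The effective internal volumes quoted above are consistent with scaling the 9.6 ml cell volume by the ratio of cell pressure to standard pressure (760 Torr); a quick sanity check, assuming that convention:

    ```python
    def effective_volume(cell_ml, pressure_torr, std_torr=760.0):
        """Gas amount in the cell expressed as an equivalent volume at
        standard pressure: V_eff = V_cell * P_cell / P_std."""
        return cell_ml * pressure_torr / std_torr

    # The three operating pressures from the paper and its stated volumes.
    for pressure, paper_value in [(140, 1.8), (45, 0.57), (20, 0.25)]:
        v = effective_volume(9.6, pressure)
        print(f"{pressure:3d} Torr -> {v:.2f} ml (paper: {paper_value} ml)")
    ```

    The computed values agree with the paper's figures to within rounding, supporting that reading of "effective internal volume".
    
    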

  5. Measuring Coverage in MNCH: Design, Implementation, and Interpretation Challenges Associated with Tracking Vaccination Coverage Using Household Surveys

    PubMed Central

    Cutts, Felicity T.; Izurieta, Hector S.; Rhoda, Dale A.

    2013-01-01

    Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis. PMID:23667334

  6. The King Pre-Retirement Checklist: Assessing Differences in Pre-Retirement Planning.

    ERIC Educational Resources Information Center

    Zitzow, Darryl; King, Donald N.

    In an effort to assess the retirement preparedness of Midwestern populations above the age of 28, the King Pre-Retirement Checklist was administered to a sample of 458 persons randomly selected and proportionally stratified by geographic location and community size. Factors examined were financial, social, family cohesion, mobility/health,…

  7. EDUCATIONAL AND VOCATIONAL GOALS OF RURAL YOUTH IN THE SOUTH.

    ERIC Educational Resources Information Center

    SPERRY, IRWIN V.; AND OTHERS

    The objectives of the study were to (1) compare educational goals of rural youth and their parents and (2) determine the relationships of the similarities and differences to such factors as geographic area, state, sex, level of living, residence, family size, and club membership. A survey sample, selected from an equipartitioned universe…

  8. Analyzing International Students' Study Anxiety in Higher Education

    ERIC Educational Resources Information Center

    Khoshlessan, Rezvan; Das, Kumer Pial

    2017-01-01

    The purpose of this study is to explore international students' study anxiety in a mid-sized public four-year university in Southeast Texas by comparing their existing study anxiety along lines of nationality, gender, age, major, degree, and stage of education. The subjects were selected using a convenience sample during the Spring of 2013. The…

  9. 10 CFR 431.325 - Units to be tested.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...

  10. Health Literacy in College Students

    ERIC Educational Resources Information Center

    Ickes, Melinda J.; Cottrell, Randall

    2010-01-01

    Objective: The purpose of this study was to assess the health literacy levels of college students, and the potential importance of health literacy. Participants: Courses were randomly selected from all upper level undergraduate courses at a large Research I university to obtain a sample size of N = 399. Methods: During the 2007-2008 school year,…

  11. How Broad Liberal Arts Training Produces Phd Economists: Carleton's Story

    ERIC Educational Resources Information Center

    Bourne, Jenny; Grawe, Nathan D.

    2015-01-01

    Several recent studies point to strong performance in economics PhD programs of graduates from liberal arts colleges. While every undergraduate program is unique and the likelihood of selection bias combines with small sample sizes to caution against drawing strong conclusions, the authors reflect on their experience at Carleton College to…

  12. The Relationships among Adult Affective Factors, Engagement in Science, and Scientific Competencies

    ERIC Educational Resources Information Center

    Tsai, Chun-Yen; Li, Yuh-Yuh; Cheng, Ying-Yao

    2017-01-01

    This study investigated the relationship among adult affective factors, engagement in science, and scientific competencies. Probability proportional to size sampling was used to select 504 participants between the ages of 18 and 70 years. Data were collected through individual face-to-face interviews. The results of hierarchical regression…
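
    Probability-proportional-to-size selection, as used in the study above, can be sketched with a generic systematic PPS procedure; the size measures below are invented for illustration and are not the study's sampling frame.

    ```python
    import random

    def systematic_pps(sizes, n, seed=0):
        """Systematic PPS sampling: take n equally spaced points along the
        cumulative size scale, so a unit's selection probability is
        proportional to its size measure (assuming no size exceeds the step)."""
        total = sum(sizes)
        step = total / n
        start = random.Random(seed).uniform(0, step)
        points = [start + i * step for i in range(n)]
        chosen, cum, j = [], 0.0, 0
        for idx, size in enumerate(sizes):
            cum += size
            while j < n and points[j] <= cum:
                chosen.append(idx)
                j += 1
        return chosen
    ```

    Units with larger size measures are hit by the equally spaced points more often, giving each unit selection probability n × size / total.
    
    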

  13. Microsatellite markers reveal the below ground distribution of genets in two species of Rhizopogon forming tuberculate ectomycorrhizas on Douglas-fir.

    Treesearch

    Annette M. Kretzer; Susie Dunham; Randy Molina; Joseph W. Spatafora

    2003-01-01

    We have developed microsatellite markers for two sister species of Rhizopogon, R. vesiculosus and R. vinicolor (Boletales, Basidiomycota), and used selected markers to investigate genet size and distribution from ectomycorrhizal samples. Both species form ectomycorrhizas with tuberculate morphology on Douglas-fir (...

  14. Accommodation Decision Making for Postsecondary Students with Learning Disabilities: Individually Tailored or One Size Fits All?

    ERIC Educational Resources Information Center

    Weis, Robert; Dean, Emily L.; Osborne, Karen J.

    2016-01-01

    Clinicians uniformly recommend accommodations for college students with learning disabilities; however, we know very little about which accommodations they select and the validity of their recommendations. We examined the assessment documentation of a large sample of community college students receiving academic accommodations for learning…

  15. Effect of Study Design on Sample Size in Studies Intended to Evaluate Bioequivalence of Inhaled Short‐Acting β‐Agonist Formulations

    PubMed Central

    Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai

    2017-01-01

    Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3‐by‐1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration–recommended 3‐by‐1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3‐by‐1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90‐μg test dose and a 720‐μg reference dose (42% cost reduction). Combining a 180‐μg test dose and a 720‐μg reference dose produced an estimated 36% cost reduction. PMID:29281130
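
    The Monte Carlo approach to sample-size estimation mentioned above can be illustrated in a generic, much-simplified form: simulate many two-arm trials at a candidate sample size, estimate power, and increase the size until a target power is reached. The effect size, variability, and test used here are placeholder assumptions, not the study's actual simulation inputs.

    ```python
    import random
    import statistics

    def simulated_power(n, effect, sd, alpha_z=1.96, reps=2000, seed=0):
        """Fraction of simulated two-arm trials (n per arm) in which a
        z-test on the difference in means reaches significance."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(reps):
            a = [rng.gauss(0.0, sd) for _ in range(n)]
            b = [rng.gauss(effect, sd) for _ in range(n)]
            se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
            if abs(statistics.mean(b) - statistics.mean(a)) / se > alpha_z:
                hits += 1
        return hits / reps

    def required_n(effect, sd, target=0.8, start=5, step=5):
        """Smallest per-arm n (on a coarse grid) whose simulated power
        meets the target."""
        n = start
        while simulated_power(n, effect, sd) < target:
            n += step
        return n
    ```

    The same loop structure applies to the bioequivalence setting: swap in the trial's dose-response model and acceptance criterion, then search over candidate sample sizes.
    
    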

  16. The Influences of Soil Characteristics on Nest-Site Selection in Painted Turtles (Chrysemys picta)

    NASA Astrophysics Data System (ADS)

    Page, R.

    2017-12-01

    A variety of animals dig nests and lay their eggs in soil, leaving them to incubate and hatch without assistance from the parents. Nesting habitat is important for these organisms, many of which exhibit temperature-dependent sex determination (TSD), whereby the incubation temperature determines the sex of each hatchling. However, suitable nesting habitat may be limited due to anthropogenic activities and global temperature increases. Soil thermal properties are critical to these organisms and are positively correlated with water retention and soil carbon; carbon-rich soils result in higher incubation temperatures. We investigated nest-site selection in painted turtles (Chrysemys picta) inhabiting an anthropogenic pond in south central Pennsylvania. We surveyed for turtle nests and documented location, depth, width, temperature, canopy coverage, clutch size, and hatch success for a total of 31 turtle nests. To address the influence of soil carbon and particle size on nest selection, we analyzed samples collected from: 1) actual nests that were depredated, 2) false nests (incomplete nests aborted during digging prior to completion), and 3) randomized locations. Soil samples were separated into coarse, medium, and fine grain-size fractions through a stack of sieves. Samples were combusted in a total carbon analyzer to measure weight percent organic carbon. We found that anthropogenic activity at this site has created homogeneous, sandy, compacted soils in the uppermost layer that may limit females' access to appropriate nesting habitat. Turtle nesting activity was limited to a linear region north of the pond and was constrained by an impassable rail line. Relative to other studies, turtle nests were notably shallow (5.8±0.9 cm) and placed close to the pond. Compared to false nests and random locations, turtle-selected sites averaged more coarse grains (35% compared to 20.24% and 20.57%) and fewer fine grains (47% compared to 59% and 59%, respectively). Despite remarkably high soil carbon along the rail line (47.08%), turtles nested there with slightly higher hatch success. We suggest that the turtles are limited to sandy, compact soils with low heat capacities and may compensate by also nesting adjacent to the rail line, where high soil carbon could increase incubation temperatures.

  17. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  18. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla ) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  19. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results: The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
PMID:25203681
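
    The window-size and boundary effects noted above can be seen even in a toy example: when a square sampling window's edge cuts through a point lattice, the count-per-area density estimate drifts from the true value, and the drift grows as the window shrinks. The regular grid of "cones" below is synthetic, not the study's data.

    ```python
    def density_in_window(points, cx, cy, side):
        """Cones per unit area inside a square window centred on (cx, cy)."""
        half = side / 2.0
        inside = [p for p in points
                  if abs(p[0] - cx) <= half and abs(p[1] - cy) <= half]
        return len(inside) / (side * side)

    # Synthetic mosaic: unit-spaced grid of cones on [0, 20) x [0, 20),
    # so the true density is exactly 1 cone per unit area.
    cones = [(x + 0.5, y + 0.5) for x in range(20) for y in range(20)]
    for side in (16, 5, 3):
        print(side, density_in_window(cones, 10.0, 10.0, side))
    ```

    Windows whose edges align with the lattice recover the true density, while small windows that clip boundary cones overcount; in real mosaics the same edge sensitivity motivates the paper's recommendation of a large buffer zone.
    
    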

  20. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.

  1. Current trends and challenges in sample preparation for metallic nanoparticles analysis in daily products and environmental samples: A review

    NASA Astrophysics Data System (ADS)

    De la Calle, Inmaculada; Menta, Mathieu; Séby, Fabienne

    2016-11-01

    Due to the increasing use of nanoparticles (NPs) in consumer products, it becomes necessary to develop different strategies for their detection, identification, characterization and quantification in a wide variety of samples. Since the analysis of NPs in consumer products and environmental samples is particularly troublesome, a detailed description of challenges and limitations is given here. This review mainly focuses on sample preparation procedures applied for the techniques most used for metallic and metal oxide NP characterization in consumer products, along with the most outstanding publications on biological and environmental samples (from 2006 to 2015). We summarize the procedures applied for total metal content, extraction/separation and/or preconcentration of NPs from the matrix, separation of metallic NPs from their ions or from larger particles, and NP size fractionation. Sample preparation procedures specifically for microscopy are also described. Selected applications in cosmetics, food, other consumer products, biological tissues and environmental samples are presented. Advantages and drawbacks of these procedures are considered. Moreover, selected simplified schemes for NP sample preparation, as well as the usual techniques applied, are included. Finally, promising directions for further investigation are discussed.

  2. The effectiveness of increased apical enlargement in reducing intracanal bacteria.

    PubMed

    Card, Steven J; Sigurdsson, Asgeir; Orstavik, Dag; Trope, Martin

    2002-11-01

    It has been suggested that the apical portion of a root canal is not adequately disinfected by typical instrumentation regimens. The purpose of this study was to determine whether instrumentation to sizes larger than typically used would more effectively remove culturable bacteria from the canal. Forty patients with clinical and radiographic evidence of apical periodontitis were recruited from the endodontic clinic. Mandibular cuspids (n = 2), bicuspids (n = 11), and molars (mesial roots) (n = 27) were selected for the study. Bacterial sampling was performed upon access and after each of two consecutive instrumentations. The first instrumentation utilized 1% NaOCl and 0.04 taper ProFile rotary files. The cuspid and bicuspid canals were instrumented to a #8 size and the molar canals to a #7 size. The second instrumentation utilized LightSpeed files and 1% NaOCl irrigation for further enlargement of the apical third. Typically, molars were instrumented to size 60 and cuspid/bicuspid canals to size 80. Our findings show that 100% of the cuspid/bicuspid canals and 81.5% of the molar canals were rendered bacteria-free after the first instrumentation. The molar results improved to 89% after the second instrumentation. Of the molar mesial canals without a clinically detectable communication (59.3%), 93% were rendered bacteria-free with the first instrumentation. Using a Wilcoxon rank sum test, statistically significant differences (p < 0.0001) were found between the initial sample and the samples after the first and second instrumentations. The differences between the samples that followed the two instrumentation regimens were not significant (p = 0.0617). It is concluded that simple root canal systems (without multiple canal communications) may be rendered bacteria-free when preparation of this type is utilized.

  3. Quality testing of an innovative cascade separation system for multiple cell separation

    NASA Astrophysics Data System (ADS)

    Pierzchalski, Arkadiusz; Moszczynska, Aleksandra; Albrecht, Bernd; Heinrich, Jan-Michael; Tarnok, Attila

    2012-03-01

    Isolation of different cell types from mixed samples in one separation step by FACS is feasible but expensive and slow. It is cheaper and faster, but still challenging, by magnetic separation. An innovative bead-based cascade system (pluriSelect GmbH, Leipzig, Germany) relies on simultaneous physical separation of different cell types. It is based on antibody-mediated binding of cells to beads of different sizes and isolation with sieves of different mesh sizes. We validated the pluriSelect system for single-parameter (CD3) separation and for simultaneous separation of CD3 and CD15 cells from EDTA blood samples. Results were compared with those obtained by MACS (Miltenyi Biotec) magnetic separation (CD3 separation). pluriSelect separation was done in whole blood, MACS on Ficoll-gradient-isolated leukocytes, according to the manufacturers' protocols. Isolated and residual cells were immunophenotyped with a 7-color, 8-antibody panel (CD3; CD16/56; CD4; CD8; CD14; CD19; CD45; HLA-DR) on a CyFlowML flow cytometer (Partec GmbH). Cell count (Coulter), purity, yield and viability (7-AAD exclusion) were determined. There were no significant differences between the two systems regarding purity (92-98%), yield (50-60%) and viability (92-98%) of isolated cells. pluriSelect separation was slightly faster than MACS (1.15 h versus 1.5 h). Moreover, no pre-enrichment steps were necessary. In conclusion, pluriSelect is a fast, simple and gentle system for efficient simultaneous separation of two cell subpopulations directly from whole blood and can provide a simple alternative to FACS. The isolated cells can be used for further research applications.

  4. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods

    PubMed Central

    2013-01-01

    Background: Machine learning techniques are becoming useful as an alternative approach to conventional medical diagnosis or prognosis, as they are good at handling noisy and incomplete data, and significant results can be attained despite a small sample size. Traditionally, clinicians make prognostic decisions based on clinicopathologic markers. However, it is not easy for even the most skilful clinician to arrive at an accurate prognosis using these markers alone. Thus, there is a need to use genomic markers to improve the accuracy of prognosis. The main aim of this research is to apply a hybrid of feature selection and machine learning methods to oral cancer prognosis based on the correlation of clinicopathologic and genomic markers. Results: In the first stage of this research, five feature selection methods were proposed and experimented on the oral cancer prognosis dataset. In the second stage, models built with the features selected by each feature selection method were tested on the proposed classifiers. Four types of classifiers were chosen: ANFIS, artificial neural network, support vector machine and logistic regression. k-fold cross-validation was implemented on all classifiers due to the small sample size. The hybrid model of ReliefF-GA-ANFIS with 3 input features (drink, invasion and p63) achieved the best accuracy (accuracy = 93.81%; AUC = 0.90) for oral cancer prognosis. Conclusions: The results revealed that prognosis is superior when both clinicopathologic and genomic markers are present. The selected features can be investigated further to validate their potential as a significant prognostic signature in oral cancer studies. PMID:23725313
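
    The k-fold cross-validation used above to cope with the small sample can be sketched generically; this fold construction is a standard implementation, not the authors' code.

    ```python
    import random

    def kfold_indices(n_samples, k, seed=0):
        """Shuffle sample indices and cut them into k near-equal folds;
        each fold serves once as the held-out test set while the
        remaining folds form the training set."""
        idx = list(range(n_samples))
        random.Random(seed).shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        splits = []
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            splits.append((train, test))
        return splits
    ```

    With a small dataset, every sample is tested exactly once across the k rounds, so the pooled test predictions use all the data without ever testing on a training sample.
    
    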

  5. The isolation of salmonellas from British pork sausages and sausage meat.

    PubMed Central

    Roberts, D.; Boag, K.; Hall, M. L.; Shipp, C. R.

    1975-01-01

    Between 1969 and 1974, 1467 packets (3309 samples) of pork sausages and sausage meat produced by two large and two medium sized manufacturers and several local butchers were examined for the presence of salmonellas. Of these, 435 packets (786 samples) were found to contain salmonellas, but there was a wide variation in the isolation rates according to the producer. The salmonella incidence in samples from several small and two medium sized producers was low (0-11%) while the results from the two large producers investigated showed a striking difference: the rate of salmonella contamination in the product of one was low (about 2%) and in that of the other consistently high (40-60%). A comparison of liquid enrichment media, incubation temperatures and selective agar media was also carried out to determine the most efficient combination for the isolation of salmonellas from minced meat products. The results showed that (a) incubation of enrichment cultures at 43 degrees C. yielded a consistently greater number of salmonella isolations than at 37 degrees C., regardless of plating medium, (b) tetrathionate broth A (Rolfe) was superior to selenite broth as an enrichment medium at both 37 and 43 degrees C. and (c) brilliant green agar gave better results than deoxycholate citrate sucrose agar and bismuth sulphite agar as a selective medium. PMID:1100710

  6. The isolation of salmonellas from British pork sausages and sausage meat.

    PubMed

    Roberts, D; Boag, K; Hall, M L; Shipp, C R

    1975-10-01

    Between 1969 and 1974, 1467 packets (3309 samples) of pork sausages and sausage meat produced by two large and two medium sized manufacturers and several local butchers were examined for the presence of salmonellas. Of these, 435 packets (786 samples) were found to contain salmonellas, but there was a wide variation in the isolation rates according to the producer. The salmonella incidence in samples from several small and two medium sized producers was low (0-11%) while the results from the two large producers investigated showed a striking difference: the rate of salmonella contamination in the product of one was low (about 2%) and in that of the other consistently high (40-60%). A comparison of liquid enrichment media, incubation temperatures and selective agar media was also carried out to determine the most efficient combination for the isolation of salmonellas from minced meat products. The results showed that (a) incubation of enrichment cultures at 43 degrees C. yielded a consistently greater number of salmonella isolations than at 37 degrees C., regardless of plating medium, (b) tetrathionate broth A (Rolfe) was superior to selenite broth as an enrichment medium at both 37 and 43 degrees C. and (c) brilliant green agar gave better results than deoxycholate citrate sucrose agar and bismuth sulphite agar as a selective medium.

  7. The Joint Effects of Background Selection and Genetic Recombination on Local Gene Genealogies

    PubMed Central

    Zeng, Kai; Charlesworth, Brian

    2011-01-01

    Background selection, the effects of the continual removal of deleterious mutations by natural selection on variability at linked sites, is potentially a major determinant of DNA sequence variability. However, the joint effects of background selection and genetic recombination on the shape of the neutral gene genealogy have proved hard to study analytically. The only existing formula concerns the mean coalescent time for a pair of alleles, making it difficult to assess the importance of background selection from genome-wide data on sequence polymorphism. Here we develop a structured coalescent model of background selection with recombination and implement it in a computer program that efficiently generates neutral gene genealogies for an arbitrary sample size. We check the validity of the structured coalescent model against forward-in-time simulations and show that it accurately captures the effects of background selection. The model produces more accurate predictions of the mean coalescent time than the existing formula and supports the conclusion that the effect of background selection is greater in the interior of a deleterious region than at its boundaries. The level of linkage disequilibrium between sites is elevated by background selection, to an extent that is well summarized by a change in effective population size. The structured coalescent model is readily extendable to more realistic situations and should prove useful for analyzing genome-wide polymorphism data. PMID:21705759

  8. The joint effects of background selection and genetic recombination on local gene genealogies.

    PubMed

    Zeng, Kai; Charlesworth, Brian

    2011-09-01

    Background selection, the effects of the continual removal of deleterious mutations by natural selection on variability at linked sites, is potentially a major determinant of DNA sequence variability. However, the joint effects of background selection and genetic recombination on the shape of the neutral gene genealogy have proved hard to study analytically. The only existing formula concerns the mean coalescent time for a pair of alleles, making it difficult to assess the importance of background selection from genome-wide data on sequence polymorphism. Here we develop a structured coalescent model of background selection with recombination and implement it in a computer program that efficiently generates neutral gene genealogies for an arbitrary sample size. We check the validity of the structured coalescent model against forward-in-time simulations and show that it accurately captures the effects of background selection. The model produces more accurate predictions of the mean coalescent time than the existing formula and supports the conclusion that the effect of background selection is greater in the interior of a deleterious region than at its boundaries. The level of linkage disequilibrium between sites is elevated by background selection, to an extent that is well summarized by a change in effective population size. The structured coalescent model is readily extendable to more realistic situations and should prove useful for analyzing genome-wide polymorphism data.
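
    The abstract's benchmark quantity, the mean coalescent time for a pair of alleles, is easy to check by direct simulation in the neutral case. The sketch below is illustrative only (it is not the authors' structured coalescent program): it draws geometric coalescence times for two lineages in a Wright-Fisher population and recovers the classical expectation of 2N generations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500          # diploid population size (hypothetical value for illustration)
reps = 200_000

# In each generation, two lineages coalesce with probability 1/(2N),
# so the pairwise coalescent time is geometric with mean 2N generations.
t2 = rng.geometric(1.0 / (2 * N), size=reps).mean()
```

    Against this neutral baseline, background selection shortens coalescent times at linked neutral sites, which is why the authors compare their structured model's predictions to forward-in-time simulations.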

  9. Accuracy of self- and parental perception of overweight among Latino preadolescents.

    PubMed

    Intagliata, Valentina; Ip, Edward H; Gesell, Sabina B; Barkin, Shari L

    2008-01-01

    This investigation examines self-perception and parental perception of child body size and factors associated with accurate parental perception of child body size. Latino at-risk for overweight (AROW) and/or overweight preadolescent children (ages 8-11 years) along with their parents were recruited (N=123 dyads). Children's body mass index (BMI) was measured but not discussed before participants were shown pictures of body sizes and asked to select the image that represented the child's body. The correlation between the child's body size selection and the child's actual BMI was 0.117 (p=0.20), whereas the correlation between the parent's assessment of the child's body size and the child's actual BMI was 0.470 (p<0.001). Logistic regression revealed that only parental education level (> or =college) was associated with a more accurate parental perception of their child's body size (OR: 0.11/95% CI: 0.01, 0.89), while child's sex, parental BMI, and parental health status were not associated with a perception that corresponded to the child's BMI. The sample was drawn from a single community clinic in Forsyth County, which serves a large population of newer Latino immigrants in the county. The results indicate that (1) Latino AROW/overweight preadolescent children do not have an accurate perception of their own body size; (2) Latino parents have a more accurate perception of their child's body size, with a moderately sized correlation suggesting that their perception of their child's body size is frequently inaccurate; and (3) Latino parents with higher education perceive their child's body size more accurately than less educated parents.

  10. Characterizing porosity of selected Early Palaeozoic shales from the Baltic Basin: organic petrology, gas adsorption and WIP and KIP approach.

    NASA Astrophysics Data System (ADS)

    Słomski, Piotr; Mastalerz, Maria; Szczepański, Jacek; Derkowski, Arkadiusz; Topór, Tomasz

    2017-04-01

    The porosity in the selected Ordovician and Silurian mudstones from the Baltic Basin collected from four wells (W1, M1, B1 and O3) was examined in a suite of 78 samples representing the Kopalino, Sasino, Prabuty, Pasłęk (including Jantar Member) and Pelplin Formations. Organic petrology and mineral composition, along with N2 low-pressure adsorption (NLPA), water and kerosene immersion porosimetry (WIP and KIP, respectively) as well as image analysis techniques, were used to determine pore volumes, pore sizes and pore-size distributions and to evaluate factors controlling porosity. The majority of the investigated samples represent argillaceous mudstones. Only a few samples from O3 and W1 are different lithologically and represent siliceous-argillaceous, calcareous, or calcareous-argillaceous mudstones. The samples are characterized by total organic carbon (TOC) content ranging from 0.13 to 7.20 wt. % and vitrinite reflectance (Ro) ranging from 1.02 to 1.22%, indicating late mature rocks within the condensate to wet-gas window. Total porosity measured using WIP is in the range from 4.6% to 10%, while KIP gave values from 1.5% to 8.9%. The NLPA technique on the 75 µm size fraction revealed that mesopore surface area is in the range from 10.59 to 34.34 m2/g, while mesopore volume ranges from 0.024 to 0.062 cm3/g. Correlation between mesopore surface area and Ro is weak, but in general the surface area of mesopores is the largest in the least mature samples. Moreover, as indicated by the gas adsorption data, both pores greater than 30 nm and smaller than 4 nm are important contributors to the total mesopore surface area. In general, the rather weak correlation between different mudstone constituents (including kerogen types) and porosity measured by means of various techniques (WIP, KIP and NLPA) reveals that there is no single factor controlling porosity in the investigated suite of samples. 
This conclusion is also confirmed by image analysis performed on large-scale high-resolution BSE images for selected representative samples. However, for mesopores, the dominant contribution comes from organic matter in the Jantar, Prabuty and Sasino Formations, as indicated by the NLPA technique. Furthermore, the importance of clay minerals for macropore volume is indicated by the WIP and KIP techniques. Acknowledgments: the study was supported by grants SHALESEQ (No PL12-0109) and SHALEMECH (No BG2/ShaleMech/14) funded by the National Centre for Research and Development.

  11. Meta-analysis and systematic review of the number of non-syndromic congenitally missing permanent teeth per affected individual and its influencing factors

    PubMed Central

    Rakhshan, Hamid

    2016-01-01

    Summary Background and purpose: Dental aplasia (or hypodontia) is a frequent and challenging anomaly and thus of interest to many dental fields. Although the number of missing teeth (NMT) in each person is a major clinical determinant of treatment need, there is no meta-analysis on this subject. Therefore, we aimed to investigate the relevant literature, including epidemiological studies and research on dental/orthodontic patients. Methods: Among 50 reports, the effects of ethnicities, regions, sample sizes/types, subjects’ minimum ages, journals’ scientific credit, publication year, and gender composition of samples on the number of missing permanent teeth (except the third molars) per person were statistically analysed (α = 0.05, 0.025, 0.01). Limitations: The inclusion of small studies and second-hand information might reduce the reliability. Nevertheless, these strategies increased the meta-sample size and favoured the generalisability. Moreover, data weighting was carried out to account for the effect of study sizes/precisions. Results: The NMT per affected person was 1.675 [95% confidence interval (CI) = 1.621–1.728], 1.987 (95% CI = 1.949–2.024), and 1.893 (95% CI = 1.864–1.923), in randomly selected subjects, dental/orthodontic patients, and both groups combined, respectively. The effects of ethnicities (P > 0.9), continents (P > 0.3), and time (adjusting for the population type, P = 0.7) were not significant. Dental/orthodontic patients exhibited a significantly greater NMT compared to randomly selected subjects (P < 0.012). Larger samples (P = 0.000) and enrolling younger individuals (P = 0.000) might inflate the observed NMT per person. Conclusions: Time, ethnic backgrounds, and continents seem unlikely to be influencing factors. Subjects younger than 13 years should be excluded. Larger samples should be investigated by more observers. PMID:25840586
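
    Pooled per-person NMT estimates with confidence intervals, as reported above, are typically produced by inverse-variance weighting, in which each study's mean is weighted by the reciprocal of its squared standard error. A minimal sketch (the per-study means, standard errors, and sample sizes below are invented purely for illustration):

```python
import math

# Hypothetical studies: (mean NMT per affected person, standard error, n).
studies = [(1.62, 0.05, 120), (2.01, 0.04, 300), (1.85, 0.08, 60)]

# Inverse-variance weights: more precise studies count for more.
w = [1.0 / se**2 for (_, se, _) in studies]
pooled = sum(wi * m for wi, (m, _, _) in zip(w, studies)) / sum(w)
se_pooled = math.sqrt(1.0 / sum(w))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

    This fixed-effect sketch ignores between-study heterogeneity; a random-effects model would widen the interval when study means disagree more than their standard errors allow.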

  12. Evaluation of some selected vaccines and other biological products irradiated by gamma rays, electron beams and X-rays

    NASA Astrophysics Data System (ADS)

    May, J. C.; Rey, L.; Lee, Chi-Jen

    2002-03-01

    Molecular sizing potency results are presented for irradiated samples of one lot of Haemophilus b conjugate vaccine, pneumococcal polysaccharide type 6B and typhoid Vi polysaccharide vaccine. The samples were irradiated (25 kGy) by gamma rays, electron beams and X-rays. IgG and IgM antibody response in mice test results (ELISA) are given for the Hib conjugate vaccine irradiated at 0°C or frozen in liquid nitrogen.

  13. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee.

    PubMed

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-08-29

    The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increases, the variance among sample means will decrease and means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing and has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that evolution of group size could result from selection on variances in reproductive fitness. In addition, these benefits may help explain why sociality has evolved in some groups and not others.
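
    The statistical claim is easy to verify numerically: for a skewed per-bout income distribution, the variance of the mean income shrinks in proportion to the number of bouts. A minimal simulation (the exponential "income" model is an assumption chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_means(n_bouts, reps=20_000):
    """Variance of mean per-bout income across many colonies, each
    averaging over n_bouts foraging bouts (skewed exponential income)."""
    bouts = rng.exponential(1.0, size=(reps, n_bouts))
    return bouts.mean(axis=1).var()

v_solitary = var_of_means(1)    # one bout: variance of the raw distribution
v_colony = var_of_means(25)     # 25 bouts: variance shrinks roughly 25-fold
```

    The roughly 1/n decline in variance is exactly the "greater predictability in food acquisition" that the abstract attributes to larger colonies.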

  14. Little Evidence That Time in Child Care Causes Externalizing Problems During Early Childhood in Norway

    PubMed Central

    Zachrisson, Henrik Daae; Dearing, Eric; Lekhal, Ratib; Toppelberg, Claudio O.

    2012-01-01

    Associations between maternal reports of hours in child care and children’s externalizing problems at 18 and 36 months of age were examined in a population-based Norwegian sample (n = 75,271). Within a sociopolitical context of homogenously high-quality child care, there was little evidence that high quantity of care causes externalizing problems. Using conventional approaches to handling selection bias and listwise deletion for substantial attrition in this sample, more hours in care predicted higher problem levels, yet with small effect sizes. The finding, however, was not robust to using multiple imputation for missing values. Moreover, when sibling and individual fixed-effects models for handling selection bias were used, no relation between hours and problems was evident. PMID:23311645

  15. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
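
    The core difficulty the simulation study probes, that a functional form cannot be recovered from limited information, can be illustrated with a toy version of data-driven complexity selection. The sketch below is a stand-in using held-out error and global polynomials, not the MFP or spline procedures themselves; the cubic truth and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def select_degree(n, sigma=0.5, degrees=range(1, 6)):
    """Pick a polynomial degree by mean squared error on a held-out half."""
    x = rng.uniform(-2, 2, size=n)
    y = x**3 - 2 * x + rng.normal(0, sigma, size=n)   # true curve is cubic
    half = n // 2
    best_d, best_err = None, np.inf
    for d in degrees:
        coef = np.polyfit(x[:half], y[:half], d)
        err = np.mean((np.polyval(coef, x[half:]) - y[half:]) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

d_small = select_degree(40)     # little information: selection is unstable
d_large = select_degree(2000)   # ample information: at least cubic is chosen
```

    With ample data any degree below the true cubic incurs visible bias and is rejected, mirroring the abstract's finding that model recovery requires at least a medium amount of information.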

  16. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    PubMed

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

    Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority for one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction of real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, although the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.
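
    For readers wanting to reproduce the CE index, a two-class linear discriminant takes only a few lines: project onto the direction given by the pooled within-class covariance and the mean difference, threshold at the midpoint, and count misclassifications. A minimal sketch on synthetic Gaussian data (not the study's cancer dataset; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two Gaussian classes sharing a covariance matrix (LDA's core assumption).
n = 200
X0 = rng.normal([0, 0], 1.0, size=(n, 2))
X1 = rng.normal([2, 1], 1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2      # pooled within-class covariance
w = np.linalg.solve(S, m1 - m0)            # discriminant direction
c = w @ (m0 + m1) / 2                      # midpoint decision threshold
pred = (X @ w > c).astype(float)
ce = np.mean(pred != y)                    # classification error (CE)
```

    The abstract's point is that this single number can look similar for LR and LDA even when other indices (B, Q) separate the two models clearly.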

  17. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the cost associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.
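
    The protection offered by an attributes sampling plan is summarized by its operating characteristic: the probability of acceptance as a function of the true positive rate. Two plans provide "equivalent protection" when these curves nearly coincide at the quality levels of interest. A minimal sketch (the plan parameters below are hypothetical, not the plans specified in ISO 11137):

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept) for an attributes plan: test n items and accept the lot
    if at most c positives occur, each item positive with probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Compare two hypothetical plans at one quality level of interest.
p_ref = accept_prob(100, 2, 0.05)   # reference plan: n=100, accept <= 2
p_alt = accept_prob(55, 1, 0.05)    # candidate alternative: n=55, accept <= 1
```

    Sweeping p over a grid and plotting both curves is how one would verify that a cheaper plan does not weaken protection where it matters.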

  18. Ceramic Technology for Advanced Heat Engines Project Semiannual Progress Report for Period October 1985 Through March 1986

    DTIC Science & Technology

    1986-08-01

    materials (2.2 w/o and 3.0 w/o MgO). The other two batches (2.8 w/o and 3.1 w/o MgO), of higher purity, were made using E-10 zirconia powder from... (CID) powders. Two methods have been used for the coprecipitation of doped zirconia powders from solutions of chemical precursors. (4) Method I, for... of powder, approximate sample size 3.2 Kg (6.4 Kg for zirconia powder); 3. Random selection of sample; 4. Partial drying of sample to reduce caking

  19. Aqueous nitrite ion determination by selective reduction and gas phase nitric oxide chemiluminescence

    NASA Technical Reports Server (NTRS)

    Dunham, A. J.; Barkley, R. M.; Sievers, R. E.; Clarkson, T. W. (Principal Investigator)

    1995-01-01

    An improved method of flow injection analysis for aqueous nitrite ion exploits the sensitivity and selectivity of the nitric oxide (NO) chemiluminescence detector. Trace analysis of nitrite ion in a small sample (5-160 microL) is accomplished by conversion of nitrite ion to NO by aqueous iodide in acid. The resulting NO is transported to the gas phase through a semipermeable membrane and subsequently detected by monitoring the photoemission of the reaction between NO and ozone (O3). Chemiluminescence detection is selective for measurement of NO, and, since the detection occurs in the gas phase, neither sample coloration nor turbidity interferes. The detection limit for a 100-microL sample is 0.04 ppb of nitrite ion. The precision at the 10 ppb level is 2% relative standard deviation, and 60-180 samples can be analyzed per hour. Samples of human saliva and food extracts were analyzed; the results from a standard colorimetric measurement are compared with those from the new chemiluminescence method in order to further validate the latter method. A high degree of selectivity is obtained due to the three discriminating steps in the process: (1) the nitrite ion to NO conversion conditions are virtually specific for nitrite ion, (2) only volatile products of the conversion are swept to the gas phase (avoiding the turbidity or color problems of spectrophotometric methods), and (3) the NO chemiluminescence detector selectively detects the emission from the NO + O3 reaction. The method is free of interferences, offers detection limits of low parts per billion of nitrite ion, and allows the analysis of up to 180 microliter-sized samples per hour, with little sample preparation and no chromatographic separation. Much smaller samples can be analyzed by this method than in previously reported batch analysis methods, which typically require 5 mL or more of sample and often need chromatographic separations as well.

  20. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    NASA Astrophysics Data System (ADS)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products entering the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness in the assembly process is its destructive character, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness during assembly on automatic rotor lines requires sampling plans that ensure a minimum size of control samples. Comparison of the values of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that the ACSP-1 can provide lower limit values for the average output defect level. Also, the average sample size when using the ACSP-1 plan is smaller than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous selective control, will make it possible to automate sampling control procedures and to ensure the required level of quality of assembled products while minimizing sample size.
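
    For the CSP-1 plan mentioned above, Dodge's classical formulas give the long-run average fraction inspected (AFI) and average outgoing quality (AOQ) from the sampling fraction f, the clearance number i, and the process fraction defective p. A sketch under those standard textbook assumptions (the parameter values are illustrative, not the paper's):

```python
def csp1(p, f, i):
    """Dodge's CSP-1 continuous sampling plan.
    p: process fraction defective
    f: sampling fraction during the sampling phase
    i: clearance number (consecutive good units to leave 100% inspection)"""
    q = 1.0 - p
    u = (1.0 - q**i) / (p * q**i)      # avg units in the 100%-inspection phase
    v = 1.0 / (f * p)                  # avg units in the sampling phase
    afi = (u + f * v) / (u + v)        # average fraction inspected
    aoq = p * (1.0 - f) * v / (u + v)  # average outgoing quality
    return afi, aoq

afi, aoq = csp1(p=0.02, f=0.1, i=50)
```

    AOQ is always below p because some defectives are caught, and AFI always exceeds f because of the 100%-inspection episodes; minimizing average sample size amounts to trading these two quantities off.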

  1. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. 
Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  2. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. 
Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  3. Effect of Hot Isostatic Pressing and Powder Feedstock on Porosity, Microstructure, and Mechanical Properties of Selective Laser Melted AlSi10Mg

    DOE PAGES

    Finfrock, Christopher B.; Exil, Andrea; Carroll, Jay D.; ...

    2018-06-06

    AlSi10Mg tensile bars were additively manufactured using the powder-bed selective laser melting process. Samples were subjected to stress relief annealing and hot isostatic pressing. Tensile samples built using fresh, stored, and reused powder feedstock were characterized for microstructure, porosity, and mechanical properties. Fresh powder exhibited the best mechanical properties and lowest porosity while stored and reused powder exhibited inferior mechanical properties and higher porosity. The microstructure of stress relieved samples was fine and exhibited (001) texture in the z-build direction. Microstructure for hot isostatic pressed samples was coarsened with fainter (001) texture. To investigate surface and interior defects, scanning electron microscopy, optical fractography, and laser scanning microscopy techniques were employed. Hot isostatic pressing eliminated internal pores and reduced the size of surface porosity associated with the selective laser melting process. Hot isostatic pressing tended to increase ductility at the expense of decreasing strength. Furthermore, scatter in ductility of hot isostatic pressed parts suggests that the presence of unclosed surface porosity facilitated fracture with crack propagation inward from the surface of the part.

  4. Effect of Hot Isostatic Pressing and Powder Feedstock on Porosity, Microstructure, and Mechanical Properties of Selective Laser Melted AlSi10Mg

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finfrock, Christopher B.; Exil, Andrea; Carroll, Jay D.

    AlSi10Mg tensile bars were additively manufactured using the powder-bed selective laser melting process. Samples were subjected to stress relief annealing and hot isostatic pressing. Tensile samples built using fresh, stored, and reused powder feedstock were characterized for microstructure, porosity, and mechanical properties. Fresh powder exhibited the best mechanical properties and lowest porosity while stored and reused powder exhibited inferior mechanical properties and higher porosity. The microstructure of stress relieved samples was fine and exhibited (001) texture in the z-build direction. Microstructure for hot isostatic pressed samples was coarsened with fainter (001) texture. To investigate surface and interior defects, scanning electron microscopy, optical fractography, and laser scanning microscopy techniques were employed. Hot isostatic pressing eliminated internal pores and reduced the size of surface porosity associated with the selective laser melting process. Hot isostatic pressing tended to increase ductility at the expense of decreasing strength. Furthermore, scatter in ductility of hot isostatic pressed parts suggests that the presence of unclosed surface porosity facilitated fracture with crack propagation inward from the surface of the part.

  5. Energetic benefits and adaptations in mammalian limbs: Scale effects and selective pressures.

    PubMed

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2015-06-01

    Differences in limb size and shape are fundamental to mammalian morphological diversity; however, their relevance to locomotor costs has long been subject to debate. In particular, it remains unknown if scale effects in whole limb morphology could partially underlie decreasing mass-specific locomotor costs with increasing limb length. Whole fore- and hindlimb inertial properties reflecting limb size and shape (moment of inertia (MOI), mass, mass distribution, and natural frequency) were regressed against limb length for 44 species of quadrupedal mammals. Limb mass, MOI, and center of mass position are negatively allometric, having a strong potential for lowering mass-specific locomotor costs in large terrestrial mammals. Negative allometry of limb MOI results in a 40% reduction in MOI relative to isometry's prediction for our largest sampled taxa. However, fitting regression residuals to adaptive diversification models reveals that codiversification of limb mass, limb length, and body mass likely results from selection for differing locomotor modes of running, climbing, digging, and swimming. The observed allometric scaling does not result from selection for energetically beneficial whole limb morphology with increasing size. Instead, our data suggest that it is a consequence of differing morphological adaptations and body size distributions among quadrupedal mammals, highlighting the role of differing limb functions in mammalian evolution. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
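
    Negative allometry of this kind is conventionally assessed by regressing log MOI on log limb length and comparing the fitted slope with the isometric expectation (MOI scales as mass times length squared, hence as length to the fifth power if mass scales as length cubed). A sketch on simulated data (the true slope of 4.6, the noise level, and the n of 44 species are invented to mimic negative allometry, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated log-log limb data with a sub-isometric slope (< 5).
log_len = rng.uniform(0, 2, size=44)                     # log10 limb length
log_moi = 4.6 * log_len + rng.normal(0, 0.2, size=44)    # log10 limb MOI

slope, intercept = np.polyfit(log_len, log_moi, 1)       # OLS on log-log axes
```

    A fitted slope reliably below 5 is the statistical signature of the MOI reduction in large taxa that the abstract describes.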

  6. Optimum allocation for a dual-frame telephone survey.

    PubMed

    Wolter, Kirk M; Tao, Xian; Montgomery, Robert; Smith, Philip J

    2015-12-01

    Careful design of a dual-frame random digit dial (RDD) telephone survey requires selecting from among many options that have varying impacts on cost, precision, and coverage in order to obtain the best possible implementation of the study goals. One such consideration is whether to screen cell-phone households in order to interview cell-phone only (CPO) households and exclude dual-user households, or to take all interviews obtained via the cell-phone sample. We present a framework in which to consider the tradeoffs between these two options and a method to select the optimal design. We derive and discuss the optimum allocation of sample size between the two sampling frames and explore the choice of optimum p, the mixing parameter for the dual-user domain. We illustrate our methods using the National Immunization Survey, sponsored by the Centers for Disease Control and Prevention.
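
    Optimum allocation between frames is a cost-adjusted version of Neyman allocation: each stratum's sample size is proportional to its population size times its standard deviation, divided by the square root of its per-interview cost. A minimal sketch with hypothetical landline and cell-phone strata (all numbers invented; this is the textbook formula, not the paper's full dual-frame derivation):

```python
import math

def optimum_allocation(n_total, strata):
    """Cost-adjusted Neyman allocation: n_h proportional to N_h*S_h/sqrt(c_h).
    strata: list of (N_h, S_h, c_h) = stratum size, std dev, unit cost."""
    weights = [N * S / math.sqrt(c) for (N, S, c) in strata]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# Hypothetical frames: (size, std dev of the outcome, cost per interview).
landline = (50_000, 0.40, 10.0)
cell = (80_000, 0.48, 25.0)   # costlier, but larger and more variable
alloc = optimum_allocation(1000, [landline, cell])
```

    Higher unit cost pushes sample away from a frame, while greater size and variability pull it back, which is exactly the tradeoff the dual-frame design must balance.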

  7. Additive genetic variation in the craniofacial skeleton of baboons (genus Papio) and its relationship to body and cranial size.

    PubMed

    Joganic, Jessica L; Willmore, Katherine E; Richtsmeier, Joan T; Weiss, Kenneth M; Mahaney, Michael C; Rogers, Jeffrey; Cheverud, James M

    2018-02-01

    Determining the genetic architecture of quantitative traits and genetic correlations among them is important for understanding morphological evolution patterns. We address two questions regarding papionin evolution: (1) what effect do body and cranial size, age, and sex have on phenotypic (V_P) and additive genetic (V_A) variation in baboon crania, and (2) how might additive genetic correlations between craniofacial traits and body mass affect morphological evolution? We use a large captive pedigreed baboon sample to estimate quantitative genetic parameters for craniofacial dimensions (EIDs). Our models include nested combinations of the covariates listed above. We also simulate the correlated response of a given EID due to selection on body mass alone. Covariates account for 1.2-91% of craniofacial V_P. EID V_A decreases across models as more covariates are included. The median genetic correlation estimate between each EID and body mass is 0.33. Analysis of the multivariate response to selection reveals that observed patterns of craniofacial variation in extant baboons cannot be attributed solely to correlated response to selection on body mass, particularly in males. Because a relatively large proportion of EID V_A is shared with body mass variation, different methods of correcting for allometry by statistically controlling for size can alter residual V_P patterns. This may conflate direct selection effects on craniofacial variation with those resulting from a correlated response to body mass selection. This shared genetic variation may partially explain how selection for increased body mass in two different papionin lineages produced remarkably similar craniofacial phenotypes. © 2017 Wiley Periodicals, Inc.

  8. Variability in size-selective mortality obscures the importance of larval traits to recruitment success in a temperate marine fish.

    PubMed

    Murphy, Hannah M; Warren-Myers, Fletcher W; Jenkins, Gregory P; Hamer, Paul A; Swearer, Stephen E

    2014-08-01

    In fishes, the growth-mortality hypothesis has received broad acceptance as a driver of recruitment variability. Recruitment is likely to be lower in years when the risk of starvation and predation in the larval stage is greater, leading to higher mortality. Juvenile snapper, Pagrus auratus (Sparidae), experience high recruitment variation in Port Phillip Bay, Australia. Using a 5-year (2005, 2007, 2008, 2010, 2011) data set of larval and juvenile snapper abundances and their daily growth histories, based on otolith microstructure, we found selective mortality acted on larval size at 5 days post-hatch in 4 low and average recruitment years. The highest recruitment year (2005) was characterised by no size-selective mortality. Larval growth of the initial larval population was related to recruitment, but larval growth of the juveniles was not. Selective mortality may have obscured the relationship between larval traits of the juveniles and recruitment as fast-growing and large larvae preferentially survived in lower recruitment years and fast growth was ubiquitous in high recruitment years. An index of daily mortality within and among 3 years (2007, 2008, 2010), where zooplankton were concurrently sampled with ichthyoplankton, was related to per capita availability of preferred larval prey, providing support for the match-mismatch hypothesis. In 2010, periods of low daily mortality resulted in no selective mortality. Thus both intra- and inter-annual variability in the magnitude and occurrence of selective mortality in species with complex life cycles can obscure relationships between larval traits and population replenishment, leading to underestimation of their importance in recruitment studies.

  9. Lithologic, natural-gamma, grain-size, and well-construction data for Wright-Patterson Air Force Base, Ohio

    USGS Publications Warehouse

    Dumouchelle, D.H.; De Roche, Jeffrey T.

    1991-01-01

    Wright-Patterson Air Force Base, in southwestern Ohio, overlies a buried-valley aquifer. The U.S. Geological Survey installed 35 observation wells at 13 sites on the base from fall 1988 through spring 1990. Fourteen of the wells were completed in bedrock; the remaining wells were completed in unconsolidated sediments. Split-spoon and bedrock cores were collected from all of the bedrock wells. Shelby-tube samples were collected from four wells. The wells were drilled by either the cable-tool or rotary method. Data presented in this report include lithologic and natural-gamma logs and, for selected sediment samples, grain-size distributions and permeability. Final well-construction details, such as the total depth of well, screened interval, and grouting details, also are presented.

  10. Gear and seasonal bias associated with abundance and size structure estimates for lentic freshwater fishes

    USGS Publications Warehouse

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. 
Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.

  11. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
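    One generation of a constant-size model with heavy-tailed offspring weights can be sketched as follows. This is a heavily simplified illustration of the general mechanism (Pareto draws normalized by their sum, then multinomial resampling); it omits the fitness-sorting selection step and the Poisson point process structure of the paper's actual model.

```python
import random

def pareto_reproduction_step(N, alpha, rng=None):
    """One forward-in-time generation of a constant-size population with
    Pareto(alpha) offspring weights (simplified sketch, not the paper's
    exact model; the selection step sorting out the N fittest is omitted).
    """
    rng = rng or random.Random(0)
    # Pareto(alpha) via inverse CDF: X = (1 - U)^(-1/alpha), support [1, inf)
    weights = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(N)]
    total = sum(weights)
    freqs = [w / total for w in weights]
    # multinomial resampling: each next-generation slot picks a parent
    # with probability proportional to the parent's normalized weight
    parents = rng.choices(range(N), weights=freqs, k=N)
    return freqs, parents
```

    With small α a few normalized weights dominate, producing the multiple-merger genealogies the abstract classifies; with large α the weights even out and the Kingman regime is recovered.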

  12. Synthesis of mesoscale, crumpled, reduced graphene oxide roses by water-in-oil emulsion approach

    NASA Astrophysics Data System (ADS)

    Sharma, Shruti; Pham, Viet H.; Boscoboinik, Jorge A.; Camino, Fernando; Dickerson, James H.; Tannenbaum, Rina

    2018-05-01

    Mesoscale crumpled graphene oxide roses (GO roses) were synthesized by using colloidal graphene oxide (GO) variants as precursors for a hybrid emulsification-rapid evaporation approach. This process produced rose-like, spherical, reduced mesostructures of colloidal GO sheets, with corrugated surfaces and particle sizes tunable in the range of ∼800 nm to 15 μm. Excellent reproducibility of the particle size distribution is shown for each selected homogenizer rotor speed among different sample batches. The morphology and chemical structure of the produced GO roses were investigated using electron microscopy and spectroscopy techniques. The proposed synthesis route provides control over particle size, morphology and chemical properties of the synthesized GO roses.

  13. Performance analysis of the toroidal field ITER production conductors

    NASA Astrophysics Data System (ADS)

    Breschi, M.; Macioce, D.; Devred, A.

    2017-05-01

    The production of the superconducting cables for the toroidal field (TF) magnets of the ITER machine has recently been completed at the manufacturing companies selected during the previous qualification phase. The quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers include performance tests of several conductor samples from selected unit lengths. The short full-size samples (4 m long) were subjected to DC and AC tests in the SULTAN facility at CRPP in Villigen, Switzerland. In a previous work the results of the tests of the conductor performance qualification samples were reported. This work reports the analyses of the results of the tests of the production conductor samples. The results reported here concern the values of current sharing temperature, critical current, effective strain and n-value from the DC tests and the energy dissipated per cycle from the AC loss tests. A detailed comparison is also presented between the performance of the conductors and that of their constituting strands.

  14. Ion concentration in micro and nanoscale electrospray emitters.

    PubMed

    Yuill, Elizabeth M; Baker, Lane A

    2018-06-01

    Solution-phase ion transport during electrospray has been characterized for nanopipettes, or glass capillaries pulled to nanoscale tip dimensions, and micron-sized electrospray ionization emitters. Direct visualization of charged fluorophores during the electrospray process is used to evaluate impacts of emitter size, ionic strength, analyte size, and pressure-driven flow on heterogeneous ion transport during electrospray. Mass spectrometric measurements of positively- and negatively-charged proteins were taken for micron-sized and nanopipette emitters under low ionic strength conditions to further illustrate a discrepancy in solution-driven transport of charged analytes. A fundamental understanding of analyte electromigration during electrospray, which is not always considered, is expected to provide control over selective analyte depletion and enrichment, and can be harnessed for sample cleanup. Graphical abstract Fluorescence micrographs of ion migration in nanoscale pipettes while solution is electrosprayed.

  15. Point of data saturation was assessed using resampling methods in a survey with open-ended questions.

    PubMed

    Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe

    2016-12-01

    To describe methods to determine sample sizes in surveys using open-ended questions and to assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% themes (IQR = 91-93%) identified in the original study. In our study, data saturation was most certainly reached for samples >150 participants. Our method may be used to determine when to continue the study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
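    The resampling idea behind the saturation assessment can be sketched as a Monte Carlo estimate of the probability that the next included subject contributes a new theme. The corpus below is synthetic (not the study's data), and the occurrence model is an assumption made purely for illustration.

```python
import random

def prob_new_theme(responses, n, trials=2000, rng=None):
    """Monte Carlo estimate of the probability that the (n+1)-th randomly
    included subject contributes at least one theme unseen among the
    first n subjects.  `responses` is a list of per-subject theme sets."""
    rng = rng or random.Random(1)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(responses, n + 1)
        seen = set().union(*sample[:n])   # themes covered by first n subjects
        if sample[n] - seen:              # next subject adds something new
            hits += 1
    return hits / trials

# Synthetic corpus: theme t occurs in a subject's answer with probability
# 0.5 / (t + 1), so low-numbered themes are common and high-numbered rare.
gen = random.Random(42)
corpus = [{t for t in range(30) if gen.random() < 0.5 / (t + 1)}
          for _ in range(300)]
```

    As in the study, the probability of finding something new declines as more subjects are included, which is what makes a futility-style stopping decision possible.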

  16. Treated and untreated rock dust: Quartz content and physical characterization.

    PubMed

    Soo, Jhy-Charm; Lee, Taekhee; Chisholm, William P; Farcas, Daniel; Schwegler-Berry, Diane; Harper, Martin

    2016-11-01

    Rock dusting is used to prevent secondary explosions in coal mines, but inhalation of rock dusts can be hazardous if the crystalline silica (e.g., quartz) content in the respirable fraction is high. The objective of this study is to assess the quartz content and physical characteristics of four selected rock dusts, consisting of limestone or marble in both treated (such as treatment with stearic acid or stearates) and untreated forms. Four selected rock dusts (an untreated and treated limestone and an untreated and treated marble) were aerosolized in an aerosol chamber. Respirable size-selective sampling was conducted along with particle size-segregated sampling using a Micro-Orifice Uniform Deposit Impactor. Fourier Transform Infrared spectroscopy and scanning electron microscopy with energy-dispersive X-ray (SEM-EDX) analyses were used to determine quartz mass and particle morphology, respectively. Quartz percentage in the respirable dust fraction of untreated and treated forms of the limestone dust was significantly higher than in bulk samples, but since the bulk percentage was low the enrichment factor would not have resulted in any major change to conclusions regarding the contribution of respirable rock dust to the overall airborne quartz concentration. The quartz percentage in the marble dust (untreated and treated) was very low and the respirable fractions showed no enrichment. The spectra from SEM-EDX analysis for all materials were predominantly from calcium carbonate, clay, and gypsum particles. No free quartz particles were observed. The four rock dusts used in this study are representative of those presented for use in rock dusting, but the conclusions may not be applicable to all available materials.

  17. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. 
The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
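    The images-versus-points trade-off can be explored with a toy two-stage simulation. The model below (Gaussian between-image variation in local cover, binomial point scoring within images) is an assumption for illustration, not the simulation design used in the study.

```python
import random

def estimate_cover(true_cover, n_images, n_points,
                   between_image_sd=0.1, rng=None):
    """Toy transect-scoring simulation: each image gets its own local
    cover drawn around the transect mean (truncated to [0, 1]), and
    n_points random points per image are scored as hits or misses.
    Returns the transect-level percent-cover estimate."""
    rng = rng or random.Random(0)
    image_means = []
    for _ in range(n_images):
        local = min(1.0, max(0.0, rng.gauss(true_cover, between_image_sd)))
        hits = sum(rng.random() < local for _ in range(n_points))
        image_means.append(hits / n_points)
    return sum(image_means) / n_images
```

    Replicating calls like `estimate_cover(0.3, 50, 25)` over many simulated transects shows the estimator's variance shrinking faster as `n_images` grows than as `n_points` grows whenever between-image variation dominates, mirroring the trade-off the abstract reports.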

  18. FAST: Size-Selective, Clog-Free Isolation of Rare Cancer Cells from Whole Blood at a Liquid-Liquid Interface.

    PubMed

    Kim, Tae-Hyeong; Lim, Minji; Park, Juhee; Oh, Jung Min; Kim, Hyeongeun; Jeong, Hyunjin; Lee, Sun Ju; Park, Hee Chul; Jung, Sungmok; Kim, Byung Chul; Lee, Kyusang; Kim, Mi-Hyun; Park, Do Youn; Kim, Gwang Ha; Cho, Yoon-Kyoung

    2017-01-17

    Circulating tumor cells (CTCs) have great potential to provide minimally invasive ways for the early detection of cancer metastasis and for the response monitoring of various cancer treatments. Despite the clinical importance and progress of CTC-based cancer diagnostics, most of the current methods of enriching CTCs are difficult to implement in general hospital settings due to complex and time-consuming protocols. Among existing technologies, size-based isolation methods provide antibody-independent, relatively simple, and high throughput protocols. However, the clogging issues and lower than desired recovery rates and purity are the key challenges. In this work, inspired by antifouling membranes with liquid-filled pores in nature, clog-free, highly sensitive (95.9 ± 3.1% recovery rate), selective (>2.5 log depletion of white blood cells), rapid (>3 mL/min), and label-free isolation of viable CTCs from whole blood without prior sample treatment is achieved using a stand-alone lab-on-a-disc system equipped with fluid-assisted separation technology (FAST). Numerical simulation and experiments show that this method provides uniform, clog-free, ultrafast cell enrichment with pressure drops much less than in conventional size-based filtration, at 1 kPa. We demonstrate the clinical utility of the point-of-care detection of CTCs with samples taken from 142 patients suffering from breast, stomach, or lung cancer.

  19. Resolving Contradictions of Predictive Validity of University Matriculation Examinations in Nigeria: A Meta-Analysis Approach

    ERIC Educational Resources Information Center

    Modupe, Ale Veronica; Babafemi, Kolawole Emmanuel

    2015-01-01

    The study examined the various means of solving contradictions of predictive studies of University Matriculation Examination in Nigeria. The study used a sample size of 35 studies on predictive validity of University Matriculation Examination in Nigeria, which was purposively selected to have met the criteria for meta-analysis. Two null hypotheses…

  20. Exploring In-Service Teachers' Self-Efficacy in the Kindergarten Classrooms in Ghana

    ERIC Educational Resources Information Center

    Boateng, Philip; Sekyere, Frank Owusu

    2018-01-01

    The study explored in-service teachers' efficacy beliefs in pupil engagement. The sample size was 299 kindergarten teachers selected from both public and private kindergarten schools in the Kumasi metropolis of Ghana. The study adopted and used pupil engagement subscale of the Ohio State Teacher Efficacy Scale (OSTES) developed by Tschannen-Moran…

  1. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  2. Emotional Issues and Peer Relations in Gifted Elementary Students: Regression Analysis of National Data

    ERIC Educational Resources Information Center

    Wiley, Kristofor R.

    2013-01-01

    Many of the social and emotional needs that have historically been associated with gifted students have been questioned on the basis of recent empirical evidence. Research on the topic, however, is often limited by sample size, selection bias, or definition. This study addressed these limitations by applying linear regression methodology to data…

  3. Alternative Views of the Solar System among Turkish Students

    ERIC Educational Resources Information Center

    Cin, Mustafa

    2007-01-01

    This study examines middle-school students' alternative frameworks of the earth's shape, its relative size and its distance from the sun and the moon. The sample was selected in the province of Giresun in Turkey. Sixty-five 14-year-old students participated in the research. A structured interview consisting of open-ended questions was employed to…

  4. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index, the probability that two randomly selected test scores are statistically different (PDTS), is defined. After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…
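    A PDTS-like index can be approximated by simulation under classical test theory. The operationalization below (reliable-difference threshold of 1.96 · SEM · √2, normal true scores and errors, unit observed-score SD) is an assumed reading of the index, not Muller's exact formula.

```python
import math
import random

def pdts(reliability, n_pairs=10_000, rng=None):
    """Monte Carlo sketch of a PDTS-like index: the probability that two
    randomly drawn observed scores differ by more than the critical value
    for a reliable difference, 1.96 * SEM * sqrt(2), where
    SEM = SD * sqrt(1 - reliability) and the observed-score SD is 1."""
    rng = rng or random.Random(3)
    sem = math.sqrt(1.0 - reliability)
    threshold = 1.96 * sem * math.sqrt(2.0)
    hits = 0
    for _ in range(n_pairs):
        # observed score = true score + error; variances sum to 1
        a = rng.gauss(0, math.sqrt(reliability)) + rng.gauss(0, sem)
        b = rng.gauss(0, math.sqrt(reliability)) + rng.gauss(0, sem)
        if abs(a - b) > threshold:
            hits += 1
    return hits / n_pairs
```

    Higher reliability shrinks the SEM, and with it the reliable-difference threshold, so a larger share of randomly drawn score pairs count as statistically different.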

  5. A model for estimating understory vegetation response to fertilization and precipitation in loblolly pine plantations

    Treesearch

    Curtis L. VanderSchaaf; Ryan W. McKnight; Thomas R. Fox; H. Lee Allen

    2010-01-01

    A model form is presented, where the model contains regressors selected for inclusion based on biological rationale, to predict how fertilization, precipitation amounts, and overstory stand density affect understory vegetation biomass. Due to time, economic, and logistic constraints, datasets of large sample sizes generally do not exist for understory vegetation. Thus...

  6. Reform-Based-Instructional Method and Learning Styles on Students' Achievement and Retention in Mathematics: Administrative Implications

    ERIC Educational Resources Information Center

    Modebelu, M. N.; Ogbonna, C. C.

    2014-01-01

    This study aimed at determining the effect of reform-based-instructional method learning styles on students' achievement and retention in mathematics. A sample size of 119 students was randomly selected. The quasiexperimental design comprising pre-test, post-test, and randomized control group were employed. The Collin Rose learning styles…

  7. Shape variation in the human pelvis and limb skeleton: Implications for obstetric adaptation.

    PubMed

    Kurki, Helen K; Decrausaz, Sarah-Louise

    2016-04-01

    Under the obstetrical dilemma (OD) hypothesis, selection acts on the human female pelvis to ensure a sufficiently sized obstetric canal for birthing a large-brained, broad shouldered neonate, while bipedal locomotion selects for a narrower and smaller pelvis. Despite this female-specific stabilizing selection, variability of linear dimensions of the pelvic canal and overall size are not reduced in females, suggesting shape may instead be variable among females of a population. Female canal shape has been shown to vary among populations, while male canal shape does not. Within this context, we examine within-population canal shape variation in comparison with that of noncanal aspects of the pelvis and the limbs. Nine skeletal samples (total female n = 101, male n = 117) representing diverse body sizes and shapes were included. Principal components analysis was applied to size-adjusted variables of each skeletal region. A multivariate variance was calculated using the weighted PC scores for all components in each model and F-ratios used to assess differences in within-population variances between sexes and skeletal regions. Within both sexes, multivariate canal shape variance is significantly greater than noncanal pelvis and limb variances, while limb variance is greater than noncanal pelvis variance in some populations. Multivariate shape variation is not consistently different between the sexes in any of the skeletal regions. Diverse selective pressures, including obstetrics, locomotion, load carrying, and others may act on canal shape, as well as genetic drift and plasticity, thus increasing variation in morphospace while protecting obstetric sufficiency. © 2015 Wiley Periodicals, Inc.

  8. Extracting samples of high diversity from thematic collections of large gene banks using a genetic-distance based approach

    PubMed Central

    2010-01-01

    Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. 
The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
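    Distance-based core selection of this general kind can be sketched with a greedy maximin rule: repeatedly add the accession farthest (in genetic distance) from everything already selected. This is a generic sketch in the spirit of the M strategy, not the paper's exact algorithm; the matrix and function name are illustrative.

```python
def greedy_core(distance, k, start=0):
    """Greedy maximin selection of a core sub-collection of size k:
    starting from accession `start`, repeatedly add the accession whose
    minimum distance to the already selected set is largest.
    `distance` is a full symmetric pairwise distance matrix."""
    n = len(distance)
    selected = [start]
    while len(selected) < k:
        best, best_score = None, -1.0
        for i in range(n):
            if i in selected:
                continue
            # how far is candidate i from its nearest selected accession?
            score = min(distance[i][j] for j in selected)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```

    In practice the candidate pool would first be restricted to accessions diverse for the trait of interest (drought tolerance in the example above), and the distances estimated from molecular marker data.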

  9. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

    Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated-measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. 
If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percent of missing week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with a sample-size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring integrity of the study.
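    The core of nuisance-parameter-based re-estimation is a standard sample size recalculation once the outcome SD is re-estimated at the interim look. The sketch below uses the textbook two-arm normal approximation and omits the group sequential spending machinery entirely; it is not the PRIMO analysis plan.

```python
import math

def reestimated_n(sigma_hat, delta, z_alpha=1.959964, z_beta=1.281552):
    """Per-arm sample size to detect treatment effect `delta` given the
    (re-estimated) outcome SD `sigma_hat`, using the two-arm normal
    approximation n = 2 * ((z_alpha + z_beta) * sigma / delta)^2.
    Defaults correspond to two-sided alpha = 0.05 and 90% power."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma_hat / delta) ** 2)
```

    For example, if a planning SD of 10 is revised upward to 12 at the interim look (with a target effect of 5), the per-arm requirement rises from `reestimated_n(10, 5)` = 85 to `reestimated_n(12, 5)` = 122, which is exactly the kind of adjustment a maximum-information design accommodates.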

  10. Mercury in fish and macroinvertebrates from New York's streams and rivers: A compendium of data sources

    USGS Publications Warehouse

    Riva-Murray, Karen; Burns, Douglas A.

    2016-01-01

    The U.S. Geological Survey has compiled a list of existing data sets, from selected sources, containing mercury (Hg) concentration data in fish and macroinvertebrate samples that were collected from flowing waters of New York State from 1970 through 2014. Data sets selected for inclusion in this report were limited to those that contain fish and (or) macroinvertebrate data that were collected across broad areas, cover relatively long time periods, and (or) were collected as part of a broader-scale (e.g. national) study or program. In addition, all data sets listed were collected, processed, and analyzed with documented methods, and contain critical sample information (e.g. fish species, fish size, Hg species) that is needed to analyze and interpret the reported Hg concentration data. Fourteen data sets, all from state or federal agencies, are listed in this report, along with selected descriptive information regarding each data source and data set contents. Together, these 14 data sets contain Hg and related data for more than 7,000 biological samples collected from more than 700 unique stream and river locations between 1970 and 2014.

  11. Do Evidence-Based Youth Psychotherapies Outperform Usual Clinical Care? A Multilevel Meta-Analysis

    PubMed Central

    Weisz, John R.; Kuppens, Sofie; Eckshtain, Dikla; Ugueto, Ana M.; Hawley, Kristin M.; Jensen-Doss, Amanda

    2013-01-01

    Context Research across four decades has produced numerous empirically tested evidence-based psychotherapies (EBPs) for youth psychopathology, developed to improve upon usual clinical interventions. Advocates argue that these should replace usual care; but do the EBPs produce better outcomes than usual care? Objective This question was addressed in a meta-analysis of 52 randomized trials directly comparing EBPs to usual care. Analyses assessed the overall effect of EBPs vs. usual care, and candidate moderators; multilevel analysis was used to address the dependency among effect sizes that is common but typically unaddressed in psychotherapy syntheses. Data Sources The PubMed, PsycINFO, and Dissertation Abstracts International databases were searched for studies from January 1, 1960 – December 31, 2010. Study Selection 507 randomized youth psychotherapy trials were identified. Of these, the 52 studies that compared EBPs to usual care were included in the meta-analysis. Data Extraction Sixteen variables (participant, treatment, and study characteristics) were extracted from each study, and effect sizes were calculated for all EBP versus usual care comparisons. Data Synthesis EBPs outperformed usual care. Mean effect size was 0.29; the probability was 58% that a randomly selected youth receiving an EBP would be better off after treatment than a randomly selected youth receiving usual care. Three variables moderated treatment benefit: Effect sizes decreased for studies conducted outside North America, for studies in which all participants were impaired enough to qualify for diagnoses, and for outcomes reported by people other than the youths and parents in therapy. For certain key groups (e.g., studies using clinically referred samples and diagnosed samples), significant EBP effects were not demonstrated. Conclusions EBPs outperformed usual care, but the EBP advantage was modest and moderated by youth, location, and assessment characteristics. 
There is room for improvement in EBPs, both in the magnitude and range of their benefit, relative to usual care. PMID:23754332
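
The 58% figure is the common-language effect size implied by a standardized mean difference of 0.29 under a normal model with equal variances; a sketch of the conversion:

```python
import math

def probability_of_superiority(d):
    """Common-language effect size: the probability that a randomly
    selected treated case outscores a randomly selected control,
    assuming normally distributed outcomes with equal variances.
    P(X_t > X_c) = Phi(d / sqrt(2))."""
    z = d / math.sqrt(2.0)
    # Phi(z) expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The reported mean effect size of 0.29 corresponds to roughly 58%
p = probability_of_superiority(0.29)
```

A Cohen's d of 0 maps to exactly 50%, i.e., no advantage over usual care.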

  12. Measuring selected PPCPs in wastewater to estimate the population in different cities in China.

    PubMed

    Gao, Jianfa; O'Brien, Jake; Du, Peng; Li, Xiqing; Ort, Christoph; Mueller, Jochen F; Thai, Phong K

    2016-10-15

    Sampling and analysis of wastewater from municipal wastewater treatment plants (WWTPs) has become a useful tool for understanding exposure to chemicals. Both wastewater-based studies and catchment management and planning require information on the catchment population at the time of monitoring. Recently, a model was developed and calibrated using selected pharmaceutical and personal care products (PPCPs) measured in influent wastewater to estimate the population in different catchments in Australia. The present study aimed to evaluate the feasibility of applying this population estimation approach in China. Twenty-four hour composite influent samples were collected from 31 WWTPs in 17 cities, with catchment sizes from 200,000 to 3,450,000 people, representing all seven regions of China. The samples were analyzed for 19 PPCPs using liquid chromatography coupled to tandem mass spectrometry in direct injection mode. Eight chemicals were detected in more than 50% of the samples. Significant positive correlations were found between individual PPCP mass loads and population estimates provided by WWTP operators. Using the PPCP mass load modeling approach calibrated with WWTP operator data, we estimated the population size of each catchment with good agreement with WWTP operator values (between 50-200% for all sites and 75-125% for 23 of the 31 sites). Overall, despite much lower detection frequencies and relatively high heterogeneity in PPCP consumption across China, the model provided a good estimate of the population contributing to a given wastewater sample. Wastewater analysis could also provide an objective measure of PPCP consumption in China. Copyright © 2016 Elsevier B.V. All rights reserved.
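
In essence, the population model is a per-capita mass-load relationship calibrated across catchments; a minimal sketch with hypothetical numbers (the study's actual model combines multiple PPCP markers and is more elaborate):

```python
def calibrate_per_capita_load(mass_loads, populations):
    """Least-squares slope through the origin across calibration
    catchments: average mass of a marker arriving at the WWTP per
    person per day."""
    num = sum(m * p for m, p in zip(mass_loads, populations))
    den = sum(p * p for p in populations)
    return num / den

def estimate_population(mass_load, per_capita_load):
    """Invert the linear model for a new catchment."""
    return mass_load / per_capita_load

# Hypothetical calibration data: daily marker loads (g/day) against
# operator-reported populations for three catchments
loads = [120.0, 260.0, 610.0]
pops = [200_000, 450_000, 1_000_000]
rate = calibrate_per_capita_load(loads, pops)
```

The per-capita rate is the nuisance quantity that must transfer between catchments; heterogeneous consumption, as noted for China, degrades that transfer.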

  13. Allometry and Ecology of the Bilaterian Gut Microbiome

    PubMed Central

    Sherrill-Mix, Scott; McCormick, Kevin; Lauder, Abigail; Bailey, Aubrey; Zimmerman, Laurie; Li, Yingying; Django, Jean-Bosco N.; Bertolani, Paco; Colin, Christelle; Hart, John A.; Hart, Terese B.; Georgiev, Alexander V.; Sanz, Crickette M.; Morgan, David B.; Atencia, Rebeca; Cox, Debby; Muller, Martin N.; Sommer, Volker; Piel, Alexander K.; Stewart, Fiona A.; Speede, Sheri; Roman, Joe; Wu, Gary; Taylor, Josh; Bohm, Rudolf; Rose, Heather M.; Carlson, John; Mjungu, Deus; Schmidt, Paul; Gaughan, Celeste; Bushman, Joyslin I.; Schmidt, Ella; Bittinger, Kyle; Collman, Ronald G.; Hahn, Beatrice H.

    2018-01-01

    ABSTRACT Classical ecology provides principles for construction and function of biological communities, but to what extent these apply to the animal-associated microbiota is just beginning to be assessed. Here, we investigated the influence of several well-known ecological principles on animal-associated microbiota by characterizing gut microbial specimens from bilaterally symmetrical animals (Bilateria) ranging from flies to whales. A rigorously vetted sample set containing 265 specimens from 64 species was assembled. Bacterial lineages were characterized by 16S rRNA gene sequencing. Previously published samples were also compared, allowing analysis of over 1,098 samples in total. A restricted number of bacterial phyla was found to account for the great majority of gut colonists. Gut microbial composition was associated with host phylogeny and diet. We identified numerous gut bacterial 16S rRNA gene sequences that diverged deeply from previously studied taxa, identifying opportunities to discover new bacterial types. The number of bacterial lineages per gut sample was positively associated with animal mass, paralleling known species-area relationships from island biogeography and implicating body size as a determinant of community stability and niche complexity. Samples from larger animals harbored greater numbers of anaerobic communities, specifying a mechanism for generating more-complex microbial environments. Predictions for species/abundance relationships from models of neutral colonization did not match the data set, pointing to alternative mechanisms such as selection of specific colonists by environmental niche. Taken together, the data suggest that niche complexity increases with gut size and that niche selection forces dominate gut community construction. PMID:29588401

  14. Novel hybrid cryo‐radial method: an emerging alternative to CT‐guided biopsy in suspected lung cancer. A prospective case series and description of technique

    PubMed Central

    Yap, Elaine

    2017-01-01

    In diagnosing peripheral pulmonary lesions (PPL), radial endobronchial ultrasound (R‐EBUS) is emerging as a safer method in comparison to CT‐guided biopsy. Despite the better safety profile, the yield of R‐EBUS remains lower (73%) than that of CT‐guided biopsy (90%) due to the smaller size of samples. We adopted a hybrid method by adding cryobiopsy via the R‐EBUS Guide Sheath (GS) to produce larger, non‐crushed samples to improve diagnostic capability and enhance molecular testing. We report six prospective patients who underwent this procedure in our institution. R‐EBUS samples were obtained via conventional sampling methods (needle aspiration, forceps biopsy, and cytology brush), followed by a cryobiopsy. An endobronchial blocker was placed near the planned area of biopsy in advance and inflated post‐biopsy to minimize the risk of bleeding in all patients. A chest X‐ray was performed 1 h post‐procedure. All the PPLs were visualized with R‐EBUS. The mean diameter of cryobiopsy samples was twice that of forceps biopsy samples. In four patients, cryobiopsy samples were superior in size and in the number of malignant cells per high power field, and were the preferred samples selected for mutation analysis and molecular testing. There was no pneumothorax or significant bleeding to report. Cryobiopsy samples were consistently larger and were the preferred samples for molecular testing, with an increase in diagnostic yield and a reduction in the need for repeat procedures, without compromising the favourable safety profile of R‐EBUS. Using an endobronchial blocker improves the safety of this procedure. PMID:29321931

  15. Evaluation of Bias-Variance Trade-Off for Commonly Used Post-Summarizing Normalization Procedures in Large-Scale Gene Expression Studies

    PubMed Central

    Qiu, Xing; Hu, Rui; Wu, Zhixin

    2014-01-01

    Normalization procedures are widely used in high-throughput genomic data analyses to remove various sources of technological noise and variation. They are known to have a profound impact on subsequent gene differential expression analysis. Although there has been some research evaluating different normalization procedures, few attempts have been made to systematically evaluate the gene detection performance of normalization procedures from the bias-variance trade-off point of view, especially in the presence of strong gene differentiation effects and large sample sizes. In this paper, we conduct a thorough study evaluating the effects of normalization procedures combined with several commonly used statistical tests and multiple testing procedures (MTPs) under different configurations of effect size and sample size. We conduct theoretical evaluation based on a random effect model, as well as simulation and biological data analyses, to verify the results. Based on our findings, we provide some practical guidance for selecting a suitable normalization procedure under different scenarios. PMID:24941114
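
As one concrete example of a commonly used post-summarizing procedure (the abstract does not list the specific procedures evaluated), quantile normalization forces every sample to share a common reference distribution:

```python
import numpy as np

def quantile_normalize(x):
    """Quantile normalization: map each sample (column) onto a common
    reference distribution, namely the mean of the sorted columns.
    Rows are genes, columns are samples; ties are handled naively."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort(axis=0).argsort(axis=0)    # rank of each gene within its sample
    reference = np.sort(x, axis=0).mean(axis=1)  # mean distribution across samples
    return reference[ranks]
```

After the transform, every column has an identical empirical distribution, which removes between-sample scale differences but can bias strongly differentiated genes, which is exactly the bias-variance tension the study examines.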

  16. Determination of hydrogen abundance in selected lunar soils

    NASA Technical Reports Server (NTRS)

    Bustin, Roberta

    1987-01-01

    Hydrogen was implanted in lunar soil through solar wind activity. In order to determine the feasibility of utilizing this solar wind hydrogen, it is necessary to know not only hydrogen abundances in bulk soils from a variety of locations but also the distribution of hydrogen within a given soil. Hydrogen distribution in bulk soils, grain size separates, mineral types, and core samples was investigated. Hydrogen was found in all samples studied. The amount varied considerably, depending on soil maturity, mineral types present, grain size distribution, and depth. Hydrogen implantation is definitely a surface phenomenon. However, as constructional particles are formed, previously exposed surfaces become embedded within particles, causing an enrichment of hydrogen in these species. In view of possibly extracting the hydrogen for use on the lunar surface, it is encouraging to know that hydrogen is present to a considerable depth and not only in the upper few millimeters. Based on these preliminary studies, extraction of solar wind hydrogen from lunar soil appears feasible, particularly if some kind of grain size separation is possible.

  17. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  18. Synthesis, characterization and application of ion imprinted polymeric nanobeads for highly selective preconcentration and spectrophotometric determination of Ni2 + ion in water samples

    NASA Astrophysics Data System (ADS)

    Rajabi, Hamid Reza; Razmpour, Saham

    2016-01-01

    Here, the researchers report the synthesis of ion imprinted polymeric (IIP) nanoparticles using a thermal polymerization strategy and their use for the separation of Ni2+ ion from water samples. The prepared Ni-IIP was characterized by colorimetry, FT-IR spectroscopy, and scanning electron microscopy. The particle size of the prepared IIP was found to be 50-70 nm in diameter, with highly selective binding capability for Ni2+ ion and reasonable adsorption and desorption behavior. After preconcentration, bound ions can be eluted with an aqueous solution of hydrochloric acid; after complexation with dimethylglyoxime, these ions can be quantified by UV-Vis absorption spectrophotometry. The effects of various parameters on the extraction efficiency, including the pH of the sample solution, adsorption and leaching times, initial sample volume, and the concentration and volume of the eluent, were investigated. In the selectivity study, imprinting was found to increase the affinity of the prepared IIP toward Ni2+ ion over other ions such as Na+, K+, Ag+, Co2+, Cu2+, Cd2+, Hg2+, Pb2+, Zn2+, Mn2+, Mg2+, Cr3+, and Fe3+. The prepared IIP can be regenerated and reused at least eight times without any significant decrease in binding affinity. The prepared IIP is considered a promising and selective sorbent for solid-phase extraction and preconcentration of Ni2+ ion from different water samples.

  19. Suspended sediments from upstream tributaries as the source of downstream river sites

    NASA Astrophysics Data System (ADS)

    Haddadchi, Arman; Olley, Jon

    2014-05-01

    Understanding the efficiency with which sediment eroded from different sources is transported to the catchment outlet is a key knowledge gap that is critical to our ability to accurately target and prioritise management actions to reduce sediment delivery. Sediment fingerprinting has proven to be an efficient approach to determining the sources of sediment. This study examines suspended sediment sources in the Emu Creek catchment, south-eastern Queensland, Australia. In addition to collecting suspended sediments from stream sites downstream of tributary confluences and at the catchment outlet, time-integrated suspended sediment samples from upper tributaries were used as the sediment sources, instead of hillslope and channel bank samples. In total, 35 time-integrated samplers were used to compute the contribution of suspended sediments from upstream waterways to the downstream sediment sites. Three size fractions (fine sand, 63-210 μm; silt, 10-63 μm; and fine silt and clay, <10 μm) were used to assess the effect of particle size on the contribution of upstream sediments as sources downstream of river confluences. Samples were then analysed by ICP-MS and ICP-OES to obtain 41 sediment fingerprint properties. According to the results of a Student's t-distribution mixing model, small creeks in the middle and lower parts of the catchment were the major sources across size fractions, especially in the silt (10-63 μm) samples. Gowrie Creek, which drains the southern upstream part of the catchment, was a major contributor at the catchment outlet in the finest size fraction (<10 μm). The large differences between the contributions of suspended sediments from upper tributaries across size fractions necessitate the selection of an appropriate size fraction for sediment tracing in the catchment, and also indicate a major effect of particle size on the movement and deposition of sediments.

  20. EPA Region 1 - Valley Depth in Meters

    EPA Pesticide Factsheets

    Raster of the depth in meters of EPA-delimited valleys in Region 1. Valleys (areas that are lower than their neighbors) were extracted from a Digital Elevation Model (USGS, 30m) by finding the local average elevation, subtracting the actual elevation from the average, and selecting areas where the actual elevation was below the average. The landscape was sampled at seven scales (circles of 1, 2, 4, 7, 11, 16, and 22 km radius) to take into account the diversity of valley shapes and sizes. Areas selected in at least four scales were designated as valleys.
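
The multi-scale procedure described (local average elevation minus actual elevation, with a vote across scales) can be sketched as follows; square moving windows stand in here for the circular neighborhoods used in the actual product:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def valley_mask(dem, radii_px, min_votes=4):
    """Flag cells lying below the local average elevation at several
    scales; cells selected at >= min_votes scales are valleys.
    Returns the boolean mask and the maximum valley depth per cell."""
    votes = np.zeros(dem.shape, dtype=int)
    depth = np.zeros(dem.shape)
    for r in radii_px:
        # focal (moving-window) mean elevation at this scale
        local_mean = uniform_filter(dem.astype(float), size=2 * r + 1)
        below = local_mean - dem          # positive where below the average
        votes += below > 0
        depth = np.maximum(depth, below)
    mask = votes >= min_votes
    return mask, np.where(mask, depth, 0.0)
```

A cell in a pit is below its neighborhood mean at every scale and collects a vote per scale; flat terrain collects none.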

  1. A similarity based learning framework for interim analysis of outcome prediction of acupuncture for neck pain.

    PubMed

    Zhang, Gang; Liang, Zhaohui; Yin, Jian; Fu, Wenbin; Li, Guo-Zheng

    2013-01-01

    Chronic neck pain is a common disorder in modern society. Acupuncture has long been administered as an alternative therapy for chronic pain, and its effectiveness is supported by recent clinical evidence. However, potential differences in effectiveness across syndrome types remain in question because of the limits of sample size and statistical methods. We applied machine learning methods in an attempt to solve this problem. Through multi-objective sorting of subjective measurements, outstanding samples are selected to form the base of our kernel-oriented model. By calculating similarities between a new sample and the base samples, we make full use of the information contained in the known samples, which is especially effective in the case of a small sample set. To tackle the parameter-selection problem in similarity learning, we propose an ensemble of learners with slightly different parameter settings to obtain a stronger learner. Experimental results on a real data set show that, compared to some well-known previous methods, the proposed algorithm is capable of discovering the underlying differences among syndrome types and is feasible for predicting the effective tendency in large-sample clinical trials.
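
The similarity-based prediction described, weighting the known outcomes of base samples by their kernel similarity to a new case, can be sketched as follows; the RBF kernel and its width are assumptions for illustration, not necessarily the paper's exact choice:

```python
import math

def kernel_predict(x, base_samples, base_outcomes, gamma=1.0):
    """Similarity-weighted prediction: each base sample's known outcome
    is weighted by an RBF similarity to the new case x. Leaning on a
    small set of well-chosen base samples suits small data sets."""
    weights = [math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, s)))
               for s in base_samples]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, base_outcomes)) / total
```

An ensemble version, as proposed in the abstract, would average kernel_predict over several values of gamma rather than committing to one.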

  2. Structural differences in reciprocal translocations. Potential for a model of risk in Rcp.

    PubMed

    Daniel, A

    1979-10-01

    Interchange segment sizes and the sizes of chromosome imbalance arising from the different modes of meiotic segregation were measured in a selected sample of 20 reciprocal translocations (Rcp). The Rcp were selected by two modes of ascertainment: (I) neonates with an unbalanced form of the translocation, and (II) couples with recurrent spontaneous abortions without evidence of full-term translocation aneuploid offspring. The measurements (% of haploid autosomal length: %HAL) were plotted as the observed or potential chromosomal imbalance with monosomy (abscissa) and trisomy (ordinate). It was found that (a) the interchange segments were larger in the spontaneous abortion Rcp, (b) all of the imbalances observed in full-term neonates plotted close to the origin and to the left of the line joining 4% trisomy to 2% monosomy, and (c) the imbalances observed in the neonates in each individual Rcp were of the smallest size possible arising from any segregation mode. It was concluded that a major factor in the survival to term of aneuploid conceptuses is the size (proportion of genome) of the chromosome abnormality, irrespective of the origin of the chromosome regions. These results are discussed in relation to their use as a model to evaluate the risk of abnormal offspring in the progeny of translocation heterozygotes (the Chromosome Imbalance Size-Viability Model).

  3. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction

    PubMed Central

    Friedman, Matt

    2009-01-01

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates—fishes—remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims. PMID:19276106

  4. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction.

    PubMed

    Friedman, Matt

    2009-03-31

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates-fishes-remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims.

  5. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x = 224) for radiotracking data and 16-130 km2 (x = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. 
Investigators who use home range estimates in statistical tests should consider the effects of variability on those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
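
The minimum convex polygon estimator is simply the area of the convex hull of the location fixes; a sketch (the percentage-trimmed variant shown is a common convention in home range studies, not necessarily this paper's exact protocol):

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(locations, pct=100.0):
    """Minimum convex polygon home range area.

    locations : (n, 2) array of x, y coordinates (e.g., in km)
    pct       : optionally keep only the pct% of fixes closest to the
                centroid before taking the hull, a common way to trim
                outlying excursions.
    """
    pts = np.asarray(locations, dtype=float)
    if pct < 100.0:
        d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        keep = d.argsort()[: max(3, int(len(pts) * pct / 100.0))]
        pts = pts[keep]
    hull = ConvexHull(pts)
    return hull.volume  # for 2-D input, .volume is the enclosed area
```

Because the hull can only grow as fixes are added, MCP area increases asymptotically with sample size, which is exactly the behavior the simulations above describe.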

  6. Selective plane illumination microscopy (SPIM) with time-domain fluorescence lifetime imaging microscopy (FLIM) for volumetric measurement of cleared mouse brain samples

    NASA Astrophysics Data System (ADS)

    Funane, Tsukasa; Hou, Steven S.; Zoltowska, Katarzyna Marta; van Veluw, Susanne J.; Berezovska, Oksana; Kumar, Anand T. N.; Bacskai, Brian J.

    2018-05-01

    We have developed an imaging technique which combines selective plane illumination microscopy with time-domain fluorescence lifetime imaging microscopy (SPIM-FLIM) for three-dimensional volumetric imaging of cleared mouse brains with micro- to mesoscopic resolution. The main features of the microscope include a wavelength-adjustable pulsed laser source (Ti:sapphire) (near-infrared) laser, a BiBO frequency-doubling photonic crystal, a liquid chamber, an electrically focus-tunable lens, a cuvette based sample holder, and an air (dry) objective lens. The performance of the system was evaluated with a lifetime reference dye and micro-bead phantom measurements. Intensity and lifetime maps of three-dimensional human embryonic kidney (HEK) cell culture samples and cleared mouse brain samples expressing green fluorescent protein (GFP) (donor only) and green and red fluorescent protein [positive Förster (fluorescence) resonance energy transfer] were acquired. The results show that the SPIM-FLIM system can be used for sample sizes ranging from single cells to whole mouse organs and can serve as a powerful tool for medical and biological research.

  7. Exact intervals and tests for median when one sample value possibly an outlier

    NASA Technical Reports Server (NTRS)

    Keller, G. J.; Walsh, J. E.

    1973-01-01

    Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejecting an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them: either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations and can select the wrong one of the two situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations remain valid for either of the two situations.
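
Intervals based on two symmetrically located order statistics have exact distribution-free coverage governed by the Binomial(n, 1/2) distribution; a sketch of choosing the ranks:

```python
from math import comb

def median_ci_orders(n, conf=0.95):
    """Distribution-free CI for the median from order statistics:
    pick the largest symmetric rank r (1-based) such that the interval
    [X_(r), X_(n-r+1)] has coverage >= conf. Coverage follows from
    counting how many observations fall below the median, which is
    Binomial(n, 1/2) for any continuous distribution."""
    def coverage(r):
        # P(X_(r) <= median <= X_(n-r+1)) = 1 - 2 * P(Bin(n, 1/2) <= r-1)
        tail = sum(comb(n, k) for k in range(r)) / 2 ** n
        return 1.0 - 2.0 * tail
    best = None
    for r in range(1, n // 2 + 1):
        if coverage(r) >= conf:
            best = (r, n - r + 1, coverage(r))
    return best  # (lower rank, upper rank, exact coverage), or None
```

For n = 10 at 95% this selects the 2nd and 9th order statistics, consistent with the remark above that the interval need not involve the extreme (possibly outlying) observations.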

  8. Pelvic dimorphism in relation to body size and body size dimorphism in humans.

    PubMed

    Kurki, Helen K

    2011-12-01

    Many mammalian species display sexual dimorphism in the pelvis, where females possess larger dimensions of the obstetric (pelvic) canal than males. This is contrary to the general pattern of body size dimorphism, where males are larger than females. Pelvic dimorphism is often attributed to selection relating to parturition, or as a developmental consequence of secondary sexual differentiation (different allometric growth trajectories of each sex). Among anthropoid primates, species with higher body size dimorphism have higher pelvic dimorphism (in converse directions), which is consistent with an explanation of differential growth trajectories for pelvic dimorphism. This study investigates whether the pattern holds intraspecifically in humans by asking: Do human populations with high body size dimorphism also display high pelvic dimorphism? Previous research demonstrated that in some small-bodied populations, relative pelvic canal size can be larger than in large-bodied populations, while others have suggested that larger-bodied human populations display greater body size dimorphism. Eleven human skeletal samples (total N: male = 229, female = 208) were utilized, representing a range of body sizes and geographical regions. Skeletal measurements of the pelvis and femur were collected and indices of sexual dimorphism for the pelvis and femur were calculated for each sample [ln(M/F)]. Linear regression was used to examine the relationships between indices of pelvic and femoral size dimorphism, and between pelvic dimorphism and female femoral size. Contrary to expectations, the results suggest that pelvic dimorphism in humans is generally not correlated with body size dimorphism or female body size. These results indicate that divergent patterns of dimorphism exist for the pelvis and body size in humans. Implications for the evaluation of the evolution of pelvic dimorphism and rotational childbirth in Homo are considered. Copyright © 2011 Elsevier Ltd. 
All rights reserved.

  9. Interspecific competition alters nonlinear selection on offspring size in the field.

    PubMed

    Marshall, Dustin J; Monro, Keyne

    2013-02-01

    Offspring size is one of the most important life-history traits with consequences for both the ecology and evolution of most organisms. Surprisingly, formal estimates of selection on offspring size are rare, and the degree to which selection (particularly nonlinear selection) varies among environments remains poorly explored. We estimate linear and nonlinear selection on offspring size, module size, and senescence rate for a sessile marine invertebrate in the field under three different intensities of interspecific competition. The intensity of competition strongly modified the strength and form of selection acting on offspring size. We found evidence for differences in nonlinear selection across the three environments. Our results suggest that the fitness returns of a given offspring size depend simultaneously on their environmental context, and on the context of other offspring traits. Offspring size effects can be more pervasive with regards to their influence on the fitness returns of other traits than previously recognized, and we suggest that the evolution of offspring size cannot be understood in isolation from other traits. Overall, variability in the form and strength of selection on offspring size in nature may reduce the efficacy of selection on offspring size and maintain variation in this trait. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.

  10. Flexible proton 3D MR spectroscopic imaging of the prostate with low-power adiabatic pulses for volume selection and spiral readout.

    PubMed

    Steinseifer, Isabell K; Philips, Bart W J; Gagoski, Borjan; Weiland, Elisabeth; Scheenen, Tom W J; Heerschap, Arend

    2017-03-01

Cartesian k-space sampling in three-dimensional magnetic resonance spectroscopic imaging (MRSI) of the prostate limits the selection of voxel size and acquisition time. Therefore, large prostates are often scanned at reduced spatial resolutions to stay within clinically acceptable measurement times. Here we present a semilocalized adiabatic selective refocusing (sLASER) sequence with gradient-modulated offset-independent adiabatic (GOIA) refocusing pulses and spiral k-space acquisition (GOIA-sLASER-Spiral) for fast prostate MRSI with enhanced resolution and extended matrix sizes. MR was performed at 3 tesla with an endorectal receive coil. GOIA-sLASER-Spiral at an echo time (TE) of 90 ms was compared to a point-resolved spectroscopy sequence (PRESS) with weighted, elliptical phase encoding at a TE of 145 ms using simulations and measurements of phantoms and patients (n = 9). GOIA-sLASER-Spiral acquisition allows prostate MR spectra to be obtained in ∼5 min with a quality comparable to those acquired with a common Cartesian PRESS protocol in ∼9 min, or at an enhanced spatial resolution showing more precise tissue allocation of metabolites. Extended fields of view (FOVs) and matrix sizes for large prostates are possible without compromising spatial resolution or measurement time. The flexibility of spiral sampling enables prostate MRSI with a wide range of resolutions and FOVs without undesirable increases in acquisition times, as in Cartesian encoding. This approach is suitable for routine clinical exams of prostate metabolites. Magn Reson Med 77:928-935, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Clutch size declines with elevation in tropical birds

    USGS Publications Warehouse

    Boyce, A.J.; Freeman, Benjamin G.; Mitchell, Adam E.; Martin, Thomas E.

    2015-01-01

    Clutch size commonly decreases with increasing elevation among temperate-zone and subtropical songbird species. Tropical songbirds typically lay small clutches, thus the ability to evolve even smaller clutch sizes at higher elevations is unclear and untested. We conducted a comparative phylogenetic analysis using data gathered from the literature to test whether clutch size varied with elevation among forest passerines from three tropical biogeographic regions—the Venezuelan Andes and adjacent lowlands, Malaysian Borneo, and New Guinea. We found a significant negative effect of elevation on variation in clutch size among species. We found the same pattern using field data sampled across elevational gradients in Venezuela and Malaysian Borneo. Field data were not available for New Guinea. Both sets of results demonstrate that tropical montane species across disparate biogeographic realms lay smaller clutches than closely related low-elevation species. The environmental sources of selection underlying this pattern remain uncertain and merit further investigation.

  12. A longitudinal study of the relationships between the Big Five personality traits and body size perception.

    PubMed

    Hartmann, Christina; Siegrist, Michael

    2015-06-01

The present study investigated the longitudinal development of body size perception in relation to different personality traits. A sample of Swiss adults (N=2905, 47% men), randomly selected from the telephone book, completed a questionnaire in two consecutive years (2012, 2013). Body size perception was assessed with the Contour Drawing Rating Scale and personality traits were assessed with a short version of the Big Five Inventory. Longitudinal analysis of change indicated that men and women scoring higher on conscientiousness perceived themselves as thinner one year later. In contrast, women scoring higher on neuroticism perceived their body size as larger one year later. No significant effect was observed for men scoring higher on neuroticism. These results were independent of weight changes, body mass index, age, and education. Our findings suggest that personality traits contribute to body size perception among adults. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Porosity characterization for heterogeneous shales using integrated multiscale microscopy

    NASA Astrophysics Data System (ADS)

    Rassouli, F.; Andrew, M.; Zoback, M. D.

    2016-12-01

Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport properties of shales. Study of the diverse distribution of pore sizes and structures in such low-permeability rocks is hindered by the lack of tools to visualize the microstructural properties of shale rocks. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (x-ray diffraction, thin section analysis and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville formation in East Texas and to generate a more holistic understanding of the porosity structure in shales, ranging from the standard core plug down to the nm scale. Standard 1" diameter core plug samples are first imaged using a Versa 3D x-ray microscope at lower resolutions. We then pick several regions of interest (ROIs) with various micro-features (such as micro-cracks and high organic matter content) in the rock samples and run higher resolution CT scans using non-destructive interior tomography. After this step, we cut the samples and drill 5 mm diameter cores out of the selected ROIs, then rescan the samples to measure the porosity distribution of the 5 mm cores. We repeat this step for samples 1 mm in diameter cut out of the 5 mm cores using a laser cutting machine. After comparing the pore structure and distribution of the samples measured from micro-CT analysis, we move to nano-scale imaging to capture the ultra-fine pores within the shale samples. At this stage, the 1 mm samples are milled down to a diameter of 70 microns using the laser beam. We scan these samples in a nano-CT Ultra x-ray microscope and calculate the porosity of the samples by image segmentation methods. Finally, we use images collected from focused ion beam scanning electron microscopy (FIB-SEM) to compare the results of porosity measurements from all the different imaging techniques. These multi-scale characterization techniques are then compared with traditional analytical techniques such as mercury porosimetry.
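The final image-segmentation step reduces to a voxel count; a minimal sketch, assuming a binary segmentation where 1 marks pore and 0 marks solid:

```python
def porosity_from_segmentation(volume):
    # volume: nested lists (z, y, x) of binary labels, 1 = pore, 0 = solid.
    voxels = [v for plane in volume for row in plane for v in row]
    return sum(voxels) / len(voxels)

# Toy 2x2x2 segmented volume with two pore voxels out of eight:
vol = [[[1, 0], [0, 0]], [[1, 0], [0, 0]]]
```

The same count-over-total computation applies at every imaging scale; only the segmented volume changes.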

  14. In vitro and in vivo studies of biodegradable fine grained AZ31 magnesium alloy produced by equal channel angular pressing.

    PubMed

    Ratna Sunil, B; Sampath Kumar, T S; Chakkingal, Uday; Nandakumar, V; Doble, Mukesh; Devi Prasad, V; Raghunath, M

    2016-02-01

The objective of the present work is to investigate the role of different grain sizes produced by equal channel angular pressing (ECAP) on the degradation behavior of magnesium alloy using in vitro and in vivo studies. Commercially available AZ31 magnesium alloy was selected and processed by ECAP at 300°C for up to four passes using route Bc. Grain refinement from a starting size of 46 μm to a grain size distribution of 1-5 μm was successfully achieved after the 4th pass. Wettability of ECAPed samples, assessed by contact angle measurements, was found to increase due to the fine grain structure. In vitro degradation and bioactivity of the samples, studied by immersion in supersaturated simulated body fluid (SBF 5×), showed rapid mineralization within 24 h due to the increased wettability of the fine grained AZ31 Mg alloy. Corrosion behavior of the samples, assessed by weight loss and electrochemical tests conducted in SBF 5×, clearly showed the prominent role of enhanced mineral deposition on ECAPed AZ31 Mg in controlling abnormal degradation. Cytotoxicity studies by MTT colorimetric assay showed that all samples were viable. Additionally, cell adhesion was excellent on ECAPed samples, particularly the 3rd and 4th pass samples. In vivo experiments conducted using New Zealand White rabbits clearly showed a lower degradation rate for the ECAPed sample compared with annealed AZ31 Mg alloy; all samples were biocompatible and no health abnormalities were noticed in the animals after 60 days of in vivo studies. These results suggest that grain size plays an important role in the degradation management of magnesium alloys and that the ECAP technique can be adopted to achieve fine grain structures for developing degradable magnesium alloys for biomedical applications. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Determination of respirable-sized crystalline silica in different ambient environments in the United Kingdom with a mobile high flow rate sampler utilising porous foams to achieve the required particle size selection

    NASA Astrophysics Data System (ADS)

    Stacey, Peter; Thorpe, Andrew; Roberts, Paul; Butler, Owen

    2018-06-01

Inhalation of respirable crystalline silica (RCS) can cause diseases including silicosis and cancer. Levels of RCS close to an emission source are measured, but little is known about the wider ambient exposure from industry emissions or natural sources. The aim of this work is to report the RCS concentrations obtained from a variety of ambient environments using a new mobile respirable (PM4) sampler. A mobile, battery-powered, high flow rate (52 L min-1) sampler was developed and evaluated for particulate aerosol sampling, employing foams to select the respirable particle size fraction. Sampling was conducted in the United Kingdom at site boundaries surrounding seven urban construction and demolition sites and five sand quarry sites. These are compared with data from twelve urban aerosol samples and from repeat measurements in a baseline study at a single rural site. The 50% particle size penetration (d50) through the foam was 4.3 μm. Over 85% of predicted bias values were within ±10% of the respirable convention, which is based on a log-normal curve. Results for RCS from all construction and quarry activities are generally low, with a 95th percentile of 11 μg m-3. Eighty percent of results were less than the health benchmark value of 3 μg m-3 used in some states in America for ambient concentrations. The power cutting of brick and the largest demolition activities gave the highest construction levels. Measured urban background RCS levels were typically below 0.3 μg m-3 and the median RCS level, at a rural background location, was 0.02 μg m-3. These reported ambient RCS concentrations may provide useful baseline values to assess the wider impact of fugitive, RCS-containing dust emissions into the wider environment.
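The respirable convention the foam sampler is compared against can be evaluated directly. This sketch assumes the standard log-normal parameters (median 4.25 μm, geometric standard deviation 1.5) from the ISO 7708 / EN 481 definition, which are not stated in the abstract:

```python
import math

def respirable_fraction(d_um, median=4.25, gsd=1.5):
    # Complement of a cumulative log-normal distribution: the fraction of
    # particles of aerodynamic diameter d_um (micrometres) that a sampler
    # should collect under the respirable convention (assumed ISO 7708 /
    # EN 481 parameters: median 4.25 um, geometric SD 1.5).
    z = math.log(d_um / median) / math.log(gsd)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
```

By construction the collected fraction is 50% at the median diameter, consistent with the measured d50 of 4.3 μm being close to the convention.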

  16. A search for faint high-redshift radio galaxy candidates at 150 MHz

    NASA Astrophysics Data System (ADS)

    Saxena, A.; Jagannathan, P.; Röttgering, H. J. A.; Best, P. N.; Intema, H. T.; Zhang, M.; Duncan, K. J.; Carilli, C. L.; Miley, G. K.

    2018-04-01

Ultrasteep spectrum (USS) radio sources are good tracers of powerful radio galaxies at z > 2. Identification of even a single bright radio galaxy at z > 6 can be used to detect redshifted 21 cm absorption due to neutral hydrogen in the intervening intergalactic medium. Here we describe a new sample of high-redshift radio galaxy (HzRG) candidates constructed from the TIFR GMRT Sky Survey First Alternative Data Release survey at 150 MHz. We employ USS selection (α ≤ -1.3) in ˜10 000 deg2, in combination with strict size selection and non-detections in all-sky optical and infrared surveys. We apply flux density cuts that probe a unique parameter space in flux density (50 mJy < S150 < 200 mJy) to build a sample of 32 HzRG candidates. Follow-up Karl G. Jansky Very Large Array (VLA) observations at 1.4 GHz with an average beam size of 1.3 arcsec revealed ˜48 per cent of sources to have a single radio component. P-band (370 MHz) imaging of 17 of these sources revealed a flattening radio SED for 10 sources at low frequencies, which is expected from compact HzRGs. Two of our sources lie in fields where deeper multiwavelength photometry and ancillary radio data are available, and for one of these we find a best-fitting photo-z of 4.8 ± 2.0. The other source has zphot = 1.4 ± 0.1 and a small angular size (3.7 arcsec), which could be associated with an obscured star-forming galaxy or with a `dead' elliptical. One USS radio source that is not part of the HzRG sample but was nonetheless observed with the VLA is revealed to be a candidate giant radio galaxy with a host galaxy photo-z of 1.8 ± 0.5, indicating a size of 875 kpc.
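The USS selection criterion (α ≤ -1.3) is a two-point spectral index; a minimal sketch, with the 1.4 GHz comparison frequency taken from the VLA follow-up described above and the flux density values purely illustrative:

```python
import math

def spectral_index(s_low, nu_low, s_high, nu_high):
    # Spectral index alpha defined by S proportional to nu**alpha,
    # computed from flux densities at two frequencies.
    return math.log(s_low / s_high) / math.log(nu_low / nu_high)

def is_uss(s150_mjy, s1400_mjy, threshold=-1.3):
    # Ultrasteep-spectrum selection between 150 MHz and 1.4 GHz.
    return spectral_index(s150_mjy, 150e6, s1400_mjy, 1400e6) <= threshold
```

Any consistent pair of frequencies works; only the ratio of fluxes and the ratio of frequencies enter the index.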

  17. Practical characteristics of adaptive design in phase 2 and 3 clinical trials.

    PubMed

    Sato, A; Shimura, M; Gosho, M

    2018-04-01

    Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success and reduce the allocation of patients into ineffective treatment groups by the early termination of clinical trials. However, the comprehensive details regarding which types of clinical trials will include adaptive designs remain unclear. We examined the practical characteristics of adaptive design used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials by the publication year increased from 2012 to 2013 and did not greatly change afterwards. The most frequently used adaptive design was group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. The adaptive randomization (n = 8, 3.3%) and adaptive seamless design (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two trials of eight trials in total (25%). 
Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where the study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among the 202 trials, 132 (54% of the 245 trials) had fewer randomized patients than initially planned. This result supports the use of adaptive designs to shorten study durations and enroll fewer subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were frequently reported in neoplasm or cardiovascular clinical trials. Adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.

  18. Revising traditional theory on the link between plant body size and fitness under competition: evidence from old-field vegetation

    PubMed Central

    Tracey, Amanda J; Aarssen, Lonnie W

    2014-01-01

    The selection consequences of competition in plants have been traditionally interpreted based on a “size-advantage” hypothesis – that is, under intense crowding/competition from neighbors, natural selection generally favors capacity for a relatively large plant body size. However, this conflicts with abundant data, showing that resident species body size distributions are usually strongly right-skewed at virtually all scales within vegetation. Using surveys within sample plots and a neighbor-removal experiment, we tested: (1) whether resident species that have a larger maximum potential body size (MAX) generally have more successful local individual recruitment, and thus greater local abundance/density (as predicted by the traditional size-advantage hypothesis); and (2) whether there is a general between-species trade-off relationship between MAX and capacity to produce offspring when body size is severely suppressed by crowding/competition – that is, whether resident species with a larger MAX generally also need to reach a larger minimum reproductive threshold size (MIN) before they can reproduce at all. The results showed that MIN had a positive relationship with MAX across resident species, and local density – as well as local density of just reproductive individuals – was generally greater for species with smaller MIN (and hence smaller MAX). In addition, the cleared neighborhoods of larger target species (which had relatively large MIN) generally had – in the following growing season – a lower ratio of conspecific recruitment within these neighborhoods relative to recruitment of other (i.e., smaller) species (which had generally smaller MIN). 
These data are consistent with an alternative hypothesis based on a ‘reproductive-economy-advantage’ – that is, superior fitness under competition in plants generally requires not larger potential body size, but rather superior capacity to recruit offspring that are in turn capable of producing grand-offspring – and hence transmitting genes to future generations – despite intense and persistent (cross-generational) crowding/competition from near neighbors. Selection for the latter is expected to favor relatively small minimum reproductive threshold size and hence – as a trade-off – relatively small (not large) potential body size. PMID:24772274

  19. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates with data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced.
Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
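Taxon coverage density, as defined above, is straightforward to compute from a taxon-by-gene presence/absence matrix; a minimal sketch:

```python
def taxon_coverage_density(presence):
    # presence: rows = taxa, columns = genes; 1 if any data are present
    # for that taxon-gene combination, 0 otherwise.
    cells = sum(len(row) for row in presence)
    filled = sum(sum(row) for row in presence)
    return filled / cells

# Two taxa, three genes, four of six cells filled -> density 0.667:
density = taxon_coverage_density([[1, 1, 0], [1, 0, 1]])
```

Densities below roughly 0.90 were where terraces appeared in the surveyed data sets, so this single summary statistic is a quick screen for terrace risk.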

  20. Mixed nano/micro-sized calcium phosphate composite and EDTA root surface etching improve availability of graft material in intrabony defects: an in vivo scanning electron microscopy evaluation.

    PubMed

    Gamal, Ahmed Y; Iacono, Vincent J

    2013-12-01

The use of nanoparticles of graft materials may lead to breakthrough applications for periodontal regeneration. However, due to their small particle size, nanoparticles may be eliminated from periodontal defects by phagocytosis. In an attempt to improve nanoparticle retention in periodontal defects, the present in vivo study uses scanning electron microscopy (SEM) to evaluate the potential of micrograft particles of β-tricalcium phosphate (β-TCP) to enhance the binding and retention of nanoparticles of hydroxyapatite (nHA) on EDTA-treated and non-treated root surfaces in periodontal defects after 14 days of healing. Sixty patients having at least two hopeless periodontally affected teeth designated for extraction were randomly divided into four treatment groups (15 patients per group). Patients in group 1 had selected periodontal intrabony defects grafted with nHA of particle size 10 to 100 nm. Patients in group 2 were treated in a similar manner but had the affected roots etched for 2 minutes with a neutral 24% EDTA gel before grafting of the associated vertical defects with nHA. Patients in group 3 had the selected intrabony defects grafted with a composite graft consisting of equal volumes of nHA and β-TCP (particle size 63 to 150 μm). Patients in group 4 were treated as in group 3 but the affected roots were etched with neutral 24% EDTA as in group 2. For each of the four groups, one tooth was extracted immediately, and the second tooth was extracted after 14 days of healing for SEM evaluation. Fourteen days after surgery, all group 1 samples were devoid of any nanoparticles adherent to the root surfaces. Group 2 showed root surface areas 44.7% covered by a single layer of clot-blended grafted particles 14 days following graft application. After 14 days, group 3 samples appeared to retain fibrin strands devoid of grafted particles.
Immediately extracted root samples of group 4 had adherent graft particles that covered a considerable area of the root surfaces (88.6%). Grafted particles appeared to cover all samples in a multilayered pattern. After 14 days, the group 4 extracted samples showed multilayered fibrin-covered nano/micro-sized graft particles adherent to the root surfaces (78.5%). The use of a composite graft consisting of nHA and microsized β-TCP after root surface treatment with 24% EDTA may be a suitable method to improve nHA retention in periodontal defects with subsequent graft bioreactivity.

  1. A group sequential adaptive treatment assignment design for proof of concept and dose selection in headache trials.

    PubMed

Hall, David B; Meier, Ulrich; Diener, Hans-Christoph

    2005-06-01

The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed, and its operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on the response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next group was assigned the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose and at least four groups at that dose with more than 60% response was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of type 1 error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.
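The adaptive assignment rule can be illustrated with a short simulation. The dose levels, per-dose response rates, and fixed number of groups below are hypothetical, and the published stopping rule (five groups at the target dose, four with >60% response) is simplified to a fixed-length walk:

```python
import random

def simulate_dose_walk(response_rates, group_size=4, target=0.60,
                       n_groups=12, seed=7):
    # Sketch of the up-and-down group assignment rule described above,
    # under hypothetical response rates (not data from the trial).
    rng = random.Random(seed)
    doses = sorted(response_rates)
    i = 0  # begin at the lowest dose
    path = []
    for _ in range(n_groups):
        dose = doses[i]
        responders = sum(rng.random() < response_rates[dose]
                         for _ in range(group_size))
        path.append((dose, responders))
        if responders / group_size > target:
            i = max(i - 1, 0)                # step down a dose
        else:
            i = min(i + 1, len(doses) - 1)   # step up a dose
    return path

# Hypothetical dose-response curve over seven doses:
rates = {1: 0.2, 2: 0.35, 5: 0.5, 10: 0.65, 20: 0.75, 50: 0.8, 100: 0.85}
path = simulate_dose_walk(rates)
```

Because each group's dose depends only on the previous group's response, the walk concentrates assignments near the dose whose response rate crosses the 60% target.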

  2. Comparing the efficiency of digital and conventional soil mapping to predict soil types in a semi-arid region in Iran

    NASA Astrophysics Data System (ADS)

    Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter

    2017-05-01

The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup), (ii) validate the predicted soil maps with the same validation data set to determine the best method for producing soil maps, and (iii) select the best soil taxonomic level by different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of soil maps. Multinomial logistic regression performed better at higher taxonomic levels (order and suborder), whereas random forest performed better at lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map on which the conventional soil map was largely based. Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
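The map purity and Kappa index used to compare the approaches can be computed from paired observed/predicted class labels at validation sites. A minimal sketch, treating map purity as simple per-site agreement (an assumption about the authors' exact definition):

```python
def map_purity(observed, predicted):
    # Proportion of validation sites whose predicted soil class
    # matches the observed class (assumed definition of map purity).
    return sum(o == p for o, p in zip(observed, predicted)) / len(observed)

def cohens_kappa(observed, predicted):
    # Agreement corrected for chance over the same validation sites.
    n = len(observed)
    classes = set(observed) | set(predicted)
    po = map_purity(observed, predicted)
    pe = sum((observed.count(c) / n) * (predicted.count(c) / n)
             for c in classes)
    return (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected from class frequencies alone, which is why both metrics are reported together when comparing categorical maps.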

  3. SweeD: likelihood-based detection of selective sweeps in thousands of genomes.

    PubMed

    Pavlidis, Pavlos; Živkovic, Daniel; Stamatakis, Alexandros; Alachiotis, Nikolaos

    2013-09-01

The advent of modern DNA sequencing technology is the driving force in obtaining complete intra-specific genomes that can be used to detect loci that have been subject to positive selection in the recent past. Based on selective sweep theory, beneficial loci can be detected by examining the single nucleotide polymorphism patterns in intraspecific genome alignments. In the last decade, a plethora of algorithms for identifying selective sweeps have been developed. However, the majority of these algorithms have not been designed for analyzing whole-genome data. We present SweeD (Sweep Detector), an open-source tool for the rapid detection of selective sweeps in whole genomes. It analyzes site frequency spectra and represents a substantial extension of the widely used SweepFinder program. The sequential version of SweeD is up to 22 times faster than SweepFinder and, more importantly, is able to analyze thousands of sequences. We also provide a parallel implementation of SweeD for multi-core processors. Furthermore, we implemented a checkpointing mechanism that allows SweeD to be deployed on cluster systems with queue execution time restrictions and to resume long-running analyses after processor failures. In addition, the user can specify various demographic models via the command line to calculate their theoretically expected site frequency spectra. Therefore (in contrast to SweepFinder), the neutral site frequencies can optionally be calculated directly from a given demographic model. We show that an increase in sample size results in more precise detection of positive selection. Thus, the ability to analyze substantially larger sample sizes by using SweeD leads to more accurate sweep detection. We validate SweeD via simulations and by scanning the first chromosome from the 1000 Genomes Project for selective sweeps. We compare SweeD results with results from a linkage-disequilibrium-based approach and identify common outliers.
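The site frequency spectrum that SweeD analyzes can be illustrated with a toy computation. This sketch assumes polarized 0/1 (ancestral/derived) calls and is not SweeD's implementation:

```python
def site_frequency_spectrum(sites):
    # sites: list of per-site genotype vectors over n sampled sequences,
    # 0 = ancestral allele, 1 = derived allele.
    # Returns sfs[k] = number of sites where the derived allele
    # appears in exactly k of the n sequences.
    n = len(sites[0])
    sfs = [0] * (n + 1)
    for site in sites:
        sfs[sum(site)] += 1
    return sfs
```

A recent sweep skews this spectrum toward the k=0/low-frequency and near-fixed classes, which is the signal that composite-likelihood sweep detectors score along the genome.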

  4. Optimizing larval assessment to support sea lamprey control in the Great Lakes

    USGS Publications Warehouse

    Hansen, Michael J.; Adams, Jean V.; Cuddy, Douglas W.; Richards, Jessica M.; Fodale, Michael F.; Larson, Geraldine L.; Ollila, Dale J.; Slade, Jeffrey W.; Steeves, Todd B.; Young, Robert J.; Zerrenner, Adam

    2003-01-01

Elements of the larval sea lamprey (Petromyzon marinus) assessment program that most strongly influence the chemical treatment program were analyzed, including selection of streams for larval surveys, allocation of sampling effort among stream reaches, allocation of sampling effort among habitat types, estimation of daily growth rates, and estimation of metamorphosis rates, to determine how uncertainty in each element influenced the stream selection program. First, the stream selection model based on the current larval assessment sampling protocol significantly underestimated transforming sea lamprey abundance, transforming sea lampreys killed, and marginal costs per sea lamprey killed, compared to a protocol that included more years of data (especially for large streams). Second, larval density in streams varied significantly with Type-I habitat area, but not with total area or reach length. Third, the ratio of larval density between Type-I and Type-II habitat varied significantly among streams, and the optimal allocation of sampling effort varied with the proportion of habitat types and the variability of larval density within each habitat. Fourth, mean length varied significantly among streams and years. Last, size at metamorphosis varied more among years than within or among regions, and metamorphosis varied significantly among streams within regions. Study results indicate that: (1) the stream selection model should be used to identify streams with potentially high residual populations of larval sea lampreys; (2) larval sampling in Type-II habitat should be initiated in all streams by increasing sampling in Type-II habitat to 50% of the sampling effort in Type-I habitat; and (3) methods should be investigated to reduce uncertainty in estimates of sea lamprey production, with emphasis on those that reduce the uncertainty associated with larval length at the end of the growing season and those used to predict metamorphosis.
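The finding that optimal sampling effort tracks both the proportion of habitat types and the within-habitat variability of larval density matches classical Neyman allocation for stratified sampling; a minimal sketch of that textbook rule, not the authors' exact procedure:

```python
def neyman_allocation(total_n, strata):
    # strata: list of (habitat_area, sd_of_larval_density) pairs.
    # Neyman allocation: effort per stratum is proportional to
    # stratum size times within-stratum standard deviation.
    weights = [area * sd for area, sd in strata]
    total_w = sum(weights)
    return [total_n * w / total_w for w in weights]
```

With equal variability, effort splits in proportion to habitat area; a more variable habitat draws extra effort even when it covers less area.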

  5. The genealogy of sequences containing multiple sites subject to strong selection in a subdivided population.

    PubMed Central

    Nordborg, Magnus; Innan, Hideki

    2003-01-01

    A stochastic model for the genealogy of a sample of recombining sequences containing one or more sites subject to selection in a subdivided population is described. Selection is incorporated by dividing the population into allelic classes and then conditioning on the past sizes of these classes. The past allele frequencies at the selected sites are thus treated as parameters rather than as random variables. The purpose of the model is not to investigate the dynamics of selection, but to investigate effects of linkage to the selected sites on the genealogy of the surrounding chromosomal region. This approach is useful for modeling strong selection, when it is natural to parameterize the past allele frequencies at the selected sites. Several models of strong balancing selection are used as examples, and the effects on the pattern of neutral polymorphism in the chromosomal region are discussed. We focus in particular on the statistical power to detect balancing selection when it is present. PMID:12663556

  6. The genealogy of sequences containing multiple sites subject to strong selection in a subdivided population.

    PubMed

    Nordborg, Magnus; Innan, Hideki

    2003-03-01

    A stochastic model for the genealogy of a sample of recombining sequences containing one or more sites subject to selection in a subdivided population is described. Selection is incorporated by dividing the population into allelic classes and then conditioning on the past sizes of these classes. The past allele frequencies at the selected sites are thus treated as parameters rather than as random variables. The purpose of the model is not to investigate the dynamics of selection, but to investigate effects of linkage to the selected sites on the genealogy of the surrounding chromosomal region. This approach is useful for modeling strong selection, when it is natural to parameterize the past allele frequencies at the selected sites. Several models of strong balancing selection are used as examples, and the effects on the pattern of neutral polymorphism in the chromosomal region are discussed. We focus in particular on the statistical power to detect balancing selection when it is present.

  7. Kinetic analysis of cooking losses from beef and other animal muscles heated in a water bath--effect of sample dimensions and prior freezing and ageing.

    PubMed

    Oillic, Samuel; Lemoine, Eric; Gros, Jean-Bernard; Kondjoyan, Alain

    2011-07-01

    Cooking loss kinetics were measured on cubes and parallelepipeds of beef Semimembranosus muscle ranging from 1 cm × 1 cm × 1 cm to 7 cm × 7 cm × 28 cm in size. The samples were heated in a water bath at three different temperatures, i.e. 50°C, 70°C and 90°C, and for five different times. Temperatures were simulated to help interpret the results. Pre-freezing the samples, differences in ageing time, and muscle fiber orientation had little influence on cooking losses. At longer treatment times, the effects of sample size disappeared and cooking losses depended only on the temperature. A selection of the tests was repeated on four other beef muscles and on veal, horse and lamb Semimembranosus muscle. Kinetics followed similar curves in all cases but resulted in different final water contents. The shape of the kinetics curves suggests first-order kinetics. Copyright © 2011 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
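
    The closing observation, that the curve shapes suggest first-order kinetics, corresponds to an exponential relaxation toward an equilibrium water content. A minimal sketch of that implied model (symbols and parameter values are hypothetical; the abstract does not report the fitted constants):

```python
import math

def water_content(t, w0, w_eq, k):
    """First-order kinetics: the water content relaxes exponentially
    from its initial value w0 toward the temperature-dependent
    equilibrium value w_eq, with rate constant k (per unit time)."""
    return w_eq + (w0 - w_eq) * math.exp(-k * t)

# At t = 0 the sample holds its initial water content; at long times
# it approaches the equilibrium value, so size effects vanish.
start = water_content(0.0, w0=75.0, w_eq=60.0, k=0.05)
late = water_content(200.0, w0=75.0, w_eq=60.0, k=0.05)
```

    This is consistent with the abstract's finding that at long treatment times losses depend only on temperature (through w_eq), not on sample size.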

  8. Increasing Complexity of Clinical Research in Gastroenterology: Implications for Training Clinician-Scientists

    PubMed Central

    Scott, Frank I.; McConnell, Ryan A.; Lewis, Matthew E.; Lewis, James D.

    2014-01-01

    Background: Significant advances have been made in clinical and epidemiologic research methods over the past 30 years. We sought to demonstrate the impact of these advances on published research in gastroenterology from 1980 to 2010. Methods: Three journals (Gastroenterology, Gut, and American Journal of Gastroenterology) were selected for evaluation given their continuous publication during the study period. Twenty original clinical articles were randomly selected from each journal from 1980, 1990, 2000, and 2010. Each article was assessed for topic studied; whether the outcome was clinical or physiologic; study design; sample size; number of authors and centers collaborating; and reporting of statistical methods such as sample size calculations, p-values, confidence intervals, and advanced techniques such as bioinformatics or multivariate modeling. Research support with external funding was also recorded. Results: A total of 240 articles were included in the study. From 1980 to 2010, there was a significant increase in analytic studies (p<0.001), clinical outcomes (p=0.003), median number of authors per article (p<0.001), multicenter collaboration (p<0.001), sample size (p<0.001), and external funding (p<0.001). There was significantly increased reporting of p-values (p=0.01), confidence intervals (p<0.001), and power calculations (p<0.001). There was also increased utilization of large multicenter databases (p=0.001), multivariate analyses (p<0.001), and bioinformatics techniques (p=0.001). Conclusions: There has been a dramatic increase in the complexity of clinical research related to gastroenterology and hepatology over the last three decades. This increase highlights the need for advanced training of clinical investigators to conduct future research. PMID:22475957

  9. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists. It has been common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that cannot be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored its statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier's accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
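
    The learning-curve idea underlying IPL, fitting the cross-validation error rate as a power law in sample size and extrapolating it, can be sketched as follows. This is a toy illustration (not the MLbias implementation) that assumes a two-parameter curve err(n) = a·n^(−b) and fits it by least squares in log-log space:

```python
import math

def fit_inverse_power_law(sizes, errors):
    """Fit err(n) ~ a * n**(-b) by ordinary least squares on
    (log n, log err); returns (a, b). Extrapolate the error rate
    to a larger cohort with a * n**(-b)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(e) for e in errors]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx            # equals -b
    intercept = my - slope * mx  # equals log a
    return math.exp(intercept), -slope

# Synthetic learning curve lying exactly on err = 0.5 * n**(-0.3):
sizes = [30, 60, 120, 240]
errors = [0.5 * n ** -0.3 for n in sizes]
a, b = fit_inverse_power_law(sizes, errors)
```

    On real cross-validation estimates the points scatter around the curve, so the fit smooths noise as well as enabling extrapolation.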

  10. Freezing of gait and fall detection in Parkinson's disease using wearable sensors: a systematic review.

    PubMed

    Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J

    2017-08-01

    Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD), and of their validation performance, is presented. A systematic search in the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 to 48 PD patients presenting Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73% to 100% for sensitivity and from 67% to 100% for specificity. Falls and fall-risk studies were all home-based, with sample sizes of 1 to 107 PD patients, mostly using one sensor containing accelerometers, worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed with relatively small sample sizes, and there was significant variability in the outcomes measured and the results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligned data collection protocols, and shared data sets.

  11. Fractionating power and outlet stream polydispersity in asymmetrical flow field-flow fractionation. Part I: isocratic operation.

    PubMed

    Williams, P Stephen

    2016-05-01

    Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.

  12. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    PubMed

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method proved more appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
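
    The UVE criterion favored by the authors works by appending artificial noise variables to the data, refitting the PLS model under resampling (e.g. leave-one-out), and discarding any real variable whose coefficient stability is no better than that of the best noise variable. A schematic of the final selection step only (the resampled coefficient lists are assumed given; this is not the authors' code):

```python
import statistics

def uve_select(real_coefs, noise_coefs):
    """Elimination of uninformative variables (UVE): keep a real
    variable only if its reliability |mean/std| over resampled PLS
    coefficients exceeds the largest reliability achieved by any of
    the added noise variables.

    real_coefs, noise_coefs: dicts mapping variable name -> list of
    regression coefficients, one per resampled model."""
    def reliability(bs):
        return abs(statistics.fmean(bs) / statistics.stdev(bs))

    cutoff = max(reliability(bs) for bs in noise_coefs.values())
    return [v for v, bs in real_coefs.items() if reliability(bs) > cutoff]

# Hypothetical coefficients over four leave-one-out models: "a" is
# stable (informative), "b" fluctuates around zero (uninformative).
real = {"a": [1.0, 1.1, 0.9, 1.0], "b": [0.1, -0.1, 0.05, -0.05]}
noise = {"n1": [0.2, -0.3, 0.1, 0.1]}
kept = uve_select(real, noise)
```

    In the paper's setting this is what shrinks the >5700 spectral variables down to the 183 retained for the final PLS model.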

  13. Effective population sizes of a major vector of human diseases, Aedes aegypti.

    PubMed

    Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R

    2017-12-01

    The effective population size (Ne) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, Ne is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate Ne in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate Ne on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our Ne estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (Nc) estimates for this species range between one and five thousand, so the Ne/Nc ratio is about the same as for most animals. These Ne values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.
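
    The temporal method referenced above estimates Ne from the variance of allele-frequency change between two samples taken t generations apart. A minimal sketch using the classical Nei-Tajima Fc statistic with the standard sample-size correction (an assumption for illustration; the abstract does not specify which temporal estimator was used):

```python
def temporal_ne(x, y, t, s0, st):
    """Moment-based temporal estimate of effective population size.

    x, y: allele frequencies at one locus in the two temporal samples
    t:    generations between samples
    s0, st: numbers of individuals sampled at each time point

    Fc averages the standardized squared frequency change over
    alleles; subtracting 1/(2*s0) + 1/(2*st) removes sampling noise,
    and the remainder is attributed to drift over t generations."""
    fc = sum((xi - yi) ** 2 / ((xi + yi) / 2 - xi * yi)
             for xi, yi in zip(x, y)) / len(x)
    f_drift = fc - 1 / (2 * s0) - 1 / (2 * st)
    return t / (2 * f_drift)

# Hypothetical biallelic locus sampled 10 generations apart:
ne = temporal_ne([0.5, 0.5], [0.6, 0.4], t=10, s0=50, st=50)
```

    In practice Fc is averaged over many loci (12 microsatellites or thousands of SNPs here), which is what narrows the confidence interval on Ne.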

  14. Learning Progress in Evolution Theory: Climbing a Ladder or Roaming a Landscape?

    ERIC Educational Resources Information Center

    Zabel, Jorg; Gropengiesser, Harald

    2011-01-01

    The objective of this naturalistic study was to explore, model and visualise the learning progress of 13-year-old students in the domain of evolution theory. Data were collected under actual classroom conditions from a sample of 107 learners who followed a teaching unit on Darwin's theory of natural selection. Before and after the…

  15. Economic Analysis of Job-Related Attributes in Undergraduate Students' Initial Job Selection

    ERIC Educational Resources Information Center

    Jin, Yanhong H.; Mjelde, James W.; Litzenberg, Kerry K.

    2014-01-01

    Economic tradeoffs students place on location, salary, distances to natural resource amenities, size of the city where the job is located, and commuting times for their first college graduate job are estimated using a mixed logit model for a sample of Texas A&M University students. The Midwest is the least preferred area, having a mean salary…

  16. Analysing the Opportunities and Challenges to Use of Information and Communication Technology Tools in Teaching-Learning Process

    ERIC Educational Resources Information Center

    Dastjerdi, Negin Barat

    2016-01-01

    The research aims to evaluate ICT use in the teaching-learning process among students of Isfahan elementary schools. The method of this research is descriptive-surveying. The statistical population of the study was all teachers of Isfahan elementary schools. The sample size was determined to be 350 persons, selected through cluster sampling…

  17. Home and School Environments as Determinant of Social Skills Deficit among Learners with Intellectual Disability in Lagos State

    ERIC Educational Resources Information Center

    Isawumi, Oyeyinka David; Oyundoyin, John Olusegun

    2016-01-01

    The study examined home and school environmental factors as determinants of social skills deficit among learners with intellectual disability in Lagos State, Nigeria. The study adopted a survey research method using a sample size of fifty (50) pupils with intellectual disability who were purposively selected from five special primary schools in Lagos…

  18. Employment in Perspective: Women in the Labor Force. First Quarter 1988. Report 752.

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics (DOL), Washington, DC.

    A special survey on employer child-care practices conducted by the Bureau of Labor Statistics (BLS) in the summer of 1987 sampled 10,345 establishments with 10 or more employees selected from the BLS establishment universe file and classified by industry and size. The survey showed that over the last decade, the number of mothers in the labor…

  19. Assessment of Leadership Training of Head Teachers and Secondary School Performance in Mubende District, Uganda

    ERIC Educational Resources Information Center

    Benson, Kayiwa

    2011-01-01

    The purpose of the study was to establish the relationship between leadership training of head teachers and school performance in secondary schools in Mubende district, Uganda. A descriptive-correlational research design was used. Six schools out of 32 were selected, and the sample size of head teachers, teachers and student leaders was 287 out of…

  20. Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method

    NASA Astrophysics Data System (ADS)

    Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.

    2004-03-01

    A non-destructive method for the analysis of Zircaloy samples of irregular shape and size has been developed using the recently standardized k0-based internal mono standard instrumental neutron activation analysis (INAA). Samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWR) and pressurized heavy water reactors (PHWR), respectively, have been analyzed. Samples weighing a few tens of grams were irradiated in the thermal column of the Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes the in situ relative detection efficiency, obtained from the γ-rays of selected activation products in the sample, to overcome γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of the concentration of the internal mono standard. Concentrations were also determined in a smaller Zircaloy-4 sample, irradiated in the core position of the reactor, to validate the present methodology. The results were compared with literature specifications and found to be satisfactory. Sensitivities and detection limits were evaluated for the elements analyzed.
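
    The mass-balance step described above, obtaining absolute concentrations without an internal standard of known concentration because all major and minor constituents are measured, amounts to normalizing the relative elemental amounts so they sum to 100%. A minimal sketch (element values are hypothetical, not the paper's results):

```python
def mass_balance(relative_amounts):
    """Convert relative elemental amounts (arbitrary units, as from
    the internal mono standard INAA method) to absolute mass
    fractions by requiring the measured constituents to sum to 100%."""
    total = sum(relative_amounts.values())
    return {el: 100.0 * v / total for el, v in relative_amounts.items()}

# Hypothetical relative amounts for a Zircaloy-like sample:
fractions = mass_balance({"Zr": 970.0, "Sn": 14.0, "Fe": 2.0, "Cr": 1.0})
```

    The approach only works when every significant constituent is quantified; an unmeasured element would inflate all the other mass fractions.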

  1. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
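
    The evaluation protocol above, scaling a test image and then rescaling it back to the original size with the same algorithm, can be illustrated with a minimal 1-D linear interpolation (applied along each axis in turn it gives bilinear resizing; this is an illustration of the round-trip test, not the survey's code):

```python
def resize_linear(row, new_len):
    """1-D linear interpolation to new_len samples with endpoints
    aligned. Applying it along rows and then columns of a 2-D image
    gives bilinear resizing."""
    if new_len == 1:
        return [row[0]]
    scale = (len(row) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        x = i * scale
        lo = min(int(x), len(row) - 2)   # clamp so lo+1 stays in range
        frac = x - lo
        out.append(row[lo] * (1 - frac) + row[lo + 1] * frac)
    return out

# Round-trip test as in the survey: shrink, then restore the size,
# and measure the deviation from the original signal.
original = [0.0, 1.0, 2.0, 3.0, 4.0]
restored = resize_linear(resize_linear(original, 3), 5)
rmse = (sum((a - b) ** 2 for a, b in zip(original, restored))
        / len(original)) ** 0.5
```

    A linear ramp survives the round trip exactly under linear interpolation; real tissue images do not, and the residual error is what distinguishes the nine methods compared in the survey.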

  2. Determining Cutoff Point of Ensemble Trees Based on Sample Size in Predicting Clinical Dose with DNA Microarray Data.

    PubMed

    Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha

    2016-01-01

    Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from the raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided by a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified genes common to our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another result of the study was the identification of a sample size of n = 25 as a cutoff point for RT bagging to outperform a single RT.
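
    The bagging variants compared above (RT bagging, SVR bagging) share one scheme: fit the base learner on bootstrap resamples of the training set and average the predictions. A minimal sketch with a pluggable base learner (function names and the trivial learner are hypothetical):

```python
import random
import statistics

def bagged_predict(fit, X, y, x_new, n_models=25, seed=0):
    """Bootstrap aggregation (bagging) for regression: fit one base
    model per bootstrap resample of (X, y) and average their
    predictions at x_new.

    fit(X, y) must return a callable model: model(x) -> prediction."""
    rng = random.Random(seed)
    n = len(X)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]       # sample with replacement
        model = fit([X[i] for i in idx], [y[i] for i in idx])
        preds.append(model(x_new))
    return statistics.fmean(preds)

# Trivial base learner (always predicts the training mean), purely
# to show the interface; a regression tree or SVR would go here.
mean_learner = lambda X, y: (lambda x: statistics.fmean(y))
pred = bagged_predict(mean_learner, [1, 2, 3, 4],
                      [2.0, 2.0, 2.0, 2.0], x_new=2.5)
```

    Averaging over resamples mainly reduces variance, which is why the study's cutoff (bagging paying off only above roughly n = 25) is plausible: with very few samples each bootstrap model is too unstable for the average to help.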

  3. Boron Nanoparticles with High Hydrogen Loading: Mechanism for B-H Binding, Size Reduction, and Potential for Improved Combustibility and Specific Impulse

    DTIC Science & Technology

    2014-05-01

    particles in the sample. Mass spectrometry was, therefore, used to look for the signature of boranes in the milling jar headspace gas, and also in gases... headspace gas collected from the jar after milling in H2. For this experiment, argon was added to the initial gas mixture at a 12:1 H2:Ar ratio, in... Mass spectrometry analysis. After milling selected samples, headspace gas

  4. VizieR Online Data Catalog: Galaxies and QSOs FIR size and surface brightness (Lutz+, 2016)

    NASA Astrophysics Data System (ADS)

    Lutz, D.; Berta, S.; Contursi, A.; Forster Schreiber, N. M.; Genzel, R.; Gracia-Carpio, J.; Herrera-Camus, R.; Netzer, H.; Sturm, E.; Tacconi, L. J.; Tadaki, K.; Veilleux, S.

    2016-08-01

    We use 70, 100, and 160um images from scan maps obtained with PACS on board Herschel, collecting archival data from various projects. In order to cover a wide range of galaxy properties, we first obtain an IR-selected local sample ranging from normal galaxies up to (ultra)luminous infrared galaxies. For that purpose, we searched the Herschel archive for all cz>=2000km/s objects from the IRAS Revised Bright Galaxy Sample (RBGS, Sanders et al., 2003, Cat. J/AJ/126/1607). (1 data file).

  5. Chemical and Solar Electric Propulsion Systems Analyses for Mars Sample Return Missions

    NASA Technical Reports Server (NTRS)

    Donahue, Benjamin B.; Green, Shaun E.; Coverstone, Victoria L.; Woo, Byoungsam

    2004-01-01

    Conceptual in-space transfer stages, including those utilizing solar electric propulsion, chemical propulsion, and chemical propulsion with aerobraking or aerocapture assist at Mars, were evaluated. Roundtrip Mars sample return mission vehicles were analyzed to determine how specific system technology selections influence payload delivery capability. Results show how specific engine, thruster, propellant, capture mode, trip time and launch vehicle technology choices would contribute to increasing payload or decreasing the size of the required launch vehicles. Heliocentric low-thrust trajectory analyses for Solar Electric Transfer were generated with the SEPTOP code.

  6. Total selenium in irrigation drain inflows to the Salton Sea, California, April 2009

    USGS Publications Warehouse

    May, Thomas W.; Walther, Michael J.; Saiki, Michael K.; Brumbaugh, William G.

    2009-01-01

    This report presents the results for the final sampling period (April 2009) of a 4-year monitoring program to characterize selenium concentrations in selected irrigation drains flowing into the Salton Sea, California. Total selenium and total suspended solids were determined in water samples. Total selenium, percent total organic carbon, and particle size were determined in sediments. Mean total selenium concentrations in water ranged from 0.98 to 22.9 micrograms per liter. Total selenium concentrations in sediment ranged from 0.078 to 5.0 micrograms per gram dry weight.

  7. Differences in Size Selectivity and Catch Composition Between Two Bottom Trawls Used in High-Arctic Surveys of Bottom Fishes, Crabs and Other Demersal Macrofauna

    NASA Astrophysics Data System (ADS)

    Lauth, R.; Norcross, B.; Kotwicki, S.; Britt, L.

    2016-02-01

    Long-term monitoring of the high-Arctic marine biota is needed to understand how the ecosystem is changing in response to climate change, diminishing sea-ice, and increasing anthropogenic activity. Since 1959, bottom trawls (BT) have been a primary research tool for investigating fishes, crabs and other demersal macrofauna in the high-Arctic. However, sampling gears, methodologies, and the overall survey designs used have generally lacked consistency and/or have had limited spatial coverage. This has restricted the ability of scientists and managers to effectively use existing BT survey data for investigating historical trends and zoogeographic changes in high-Arctic marine populations. Two different BTs currently being used for surveying the high-Arctic are: 1) a small-mesh 3-m plumb-staff beam trawl (PSBT), and 2) a large-mesh 83-112 Eastern bottom trawl (EBT). A paired comparison study was conducted in 2012 to compare the catch composition and sampling characteristics of the two trawl gears, and a size selectivity ratio statistic was used to investigate how the probability of fish and crab retention differs between the EBT and PSBT. Obvious contrasting characteristics of the PSBT and EBT were mesh size, area swept, tow speed, and vertical opening. The finer mesh and harder bottom-tending characteristics of the PSBT retained juvenile fishes and other smaller macroinvertebrates, and it was also more efficient at catching benthic infauna just below the surface. The EBT had a larger net opening with greater tow duration at a higher speed that covered a potentially wider range of benthic habitats during a single tow, and it was more efficient at capturing larger and more mobile organisms, as well as organisms that were further off bottom. The ratio statistic indicated large differences in size selectivity between the two gears for both fish and crab. Results from this investigation will provide a framework for scientists and managers to better understand how to interpret and compare data from existing PSBT and EBT surveys in the high-Arctic, and provide information on factors worth considering when choosing which BT gear to use for a standardized long-term BT sampling program to monitor fishes, crabs and other demersal macrofauna in the high-Arctic.

  8. Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up.

    PubMed

    Tajika, Aran; Ogawa, Yusuke; Takeshima, Nozomi; Hayasaka, Yu; Furukawa, Toshi A

    2015-10-01

    Contradictions and initial overestimates are not unusual among highly cited studies. However, this issue has not been researched in psychiatry. Aims: To assess how highly cited studies in psychiatry are replicated by subsequent studies. We selected highly cited studies claiming effective psychiatric treatments in the years 2000 through 2002. For each of these studies we searched for subsequent studies with a better-controlled design, or with a similar design but a larger sample. Among 83 articles recommending effective interventions, 40 had not been subject to any attempt at replication, 16 were contradicted, 11 were found to have substantially smaller effects and only 16 were replicated. The standardised mean differences of the initial studies were overestimated by 132%. Studies with a total sample size of 100 or more tended to produce replicable results. Caution is needed when a study with a small sample size reports a large effect. © The Royal College of Psychiatrists 2015.

  9. 'Mitominis': multiplex PCR analysis of reduced size amplicons for compound sequence analysis of the entire mtDNA control region in highly degraded samples.

    PubMed

    Eichmann, Cordula; Parson, Walther

    2008-09-01

    The traditional protocol for forensic mitochondrial DNA (mtDNA) analyses involves the amplification and sequencing of the two hypervariable segments HVS-I and HVS-II of the mtDNA control region. The primers usually span fragment sizes of 300-400 bp for each region, which may result in weak or failed amplification in highly degraded samples. Here we introduce an improved and more stable approach using shortened amplicons in the fragment range of 144 to 237 bp. Ten such amplicons were required to produce overlapping fragments that cover the entire human mtDNA control region. These were co-amplified in two multiplex polymerase chain reactions and sequenced with the individual amplification primers. The primers were carefully selected to minimize binding on homoplasic and haplogroup-specific sites that would otherwise result in loss of amplification due to mis-priming. The multiplexes have been successfully applied to ancient and forensic samples, such as bones and teeth, that showed a high degree of degradation.

  10. Clutch sizes and nests of tailed frogs from the Olympic Peninsula, Washington

    USGS Publications Warehouse

    Bury, R. Bruce; Loafman, P.; Rofkar, D.; Mike, K.

    2001-01-01

    In the summers of 1995-1998, we sampled 168 streams (1,714 m of randomly selected 1-m bands) to determine the distribution and abundance of stream amphibians in Olympic National Park, Washington. We found six nests (two in one stream) of the tailed frog, compared to only two nests with clutch sizes reported earlier for coastal regions. This represents only one nest per 286 m searched and one nest per 34 streams sampled. Tailed frogs occurred in only 94 (60%) of the streams and, for these waters, we found one nest per 171 m searched or one nest per 20 streams sampled. The numbers of eggs for four masses (mean = 48.3, range 40-55) were low, but one single strand in a fifth nest had 96 eggs. One nest with 185 eggs likely represented communal egg deposition. Current evidence indicates a geographic trend, with yearly clutches of relatively few eggs in coastal tailed frogs compared to biennial nesting with larger clutches for inland populations in the Rocky Mountains.

  11. Empirical tests of harvest-induced body-size evolution along a geographic gradient in Australian macropods.

    PubMed

    Prowse, Thomas A A; Correll, Rachel A; Johnson, Christopher N; Prideaux, Gavin J; Brook, Barry W

    2015-01-01

    Life-history theory predicts the progressive dwarfing of animal populations that are subjected to chronic mortality stress, but the evolutionary impact of harvesting terrestrial herbivores has seldom been tested. In Australia, marsupials of the genus Macropus (kangaroos and wallabies) are subjected to size-selective commercial harvesting. Mathematical modelling suggests that harvest quotas (c. 10-20% of population estimates annually) could be driving body-size evolution in these species. We tested this hypothesis for three harvested macropod species with continental-scale distributions. To do so, we measured more than 2000 macropod skulls sourced from wildlife collections spanning the last 130 years. We analysed these data using spatial Bayesian models that controlled for the age and sex of specimens as well as environmental drivers and island effects. We found no evidence for the hypothesized decline in body size for any species; rather, models that fit trend terms supported minor body size increases over time. This apparently counterintuitive result is consistent with reduced mortality due to a depauperate predator guild and increased primary productivity of grassland vegetation following European settlement in Australia. Spatial patterns in macropod body size supported the heat dissipation limit and productivity hypotheses proposed to explain geographic body-size variation (i.e. skull size increased with decreasing summer maximum temperature and increasing rainfall, respectively). There is no empirical evidence that size-selective harvesting has driven the evolution of smaller body size in Australian macropods. Bayesian models are appropriate for investigating the long-term impact of human harvesting because they can impute missing data, fit nonlinear growth models and account for non-random spatial sampling inherent in wildlife collections. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.

  12. Multi-class computational evolution: development, benchmark evaluation and application to RNA-Seq biomarker discovery.

    PubMed

    Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I

    2017-01-01

    A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy against model complexity when evolving classifiers, so a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs had been developed only for binary-class datasets. In this study, we developed a multi-class CES and compared it to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of the selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, a unique advantage given that the accuracy of most classification methods suffers when sample size is small. The multi-class extension of CES increases the appeal of applying it to complex, multi-class datasets to identify important biomarkers and features.
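
    Feature-set stability is scored here with the Tanimoto distance. Assuming its usual set form (one minus the Jaccard similarity; the paper may use an equivalent formulation), a minimal sketch with hypothetical gene names:

```python
from itertools import combinations

def tanimoto_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| for two feature sets (Jaccard distance):
    0 means identical selections, 1 means disjoint selections."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def mean_pairwise_distance(feature_sets):
    """Average Tanimoto distance over all pairs of runs; lower = more stable."""
    pairs = list(combinations(feature_sets, 2))
    return sum(tanimoto_distance(a, b) for a, b in pairs) / len(pairs)

# Feature sets selected on three hypothetical training runs.
runs = [{"GeneA", "GeneB", "GeneC"},
        {"GeneA", "GeneB", "GeneD"},
        {"GeneA", "GeneC", "GeneD"}]
print(mean_pairwise_distance(runs))  # 0.5
```

    Averaging over all run pairs gives a single stability score per algorithm, directly comparable across SVM, RKNN, RF and CES.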

  13. How to infer relative fitness from a sample of genomic sequences.

    PubMed

    Dayarian, Adel; Shraiman, Boris I

    2014-07-01

    Mounting evidence suggests that natural populations can harbor extensive fitness diversity, with numerous genomic loci under selection. It is also known that genealogical trees for populations under selection are quantifiably different from those expected under neutral evolution and described statistically by Kingman's coalescent. While differences in the statistical structure of genealogies have long been used as a test for the presence of selection, the full extent of the information they contain has not been exploited. Here we demonstrate that the shape of the reconstructed genealogical tree for a moderately large number of random genomic samples taken from a fitness-diverse, but otherwise unstructured, asexual population can be used to predict the relative fitness of individuals within the sample. To achieve this we define a heuristic algorithm, which we test in silico using simulations of a Wright-Fisher model for a realistic range of mutation rates and selection strengths. Our inferred fitness ranking is based on a linear discriminator that identifies rapidly coalescing lineages in the reconstructed tree. Inferred fitness ranking correlates strongly with actual fitness: a genome ranked in the top 10% falls among the fittest 20% with a false discovery rate of 0.1-0.3, depending on the mutation/selection parameters. The ranking also enables us to predict the genotypes that future populations will inherit from the present one. While the inference accuracy increases monotonically with sample size, samples of 200 nearly saturate the performance. We propose that our approach can be used for inferring the relative fitness of genomes obtained in single-cell sequencing of tumors and in monitoring viral outbreaks. Copyright © 2014 by the Genetics Society of America.
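
    The in-silico testing described above rests on Wright-Fisher simulations of an asexual, fitness-diverse population. A minimal sketch of such a model is below; parameters and the fitness form are illustrative assumptions, and this is not the authors' inference algorithm, only the kind of forward simulation it would be tested against:

```python
import random

def wright_fisher(n=500, loci=50, s=0.01, mu=1e-3, generations=200, seed=1):
    """Minimal asexual Wright-Fisher model. Each genome is summarized by its
    count k of deleterious mutations with multiplicative fitness (1 - s)^k.
    Each generation: sample n parents proportionally to fitness, then add
    new mutations at rate mu per locus."""
    rng = random.Random(seed)
    pop = [0] * n  # mutation count per individual
    for _ in range(generations):
        weights = [(1 - s) ** k for k in pop]
        parents = rng.choices(pop, weights=weights, k=n)
        pop = [k + sum(rng.random() < mu for _ in range(loci)) for k in parents]
    return pop

pop = wright_fisher()
print(min(pop), max(pop))  # a spread of fitness classes coexists
```

    Under these assumed parameters the population settles near mutation-selection balance, so a sample drawn from it carries the fitness diversity that the tree-shape inference exploits.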

  14. Genetic Structure, Linkage Disequilibrium and Signature of Selection in Sorghum: Lessons from Physically Anchored DArT Markers

    PubMed Central

    Bouchet, Sophie; Pot, David; Deu, Monique; Rami, Jean-François; Billot, Claire; Perrier, Xavier; Rivallan, Ronan; Gardes, Laëtitia; Xia, Ling; Wenzl, Peter; Kilian, Andrzej; Glaszmann, Jean-Christophe

    2012-01-01

    Population structure, the extent of linkage disequilibrium (LD) and signatures of selection were investigated in sorghum using a core sample representative of worldwide diversity. A total of 177 accessions were genotyped with 1122 informative, physically anchored DArT markers. The properties of DArTs for describing sorghum genetic structure were compared to those of SSRs and of previously published RFLP markers. Model-based (STRUCTURE software) and Neighbor-Joining diversity analyses led to the identification of 6 groups and confirmed previous evolutionary hypotheses. Results were globally consistent between the different marker systems; however, DArTs appeared more robust in terms of data resolution and Bayesian group assignment. Whole-genome linkage disequilibrium as measured by mean r2 decreased from 0.18 (0-10 kb) to 0.03 (100 kb-1 Mb), stabilizing at 0.03 beyond 1 Mb. The effects of sample size and genetic structure on LD estimates were tested using (i) random sampling, (ii) the Maximum Length SubTree (MLST) algorithm, and (iii) structure groups. Optimizing population composition with the MLST reduced the biases in small samples and appears to be an efficient way of selecting samples to make the best use of LD as a genome-mapping approach in structured populations. These results also suggested that more than 100,000 markers may be required to perform genome-wide association studies in collections covering worldwide sorghum diversity. Analysis of DArT marker differentiation between the identified genetic groups pointed to outlier loci potentially linked to genes controlling traits of interest, including disease resistance genes for which evidence of selection had already been reported. In addition, evidence of selection near a homologous locus of FAR1 concurred with sorghum phenotypic diversity for sensitivity to photoperiod. PMID:22428056
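
    LD here is summarized by mean r2 between marker pairs. For haplotype-coded biallelic markers, r2 is the squared Pearson correlation of the 0/1 allele vectors, equivalently D²/(pA(1-pA)pB(1-pB)). A toy sketch with illustrative genotypes (not the DArT data):

```python
def ld_r2(x, y):
    """r^2 between two biallelic markers coded 0/1 on haplotypes:
    D^2 / (pA(1-pA) pB(1-pB)), the squared Pearson correlation."""
    n = len(x)
    px, py = sum(x) / n, sum(y) / n
    pxy = sum(a * b for a, b in zip(x, y)) / n  # joint frequency of (1, 1)
    d = pxy - px * py                           # disequilibrium coefficient D
    return d * d / (px * (1 - px) * py * (1 - py))

# Toy haplotypes at two linked markers: alleles mostly co-inherited.
m1 = [1, 1, 1, 1, 0, 0, 0, 0]
m2 = [1, 1, 1, 0, 0, 0, 0, 1]
print(ld_r2(m1, m2))  # 0.25
```

    Averaging such pairwise r2 values within physical-distance bins (0-10 kb, 100 kb-1 Mb, ...) produces the decay curve reported in the abstract.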

  15. Genetic structure, linkage disequilibrium and signature of selection in Sorghum: lessons from physically anchored DArT markers.

    PubMed

    Bouchet, Sophie; Pot, David; Deu, Monique; Rami, Jean-François; Billot, Claire; Perrier, Xavier; Rivallan, Ronan; Gardes, Laëtitia; Xia, Ling; Wenzl, Peter; Kilian, Andrzej; Glaszmann, Jean-Christophe

    2012-01-01

    Population structure, the extent of linkage disequilibrium (LD) and signatures of selection were investigated in sorghum using a core sample representative of worldwide diversity. A total of 177 accessions were genotyped with 1122 informative, physically anchored DArT markers. The properties of DArTs for describing sorghum genetic structure were compared to those of SSRs and of previously published RFLP markers. Model-based (STRUCTURE software) and Neighbor-Joining diversity analyses led to the identification of 6 groups and confirmed previous evolutionary hypotheses. Results were globally consistent between the different marker systems; however, DArTs appeared more robust in terms of data resolution and Bayesian group assignment. Whole-genome linkage disequilibrium as measured by mean r2 decreased from 0.18 (0-10 kb) to 0.03 (100 kb-1 Mb), stabilizing at 0.03 beyond 1 Mb. The effects of sample size and genetic structure on LD estimates were tested using (i) random sampling, (ii) the Maximum Length SubTree (MLST) algorithm, and (iii) structure groups. Optimizing population composition with the MLST reduced the biases in small samples and appears to be an efficient way of selecting samples to make the best use of LD as a genome-mapping approach in structured populations. These results also suggested that more than 100,000 markers may be required to perform genome-wide association studies in collections covering worldwide sorghum diversity. Analysis of DArT marker differentiation between the identified genetic groups pointed to outlier loci potentially linked to genes controlling traits of interest, including disease resistance genes for which evidence of selection had already been reported. In addition, evidence of selection near a homologous locus of FAR1 concurred with sorghum phenotypic diversity for sensitivity to photoperiod.

  16. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact, but their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system subjected to a selection rule that favors the rare trajectories of interest. Such algorithms are, however, plagued by finite-simulation-time and finite-population-size effects that can make their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach that allows the infinite-time and infinite-size limit of these estimators to be extracted.
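
    The cloning idea (many copies of the system plus a selection rule favoring rare trajectories) can be illustrated on a two-state Markov chain, where the scaled cumulant generating function lambda(s) is also known exactly as the log of the largest eigenvalue of a tilted transition matrix. The sketch below is a discrete-time toy analogue, not the continuous-time algorithm analyzed in the paper, and all parameter values are made up:

```python
import math
import random

def exact_scgf(P, s):
    """lambda(s) = ln(largest eigenvalue of the tilted matrix
    Pt[i][j] = P[i][j] * exp(s * 1{j == 0})), via power iteration."""
    Pt = [[P[i][j] * math.exp(s * (j == 0)) for j in range(2)] for i in range(2)]
    v = [1.0, 1.0]
    for _ in range(200):
        w = [Pt[0][0] * v[0] + Pt[0][1] * v[1],
             Pt[1][0] * v[0] + Pt[1][1] * v[1]]
        v = [x / max(w) for x in w]
    w = [Pt[0][0] * v[0] + Pt[0][1] * v[1],
         Pt[1][0] * v[0] + Pt[1][1] * v[1]]
    return math.log(w[0] / v[0])

def cloning_scgf(P, s, n_clones=1000, t_max=1000, seed=0):
    """Finite-population cloning estimate of lambda(s): propagate n_clones
    copies, weight each step by exp(s * 1{new state == 0}), resample copies
    proportionally to their weights, and accumulate ln(mean weight)."""
    rng = random.Random(seed)
    states = [0] * n_clones
    log_sum = 0.0
    for _ in range(t_max):
        moved, weights = [], []
        for x in states:
            y = 0 if rng.random() < P[x][0] else 1
            moved.append(y)
            weights.append(math.exp(s * (y == 0)))
        log_sum += math.log(sum(weights) / n_clones)
        states = rng.choices(moved, weights=weights, k=n_clones)
    return log_sum / t_max

P = [[0.6, 0.4], [0.3, 0.7]]  # hypothetical transition matrix
print(exact_scgf(P, 0.5), cloning_scgf(P, 0.5))
```

    Shrinking n_clones or t_max biases the cloning estimate away from the exact value, which is precisely the kind of finite-size and finite-time effect whose scalings the paper analyzes.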

  17. A combined Settling Tube-Photometer for rapid measurement of effective sediment particle size

    NASA Astrophysics Data System (ADS)

    Kuhn, Nikolaus J.; Kuhn, Brigitte; Rüegg, Hans-Rudolf; Zimmermann, Lukas

    2017-04-01

    Sediment and its movement in water are commonly described by the size distribution of the mineral particles forming the sediment. While this approach works for coarse sand, pebbles and gravel, smaller particles often form aggregates, creating material of larger diameter than the mineral grain-size distribution indicates, but of lower density than the often-assumed 2.65 g cm-3 of quartz. Measuring the actual size and density of such aggregated sediment is difficult. For assessing sediment movement, an effective particle size for use in mathematical models can be derived from the settling velocity of the sediment. Settling velocity is commonly measured in settling tubes, which fractionate the sample into settling-velocity classes by sampling material at the base at selected time intervals. This process takes up to several hours, requires a laboratory setting, and carries the risk of either destruction of aggregates during transport or coagulation while the sample sits in rather still water. Measuring the velocity of settling particles in situ, or at least rapidly after collection, could avoid these problems. In this study, a settling tube equipped with four photometers that measure the darkening caused by a settling particle cloud is presented, and its potential to improve the measurement of settling velocities is discussed.
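
    Converting a measured settling velocity into an effective particle size commonly assumes laminar (Stokes) settling of a sphere, w = (rho_p - rho_f) g d^2 / (18 mu). A minimal sketch, defaulting to quartz density; the abstract's point is exactly that aggregates violate this density assumption:

```python
def stokes_diameter(w, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Effective (Stokes) diameter in metres of a sphere settling at
    velocity w (m/s) in water, inverting w = (rho_p - rho_f) g d^2 / (18 mu).
    Defaults: quartz density, fresh water at ~20 C; valid only for the
    laminar (low Reynolds number) settling regime."""
    return ((18.0 * mu * w) / ((rho_p - rho_f) * g)) ** 0.5

# A quartz-density sphere settling at ~9 mm/s:
d = stokes_diameter(8.99e-3)
print(round(d * 1e6), "micrometres")  # ~100 micrometres
```

    Re-running with a lower aggregate density (e.g. rho_p=1500) yields a substantially larger effective diameter for the same settling velocity, which is why assuming quartz density misrepresents aggregated sediment.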

  18. Population genetics of the cytoplasm and the units of selection on mitochondrial DNA in Drosophila melanogaster

    PubMed Central

    2011-01-01

    Biological variation exists across a nested set of hierarchical levels from nucleotides within genes to populations within species to lineages within the tree of life. How selection acts across this hierarchy is a long-standing question in evolutionary biology. Recent studies have suggested that genome size is influenced largely by the balance of selection, mutation and drift in lineages with different population sizes. Here we use population cage and maternal transmission experiments to identify the relative strength of selection at an individual and cytoplasmic level. No significant trends were observed in the frequency of large (L) and small (S) mtDNAs across 14 generations in population cages. In all replicate cages, new length variants were observed in heteroplasmic states indicating that spontaneous length mutations occurred in these experimental populations. Heteroplasmic flies carrying L genomes were more frequent than those carrying S genomes suggesting an asymmetric mutation dynamic from larger to smaller mtDNAs. Mother-offspring transmission of heteroplasmy showed that the L mtDNA increased in frequency within flies both between and within generations despite sampling drift of the same intensity as occurred in population cages. These results suggest that selection for mtDNA size is stronger at the cytoplasmic than at the organismal level. The fixation of novel mtDNAs within and between species requires a transient intracellular heteroplasmic stage. The balance of population genetic forces at the cytoplasmic and individual levels governs the units of selection on mtDNA, and has implications for evolutionary inference as well as for the effects of mtDNA mutations on fitness, disease and aging. PMID:21538136

  19. Selective interactions of trivalent cations Fe³⁺, Al³⁺ and Cr³⁺ turn on fluorescence in a naphthalimide based single molecular probe.

    PubMed

    Janakipriya, Subramaniyan; Chereddy, Narendra Reddy; Korrapati, Purnasai; Thennarasu, Sathiah; Mandal, Asit Baran

    2016-01-15

    The synthesis and fluorescence turn-on behavior of a naphthalimide-based probe are described. Selective interactions of the trivalent cations Fe(3+), Al(3+) or Cr(3+) with probe 1 inhibit the PET process operating in the probe and thereby permit the detection of these trivalent cations in aqueous samples and live cells. The failure of other trivalent cations (Eu(3+), Gd(3+) and Nb(3+)) to inhibit the PET process in 1 demonstrates the role of chelating ring size vis-à-vis ionic radius in the selective recognition of specific metal ions. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Testing for post-copulatory selection for major histocompatibility complex genotype in a semi-free-ranging primate population.

    PubMed

    Setchell, Joanna M; Abbott, Kristin M; Gonzalez, Jean-Paul; Knapp, Leslie A

    2013-10-01

    A large body of evidence suggests that major histocompatibility complex (MHC) genotype influences mate choice. However, few studies have investigated MHC-mediated post-copulatory mate choice under natural, or even semi-natural, conditions. We set out to explore this question in a large semi-free-ranging population of mandrills (Mandrillus sphinx) using MHC-DRB genotypes for 127 parent-offspring triads. First, we showed that offspring MHC heterozygosity correlates positively with parental MHC dissimilarity suggesting that mating among MHC dissimilar mates is efficient in increasing offspring MHC diversity. Second, we compared the haplotypes of the parental dyad with those of the offspring to test whether post-copulatory sexual selection favored offspring with two different MHC haplotypes, more diverse gamete combinations, or greater within-haplotype diversity. Limited statistical power meant that we could only detect medium or large effect sizes. Nevertheless, we found no evidence for selection for heterozygous offspring when parents share a haplotype (large effect size), genetic dissimilarity between parental haplotypes (we could detect an odds ratio of ≥1.86), or within-haplotype diversity (medium-large effect). These findings suggest that comparing parental and offspring haplotypes may be a useful approach to test for post-copulatory selection when matings cannot be observed, as is the case in many study systems. However, it will be extremely difficult to determine conclusively whether post-copulatory selection mechanisms for MHC genotype exist, particularly if the effect sizes are small, due to the difficulty in obtaining a sufficiently large sample. © 2013 Wiley Periodicals, Inc.
