Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probabilities of Type 1 and Type 2 errors, the expected variance in the sample, and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or a larger power will increase the sample size. Conventional acceptable values when calculating sample size are 80% or above for power and 5% or below for α. Greater variance in the sample increases the sample size required to achieve a given power. The effect size is the smallest clinically important difference that is sought to be detected and, rather than a statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected.
Although the principles have long been known, sample size determination has historically been difficult because of relatively complex mathematical considerations and numerous different formulas. Of late, however, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many packages can execute routines for determining sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining the appropriate sample size and achieving it, so that study conclusions can be accepted as meaningful. PMID:27688437
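The determinants listed in this module (α, power, effect size Δ, and variance σ²) combine in the standard normal-approximation formula for a two-sample comparison of means, n per group = 2(z₁₋α/₂ + z_power)²σ²/Δ². A minimal sketch in Python (the function name is ours, not from the module):

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means
    (normal approximation): n = 2 * (z_{1-a/2} + z_power)^2 * sigma^2 / delta^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # smaller alpha -> larger z -> larger n
    z_beta = norm.ppf(power)            # larger power  -> larger z -> larger n
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

print(n_per_group(delta=5, sigma=10))  # 63 per group for a 0.5-SD standardized effect
```

Halving the detectable difference roughly quadruples the required n, matching the statement that larger samples are needed to detect smaller differences.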
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
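The blinded one-sample variance estimator pools the interim responses from both arms without using treatment labels. A hypothetical sketch of the re-estimation step (the function name and planning values are ours, not from the paper):

```python
import math
import numpy as np
from scipy.stats import norm

def blinded_reestimate(interim_pooled, delta_plan, alpha=0.05, power=0.80):
    """Re-estimate the per-arm n from blinded interim data: the one-sample
    variance of the pooled responses stands in for the within-arm variance."""
    s2 = np.var(interim_pooled, ddof=1)  # blinded: treatment labels unused
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_new = math.ceil(2 * z ** 2 * s2 / delta_plan ** 2)
    return max(n_new, len(interim_pooled) // 2)  # never drop below interim size
```

Note that the blinded estimator is biased upward under the alternative (it absorbs the between-arm mean difference), which is one reason the exact distribution of the final t-statistic deviates from the nominal one.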
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Simulated Rasch-model-fitting data for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes of around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N < 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
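The pilot-update step described above can be sketched schematically: re-estimate the prevalence and the outcome variance from the pilot, then scale the number of diseased subjects needed up by the inverse prevalence. This is our own illustrative simplification, not the authors' algorithm (which also adjusts the critical value for small samples):

```python
import math
from scipy.stats import norm

def pilot_update(n_diseased_seen, n_pilot, s2_hat, delta, alpha=0.05, power=0.80):
    """Schematic internal-pilot update: plug the pilot estimates of disease
    prevalence and outcome variance into a standard two-group formula,
    then inflate the total N by 1/prevalence."""
    prev_hat = n_diseased_seen / n_pilot
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_diseased_needed = 2 * z ** 2 * s2_hat / delta ** 2
    return math.ceil(n_diseased_needed / prev_hat)  # scale up for prevalence
```

For example, a pilot observing 10 diseased among 100 screened, with unit outcome variance and a target difference of 0.5, implies a total N an order of magnitude above the diseased-only requirement.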
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
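The comparison described can be reproduced in outline with a small Monte Carlo. This sketch (our own, not the authors' code) estimates the empirical Type I error of the four tests under equal true proportions; the likelihood ratio test is obtained via SciPy's `lambda_="log-likelihood"` option:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

rng = np.random.default_rng(0)

def type1_rates(n_per_group=15, p=0.3, n_sim=1000, alpha=0.05):
    """Empirical Type I error of four 2x2 tests under H0: equal proportions."""
    rej = {"pearson": 0, "yates": 0, "lr": 0, "fisher": 0}
    for _ in range(n_sim):
        a = rng.binomial(n_per_group, p)
        b = rng.binomial(n_per_group, p)
        t = [[a, n_per_group - a], [b, n_per_group - b]]
        try:
            rej["pearson"] += chi2_contingency(t, correction=False)[1] < alpha
            rej["yates"] += chi2_contingency(t, correction=True)[1] < alpha
            rej["lr"] += chi2_contingency(t, correction=False,
                                          lambda_="log-likelihood")[1] < alpha
        except ValueError:  # a zero margin makes an expected count zero
            pass
        rej["fisher"] += fisher_exact(t)[1] < alpha
    return {k: v / n_sim for k, v in rej.items()}
```

With small per-group sizes the Yates-corrected and Fisher tests are typically conservative (rejection rate below the nominal 5%), while the uncorrected Pearson test sits closer to, or above, it.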
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
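Two pieces of the setup above can be sketched concretely. Yuen's trimmed-means test is available in SciPy (version 1.7 or later) through the `trim` argument of `ttest_ind`, and a classical cost-optimal allocation ratio (the standard Neyman-type result, not necessarily the authors' exact formula) is n1/n2 = (σ1/σ2)·√(c2/c1):

```python
import numpy as np
from scipy.stats import ttest_ind

def cost_optimal_ratio(s1, s2, c1, c2):
    """Allocation ratio n1/n2 that minimizes total cost c1*n1 + c2*n2
    at fixed precision (Neyman-type allocation)."""
    return (s1 / s2) * np.sqrt(c2 / c1)

# Yuen's test: 20% trimmed means, robust to heavy tails and unequal variances
rng = np.random.default_rng(7)
a = rng.standard_t(df=3, size=40)        # heavy-tailed group 1
b = 2.0 * rng.standard_t(df=3, size=60)  # heteroscedastic group 2
stat, p = ttest_ind(a, b, trim=0.2, equal_var=False)
```

A group that is twice as variable but four times cheaper to sample should, under this rule, receive four times the sample size of the other.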
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowledge of the right sample size lets us judge whether the results published in medical papers come from a suitable design and support a proper conclusion according to the statistical analysis. To estimate the sample size we must consider the Type I error, the Type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula should be used, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Hühn, M; Piepho, H P
2003-03-01
Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
Bootstrapping is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculation by mathematical formulas (under the normal distribution assumption) was also carried out for the same data. The power difference between the two calculation methods proved acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the features of these data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the outset, and that the same statistical method as used in the subsequent analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial intends to extrapolate.
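The bootstrap procedure described can be sketched as follows (our own minimal version): resample a representative pilot dataset at a candidate per-arm n, apply the planned test to each bootstrap trial, and take the rejection fraction as the power estimate; the smallest n reaching the target power is the sample size estimate.

```python
import numpy as np
from scipy.stats import ttest_ind

def bootstrap_power(pilot_a, pilot_b, n_per_arm, alpha=0.05,
                    n_boot=1000, seed=0):
    """Estimate power at n_per_arm by resampling pilot data with
    replacement and applying the planned test to each bootstrap trial."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        hits += ttest_ind(a, b)[1] < alpha
    return hits / n_boot
```

For non-normal data, replacing `ttest_ind` with `scipy.stats.mannwhitneyu` (the Wilcoxon rank-sum test) mirrors the abstract's recommendation to use the same statistical method inside the bootstrap as in the final analysis.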
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation included in full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected in biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, the distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard-error-of-the-mean method. Increasing the sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
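The kind of computer-simulated experiment described can be reproduced in a few lines; this is our own sketch with arbitrary planning values, using the two-sample t-test as the comparison method:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def empirical_error(n, effect, n_sim=2000, alpha=0.05):
    """Type I error when effect == 0, Type II error otherwise,
    for the two-sample t-test at per-group size n."""
    rej = np.mean([ttest_ind(rng.normal(0.0, 1.0, n),
                             rng.normal(effect, 1.0, n))[1] < alpha
                   for _ in range(n_sim)])
    return rej if effect == 0 else 1.0 - rej
```

At n = 9 the Type I error sits near the nominal 5%, and strong effects (around 2 SD) are detected with Type II error well below 20%, consistent with the paper's minimal-n recommendation.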
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical information. One common approach is to include the physical information in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples of two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Influence of tree spatial pattern and sample plot type and size on inventory
John-Pascall Berrill; Kevin L. O' Hara
2012-01-01
Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired Type I error, power, and effect size. How to use the tables is also discussed. PMID:27891446
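Tables of this kind typically rest on the standard precision-based formula (often attributed to Buderer): the number of diseased subjects needed is z²·Se(1−Se)/d² for confidence-interval half-width d, and dividing by the disease prevalence gives the total number to screen. A sketch under that assumption (the function name is ours; the paper's PASS-based derivation may differ in detail):

```python
import math
from scipy.stats import norm

def n_total_for_sensitivity(se, d, prevalence, alpha=0.05):
    """Total N to screen so the CI half-width around the expected
    sensitivity is d. For specificity, replace se with the expected
    specificity and prevalence with (1 - prevalence)."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z ** 2 * se * (1 - se) / d ** 2
    return math.ceil(n_diseased / prevalence)

print(n_total_for_sensitivity(0.90, 0.05, 0.20))  # 692 subjects
```

The 1/prevalence inflation is why screening studies of rare diseases need very large totals even when the per-case precision requirement is modest.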
Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Pescovitz, Mark D.; Gottlieb, Peter; Skyler, Jay
2011-01-01
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo.
These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed, and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables, such as changes in blood pressure or weight, are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described, and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the ways in which the data for these calculations may be derived are discussed.
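The per-group sample size relationship such reviews build on, for comparing two means under a normal approximation, can be sketched as follows. This is a generic textbook formula, not code from the paper; the function name and defaults are illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of
    means: n = 2 * sd^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # z_{1-beta}
    return math.ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

# Detecting a difference of 0.5 SD with 80% power at two-sided alpha = 0.05
print(n_per_group(delta=0.5, sd=1.0))  # 63 per group
```

The formula makes the determinants listed in the overview explicit: smaller α, larger power, larger variance, or a smaller effect size δ all increase n.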
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
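The "conditional power exceeds 50%" criterion can be made concrete with the standard conditional-power-under-current-trend formula for a normally distributed test statistic. This is a generic sketch of that well-known quantity, not the specific criterion derived in the cited papers:

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, t, alpha=0.05):
    """Conditional power at information fraction t, assuming the effect
    estimated from the interim data continues to the end of the trial.
    z1 is the interim z-statistic; the final test is two-sided at alpha.
    Using B-values, B(t) = z1*sqrt(t) and the estimated drift is z1/sqrt(t),
    which simplifies to the expression below."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf((z_crit - z1 / sqrt(t)) / sqrt(1 - t))
```

Under this formula the interim result is "promising" (conditional power above 50%) exactly when z1 exceeds z_crit·√t, illustrating how the promise criterion can be re-expressed in terms of the interim test statistic.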
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stopping-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
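The alpha spending functions compared in work like this have simple closed forms. A minimal sketch of the two classic Lan-DeMets-type spending functions (the O'Brien-Fleming-like and Pocock-like forms; function names are ours) shows why the O'Brien-Fleming shape is the more conservative early on:

```python
from math import sqrt, log, e
from statistics import NormalDist

def obf_spend(t, alpha=0.05):
    """O'Brien-Fleming-type spending function: cumulative alpha spent
    by information fraction t; very little alpha is spent early."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / sqrt(t)))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending function: alpha is spent more evenly."""
    return alpha * log(1 + (e - 1) * t)

# Cumulative alpha spent at an interim look halfway through the trial
print(round(obf_spend(0.5), 4), round(pocock_spend(0.5), 4))
```

Both functions spend the full alpha at t = 1, but at t = 0.5 the O'Brien-Fleming form has spent only a small fraction of it, matching the abstract's observation that it is the least likely to reject at an early stage.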
Hajeb, Parvaneh; Herrmann, Susan S; Poulsen, Mette E
2017-07-19
The guidance document SANTE 11945/2015 recommends that cereal samples be milled to a particle size preferably smaller than 1.0 mm and that extensive heating of the samples should be avoided. The aim of the present study was therefore to investigate the differences in milling procedures, obtained particle size distributions, and the resulting pesticide residue recovery when cereal samples were milled at the European Union National Reference Laboratories (NRLs) with their routine milling procedures. A total of 23 NRLs participated in the study. The oat and rye samples milled by each NRL were sent to the European Union Reference Laboratory on Cereals and Feedingstuff (EURL) for the determination of the particle size distribution and pesticide residue recovery. The results showed that the NRLs used several different brands and types of mills. Large variations in the particle size distributions and pesticide extraction efficiencies were observed even between samples milled by the same type of mill.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
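The key feature of co-primary endpoints is that superiority must be shown on both, which deflates power without requiring any alpha adjustment. A simplified single-stage sketch (independent normal endpoints, not the group-sequential machinery of the paper; all names are ours) finds the smallest per-group n by direct search:

```python
from math import sqrt
from statistics import NormalDist

def marginal_power(n, delta, sd, z_crit):
    """Power of a two-sided, two-sample z-test with n per group."""
    return NormalDist().cdf(delta / (sd * sqrt(2 / n)) - z_crit)

def n_coprimary(delta1, sd1, delta2, sd2, alpha=0.05, power=0.80):
    """Smallest per-group n with joint power >= target when BOTH
    endpoints must be significant (co-primary), assuming independent
    endpoints. Each test is run at the full alpha: no multiplicity
    adjustment is needed because both hypotheses must be rejected."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    n = 2
    while (marginal_power(n, delta1, sd1, z_crit)
           * marginal_power(n, delta2, sd2, z_crit)) < power:
        n += 1
    return n
```

For two endpoints each with a standardized effect of 0.5, the search gives 83 per group versus 63 for a single endpoint: the joint-success requirement, not the critical value, is what drives the sample size up.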
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
Sample size calculation for a proof of concept study.
Yin, Yin
2002-05-01
Sample size calculation is vital for a confirmatory clinical trial, since the regulatory agencies require the probability of making a Type I error to be small, usually less than 0.05 or 0.025. However, the importance of the sample size calculation for studies conducted by a pharmaceutical company for internal decision making, e.g., a proof of concept (PoC) study, has not received enough attention. This article introduces a Bayesian method that identifies the information required for planning a PoC and the process of sample size calculation. The results will be presented in terms of the relationships between the regulatory requirements, the probability of reaching the regulatory requirements, the goalpost for PoC, and the sample size used for PoC.
"Adultspan" Publication Patterns: Author and Article Characteristics from 1999 to 2009
ERIC Educational Resources Information Center
Erford, Bradley T.; Clark, Kelly H.; Erford, Breann M.
2011-01-01
Publication patterns of articles in "Adultspan" from 1999 to 2009 were reviewed. Author characteristics and article content were analyzed to determine trends over time. Research articles were analyzed specifically for type of research design, classification, sampling method, types of participants, sample size, types of statistics used, and…
Analysis of ²³⁹Pu and ²⁴¹Am in NAEG large-sized bovine samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Major, W.J.; Lee, K.D.; Wessman, R.A.
Methods are described for the analysis of environmental levels of ²³⁹Pu and ²⁴¹Am in large-sized bovine samples. Special procedure modifications to overcome the complexities of sample preparation and analyses, and special techniques employed to prepare and analyze different types of bovine samples, such as muscle, blood, liver, and bone, are discussed. (CH)
Accuracy assessment of percent canopy cover, cover type, and size class
H. T. Schreuder; S. Bain; R. C. Czaplewski
2003-01-01
Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
Effects of plot size on forest-type algorithm accuracy
James A. Westfall
2009-01-01
The Forest Inventory and Analysis (FIA) program utilizes an algorithm to consistently determine the forest type for forested conditions on sample plots. Forest type is determined from tree size and species information. Thus, the accuracy of results is often dependent on the number of trees present, which is highly correlated with plot area. This research examines the...
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions.
The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes for about three measurement occasions and for large sample sizes for about nine measurement occasions.
USDA-ARS?s Scientific Manuscript database
Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulford, Roberta Nancy
Particle sizes determined for a single lot of incoming Russian fuel and for a lot of fuel after aqueous processing are compared with particle sizes measured on fuel after ball-milling. The single samples of each type are believed to have particle size distributions typical of oxide from similar lots, as the processing of fuel lots is fairly uniform. Variation between lots is, as yet, uncharacterized. Sampling and particle size measurement methods are discussed elsewhere.
Morphology and FT IR spectra of porous silicon
NASA Astrophysics Data System (ADS)
Kopani, Martin; Mikula, Milan; Kosnac, Daniel; Gregus, Jan; Pincik, Emil
2017-12-01
The morphology and chemical bonds of p-type and n-type porous Si were compared. The surface of the n-type sample is smooth and homogeneous, without any features. The surface of the p-type sample reveals micrometer-sized islands. FTIR investigation reveals varying distributions of SiOxHy complexes in both p- and n-type samples. From the conditions leading to porous silicon layer formation (the presence of holes) we suggest both SiOxHy and SiFxHy complexes in the layer.
Neighborhood size of training data influences soil map disaggregation
USDA-ARS?s Scientific Manuscript database
Soil class mapping relies on the ability of sample locations to represent portions of the landscape with similar soil types; however, most digital soil mapping (DSM) approaches intersect sample locations with one raster pixel per covariate layer regardless of pixel size. This approach does not take ...
1980-05-01
Samples were collected along transects extending approximately 16 kilometers from the mouth of Grays Harbor. Sub-samples were taken for grain size analysis and wood content. The samples were then washed on a 1.0 mm screen to separate benthic organisms from non-living materials. Consideration of the grain size analysis …
Kristin Bunte; Steven R. Abt
2001-01-01
This document provides guidance for sampling surface and subsurface sediment from wadable gravel-and cobble-bed streams. After a short introduction to streams types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, Type I and II error probabilities, and the coefficients of...
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
NASA Astrophysics Data System (ADS)
Jiang, Chengpeng; Fan, Xi'an; Hu, Jie; Feng, Bo; Xiang, Qiusheng; Li, Guangqiang; Li, Yawei; He, Zhu
2018-04-01
During the past few decades, Bi2Te3-based alloys have been investigated extensively because of their promising application in the area of low temperature waste heat thermoelectric power generation. However, their thermal stability must be evaluated to explore the appropriate service temperature. In this work, the thermal stability of zone melting p-type (Bi, Sb)2Te3-based ingots was investigated under different annealing treatment conditions. The effect of service temperature on the thermoelectric properties and hardness of the samples was also discussed in detail. The results showed that the grain size, density, dimension size and mass remained nearly unchanged when the service temperature was below 523 K, which suggested that the geometry size of zone melting p-type (Bi, Sb)2Te3-based materials was stable below 523 K. The power factor and Vickers hardness of the ingots also changed little and maintained good thermal stability. Unfortunately, the thermal conductivity increased with increasing annealing temperature, which resulted in an obvious decrease of the zT value. In addition, the thermal stabilities of the zone melting p-type (Bi, Sb)2Te3-based materials and the corresponding powder metallurgy samples were also compared. All evidence implied that the thermal stabilities of the zone-melted (ZMed) p-type (Bi, Sb)2Te3 ingots in terms of crystal structure, geometry size, power factor (PF) and hardness were better than those of the corresponding powder metallurgy samples. However, their thermal stabilities in terms of zT values were similar under different annealing temperatures.
Christopher W. Woodall; Vicente J. Monleon
2009-01-01
The Forest Inventory and Analysis program of the Forest Service, U.S. Department of Agriculture conducts a national inventory of fine woody debris (FWD); however, the sampling protocols involve tallying only the number of FWD pieces by size class that intersect a sampling transect with no measure of actual size. The line intersect estimator used with those samples...
Brownlow, H; Whitmore, I; Willan, P L
1989-01-01
Samples of human cricopharyngeus muscles obtained at postmortem were assessed for fibre type composition and fibre size. Fibre type was determined using serial cryostat sections exposed to several histochemical reactions; narrow fibre diameter and fibre area were measured from photomicrographs using a digitiser interfaced to a microcomputer. Results were compared with values from samples of vastus lateralis obtained from the same subjects. Cricopharyngeus muscle, in comparison with vastus lateralis, contained significantly more oxidative fibres but fewer glycolytic fibres and significantly more Type I fibres but fewer Type IIB. Cricopharyngeal fibres were significantly smaller than the fibres in vastus lateralis, and in neither muscle were fibre sizes normally distributed. In each muscle most Type I fibres were oxidative, and the ratio of oxidative to glycolytic fibres was similar for Type IIA and IIB fibres. The fibre type proportions and fibre sizes in cricopharyngeus did not vary significantly with age or between males and females. The composition of cricopharyngeus--mostly Type I oxidative fibres and few Type II glycolytic fibres--correlated well with the functions of sustained tonicity to prevent aerophagia and occasional forceful contraction during deglutition. PMID:2621147
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
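The r-power concept (probability of rejecting at least r of the m false nulls) can be illustrated with a small Monte Carlo sketch using independent test statistics and a single-step Bonferroni procedure. This mirrors the paper's Monte Carlo comparison only in spirit; all names and defaults here are ours, not from rPowerSampleSize:

```python
import random
from statistics import NormalDist

def r_power_mc(m, r, effect_z, alpha=0.05, n_sim=20000, seed=1):
    """Monte Carlo estimate of r-power: the probability of rejecting at
    least r of m independent false null hypotheses under a single-step
    Bonferroni procedure. effect_z is the expected z-value of each test."""
    rng = random.Random(seed)
    # Bonferroni critical value for two-sided alpha, counting rejections
    # in the favorable direction only.
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))
    hits = 0
    for _ in range(n_sim):
        rejected = sum(1 for _ in range(m)
                       if rng.gauss(effect_z, 1) > z_crit)
        if rejected >= r:
            hits += 1
    return hits / n_sim
```

Requiring r = m rejections always gives less power than r = 1 at the same m, which is why the r-power target, not just the multiplicity correction, drives the sample size in such designs.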
Freitag, Angelika J; Shomali, Maliheh; Michalakis, Stylianos; Biel, Martin; Siedler, Michael; Kaymakcalan, Zehra; Carpenter, John F; Randolph, Theodore W; Winter, Gerhard; Engert, Julia
2015-02-01
The potential contribution of protein aggregates to the unwanted immunogenicity of protein pharmaceuticals is a major concern. In the present study a murine monoclonal antibody was utilized to study the immunogenicity of different types of aggregates in mice. Samples containing defined types of aggregates were prepared by processes such as stirring, agitation, exposure to ultraviolet (UV) light and exposure to elevated temperatures. Aggregates were analyzed by size-exclusion chromatography, light obscuration, turbidimetry, infrared (IR) spectroscopy and UV spectroscopy. Samples were separated into fractions based on aggregate size by asymmetrical flow field-flow fractionation or by centrifugation. Samples containing different types and sizes of aggregates were subsequently administered to C57BL/6 J and BALB/c mice, and serum was analyzed for the presence of anti-IgG1, anti-IgG2a, anti-IgG2b and anti-IgG3 antibodies. In addition, the pharmacokinetic profile of the murine antibody was investigated. In this study, samples containing high numbers of different types of aggregates were administered in order to challenge the in vivo system. The magnitude of immune response depends on the nature of the aggregates. The most immunogenic aggregates were of relatively large and insoluble nature, with perturbed, non-native structures. This study shows that not all protein drug aggregates are equally immunogenic.
NASA Astrophysics Data System (ADS)
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. 
We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
A Comparison of Learning Cultures in Different Sizes and Types
ERIC Educational Resources Information Center
Brown, Paula D.; Finch, Kim S.; MacGregor, Cynthia
2012-01-01
This study compared relevant data and information about leadership and learning cultures in different sizes and types of high schools. Research was conducted using a quantitative design with a qualitative element. Quantitative data were gathered using a researcher-created survey. Independent sample t-tests were conducted to analyze the means of…
The effect of precursor types on the magnetic properties of Y-type hexa-ferrite composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Chin Mo; Na, Eunhye; Kim, Ingyu
2015-05-07
With magnetic composite including uniform magnetic particles, we expect to realize good high-frequency soft magnetic properties. We produced needle-like goethite (α-FeOOH) nanoparticles with nearly uniform diameter and length of 20 and 500 nm. Zn-doped Y-type hexa-ferrite samples were prepared by the solid state reaction method using the uniform goethite and the non-uniform hematite (Fe₂O₃) with size of <1 μm, respectively. The micrographs observed by scanning electron microscopy show that more uniform hexagonal plates are observed in the ZYG-sample (Zn-doped Y-type hexa-ferrite prepared with non-uniform hematite) than in the ZYH-sample (Zn-doped Y-type hexa-ferrite prepared with uniform goethite). The permeability (μ′) and loss tangent (δ) at 2 GHz are 2.31 and 0.07 in the ZYG-sample and 2.0 and 0.07 in the ZYH-sample, respectively. We observe that permeability and loss tangent are strongly related to particle size and uniformity based on nucleation, growth, and the two magnetizing mechanisms: spin rotation and domain wall motion. The complex permeability spectra can also be numerically separated into spin rotational and domain wall resonance components.
Gray, Peter B; Frederick, David A
2012-09-06
We investigated body image in St. Kitts, a Caribbean island where tourism, international media, and relatively high levels of body fat are common. Participants were men and women recruited from St. Kitts (n = 39) and, for comparison, U.S. samples from universities (n = 618) and the Internet (n = 438). Participants were shown computer generated images varying in apparent body fat level and muscularity or breast size and they indicated their body type preferences and attitudes. Overall, there were only modest differences in body type preferences between St. Kitts and the Internet sample, with the St. Kitts participants being somewhat more likely to value heavier women. Notably, however, men and women from St. Kitts were more likely to idealize smaller breasts than participants in the U.S. samples. Attitudes regarding muscularity were generally similar across samples. This study provides one of the few investigations of body preferences in the Caribbean.
Pye, Kenneth; Blott, Simon J
2004-08-11
Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with high resolution. The Coulter™ LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 μm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
Xiong, Xiaoping; Wu, Jianrong
2017-01-01
The treatment of cancer has progressed dramatically in recent decades, such that it is no longer uncommon to see a cure or long-term survival in a significant proportion of patients with various types of cancer. To adequately account for the cure fraction when designing clinical trials, cure models should be used. In this article, a sample size formula for the weighted log-rank test is derived under the fixed alternative hypothesis for proportional hazards cure models. Simulation showed that the proposed sample size formula provides an accurate estimation of sample size for designing clinical trials under the proportional hazards cure models. Copyright © 2016 John Wiley & Sons, Ltd.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage approach uses the cluster-specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size-α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
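With equal cluster sizes, the exactness of the two-stage approach is easy to see: the cluster means are i.i.d. normal within each arm, so an ordinary two-sample t-test on them attains the nominal Type I error rate. A minimal simulation sketch (the ICC of 0.1, 6 clusters of 20 patients per arm, and unit within-cluster variance are illustrative assumptions, not values from the study):

```python
import math
import random

def two_stage_t(g_per_arm=6, m=20, icc=0.10, rng=random):
    """Simulate one cluster-randomized trial under the null hypothesis
    and return the two-stage t statistic computed on cluster means."""
    tau2 = icc / (1.0 - icc)  # between-cluster variance (within-cluster = 1)
    def cluster_means(g):
        out = []
        for _ in range(g):
            u = rng.gauss(0.0, math.sqrt(tau2))          # cluster effect
            out.append(sum(u + rng.gauss(0.0, 1.0) for _ in range(m)) / m)
        return out
    a, b = cluster_means(g_per_arm), cluster_means(g_per_arm)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ssa = sum((v - ma) ** 2 for v in a)
    ssb = sum((v - mb) ** 2 for v in b)
    sp2 = (ssa + ssb) / (len(a) + len(b) - 2)            # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / len(a) + 1 / len(b)))

def type1_error(n_sims=2000, t_crit=2.228):  # 2.228: t(10), two-sided 5%
    rng = random.Random(1)
    return sum(abs(two_stage_t(rng=rng)) > t_crit
               for _ in range(n_sims)) / n_sims

alpha_hat = type1_error()  # near the nominal 0.05 with equal cluster sizes
```

Under the null with balanced clusters, the estimated rejection rate hovers near 0.05 regardless of the ICC, which is the balanced-data exactness the authors prove analytically; the unbalanced case they study is where the approaches diverge.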
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscope: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (all RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (all RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors according to the Cells Analyzer software. The endothelial samples (examinations) need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
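The "customized sample size" idea can be sketched with the standard large-sample relation between the relative error of a mean and the number of cells counted, RE = z·CV/√n. The Cells Analyzer method itself is proprietary, so this formula and the coefficient of variation used below (CV = 0.30) are assumptions for illustration only, not the software's actual algorithm:

```python
import math

Z95 = 1.96  # normal quantile for a 95% reliability degree

def relative_error(cv, n, z=Z95):
    """Relative error of the mean when n cells are measured, given the
    coefficient of variation (cv) of the single-cell values."""
    return z * cv / math.sqrt(n)

def required_cells(cv, re_target=0.05, z=Z95):
    """Smallest cell count keeping the relative error at or below the
    target -- the 'customized sample size' idea."""
    return math.ceil((z * cv / re_target) ** 2)

re_100 = relative_error(0.30, 100)   # counting ~100 cells is not enough
n_needed = required_cells(0.30)      # cells needed for RE <= 0.05
```

Under these assumed values, roughly 100 counted cells leave the relative error just above the 0.05 cutoff, while the required count lands in the 130-160 range, of the same order as the customized sample sizes reported for the Bio-Optics and CSO devices.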
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James
2010-10-01
Current practice for seeking genomically favorable patients in randomized controlled clinical trials relies on genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, Type I error, and Type II error that can occur in the evaluation of convenience samples, particularly when they are small, and articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance, highlighting the importance of replicating subgroup findings in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the Type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of Type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge about how the mechanism of a drug target affects the clinical outcome of interest.
When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, if there is one genomic biomarker prognostic for clinical response, as a general rule of thumb, a sample size of at least 100 patients may be needed to be considered for the lower prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10% and 5% difference is of concern.
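The quoted imbalance probabilities can be reproduced approximately with a tiny Monte Carlo: randomize two arms of n patients each, with an assumed biomarker prevalence (30% below, purely illustrative), and count how often the arm-wise prevalences differ by 20 percentage points or more:

```python
import random

def prob_imbalance(n_per_arm, prevalence=0.30, delta=0.20,
                   n_sims=4000, seed=7):
    """Monte Carlo probability that the biomarker-positive fraction
    differs between two randomized arms by delta or more at baseline.
    Comparison is done on integer counts to avoid float rounding."""
    rng = random.Random(seed)
    def arm_count():
        return sum(rng.random() < prevalence for _ in range(n_per_arm))
    thresh = delta * n_per_arm
    hits = sum(abs(arm_count() - arm_count()) >= thresh
               for _ in range(n_sims))
    return hits / n_sims

p_small = prob_imbalance(10)    # tiny convenience subgroup
p_rule = prob_imbalance(100)    # rule-of-thumb subgroup size
```

Runs of this sketch show that the chance of a ≥20% imbalance is substantial with 10 biomarker-positive patients per arm but essentially vanishes by 100 per arm, consistent with the rule of thumb above.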
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analyses of simulated data reveal that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
Recent Structural Evolution of Early-Type Galaxies: Size Growth from z = 1 to z = 0
NASA Astrophysics Data System (ADS)
van der Wel, Arjen; Holden, Bradford P.; Zirm, Andrew W.; Franx, Marijn; Rettura, Alessandro; Illingworth, Garth D.; Ford, Holland C.
2008-11-01
Strong size and internal density evolution of early-type galaxies between z ~ 2 and the present has been reported by several authors. Here we analyze samples of nearby and distant (z ~ 1) galaxies with dynamically measured masses in order to confirm the previous, model-dependent results and constrain the uncertainties that may play a role. Velocity dispersion (σ) measurements are taken from the literature for 50 morphologically selected 0.8 < z < 1.2 field and cluster early-type galaxies with typical masses M_dyn = 2 × 10^11 M⊙. Sizes (R_eff) are determined with Advanced Camera for Surveys imaging. We compare the distant sample with a large sample of nearby (0.04 < z < 0.08) early-type galaxies extracted from the Sloan Digital Sky Survey, for which we determine sizes, masses, and densities in a consistent manner, using simulations to quantify systematic differences between the size measurements of nearby and distant galaxies. We find a highly significant difference between the σ-R_eff distributions of the nearby and distant samples, regardless of sample selection effects. The implied evolution in R_eff at fixed mass between z = 1 and the present is a factor of 1.97 ± 0.15. This is in qualitative agreement with semianalytic models; however, the observed evolution is much faster than the predicted evolution. Our results reinforce and are quantitatively consistent with previous, photometric studies that found size evolution of up to a factor of 5 since z ~ 2. A combination of structural evolution of individual galaxies through the accretion of companions and the continuous formation of early-type galaxies through increasingly gas-poor mergers is one plausible explanation of the observations.
Based on observations with the Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, and observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. Based on observations collected at the European Southern Observatory, Chile (169.A-0458). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
Analysis of hard coal quality for narrow size fraction under 20 mm
NASA Astrophysics Data System (ADS)
Niedoba, Tomasz; Pięta, Paulina
2018-01-01
The paper presents the results of an analysis of hard coal quality variation in narrow size fractions using taxonomic methods. Raw material samples were collected in selected mines of the Upper Silesian Industrial Region and classified, according to the Polish classification, as types 31, 34.2 and 35. Each size fraction was then characterized in terms of the following properties: density, ash content, calorific value, volatile matter content, total sulfur content and analytical moisture. The analysis showed that the 34.2 coking coal type had the best quality across the entire range of tested size fractions. At the same time, in terms of price parameters, the following size fractions were characterized by high raw material quality: 0-6.3 mm of the type-31 energetic coal and 0-3.15 mm of the type-35 coking coal. The grouping (Ward's method) and partitioning (k-means) analyses showed that size fractions below 10 mm were characterized by higher quality in all the analyzed hard coal types. However, the selected taxonomic methods do not make it possible to identify individual size fractions or hard coal types based on the chosen parameters.
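The k-means step named above is standard and can be sketched in a few lines; the two-dimensional "quality parameter" vectors below are made up for illustration and are not data from the study:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(pts):
    """Component-wise mean of a list of parameter vectors."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def kmeans(points, k, n_iter=25, seed=0):
    """Plain k-means: group samples described by numeric quality
    parameters (e.g. ash content, calorific value) into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[nearest].append(p)
        centers = [centroid(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Toy quality vectors for six size fractions, forming two clear groups:
fractions = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
             (10.1, 10.0), (9.9, 10.2), (10.0, 9.8)]
centers, clusters = kmeans(fractions, k=2)
```

Real use on coal data would standardize the parameters first, since density, ash content and sulfur content sit on very different scales and would otherwise dominate the distance unevenly.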
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Paul F. Hessburg; Bradley G. Smith; Scott D. Kreiter; Craig A. Miller; Cecilia H. McNicoll; Michele. Wasienko-Holland
2000-01-01
In the interior Columbia River basin midscale ecological assessment, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and composition, and landscape vulnerability to wildfires...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, T.A.
1990-01-01
A study undertaken on an Eocene age coal bed in southeast Kalimantan, Indonesia determined that there was a relationship between megascopically determined coal types and the kinds and sizes of organic components. The study also concluded that the most efficient way to characterize the seam was to collect two 3 cm blocks from each layer or bench defined by megascopic character, and that a maximum of 125 point counts was needed on each block. Microscopic examination of uncrushed block samples showed the coal to be composed of plant parts and tissues set in a matrix of both fine-grained and amorphous material. The particulate matrix is composed of cell wall and liptinite fragments, resins, spores, algae, and fungal material. The amorphous matrix consists of unstructured (at 400x) huminite and liptinite. Size measurements showed that each particulate component possessed its own size distribution, which approached normality when transformed to a log2 or phi scale. Degradation of the plant material during peat accumulation probably controlled grain size in the coal types. This notion is further supported by the increased concentration of decay-resistant resin and cell fillings in the nonbanded and dull coal types. In the sampling design experiment, two blocks from each layer and two layers from each coal type were collected. On each block, 2 to 4 traverses totaling 500 point counts per block were performed to test the minimum number of points needed to characterize a block. A hierarchical analysis of variance showed that most of the petrographic variation occurred between coal types. The results from these analyses also indicated that, within a coal type, sampling should concentrate on the layer level and that only 250 point counts, split between two blocks, were needed to characterize a layer.
An audit of the statistics and the comparison with the parameter in the population
NASA Astrophysics Data System (ADS)
Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad
2015-10-01
The sample size needed to closely estimate the statistics for particular parameters can be an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. A guideline based on a p-value of less than 0.05 is widely used as inferential evidence. This study therefore audited results that were analyzed from various sub-samples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight sub-samples for each statistical analysis were analyzed. The statistics were found to be consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
Heinz, Marlen; Zak, Dominik
2018-03-01
This study aimed to evaluate the effects of freezing and of cold storage at 4 °C on bulk dissolved organic carbon (DOC) and nitrogen (DON) concentrations and on fractions determined with size-exclusion chromatography (SEC), as well as on spectral properties of dissolved organic matter (DOM) analyzed with fluorescence spectroscopy. To account for differences in DOM composition and source, we analyzed storage effects for three different sample types: a lake water sample representing freshwater DOM, a leaf litter leachate of Phragmites australis representing a terrestrial, 'fresh' DOM source, and peatland porewater samples. According to our findings, one week of cold storage can bias DOC and DON determination. Overall, DOC and DON concentrations determined with SEC analysis were, for all three sample types, only slightly susceptible to alterations due to freezing. The findings derived for the sampling locations investigated here may not apply to other sampling locations and/or sample types. However, DOC size fractions and DON concentrations of formerly frozen samples should be interpreted with caution when sample concentrations are high. Alterations of some optical properties (HIX and SUVA254) due to freezing were evident, and we therefore recommend immediate analysis of samples for spectral analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
How Methodological Features Affect Effect Sizes in Education
ERIC Educational Resources Information Center
Cheung, Alan; Slavin, Robert
2016-01-01
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Moerbeek, Mirjam
2018-01-01
Background: This article studies the design of trials that compare three treatment conditions delivered by two types of health professionals. One type of health professional delivers one treatment, and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum.
Methods: The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest cost. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment.
Results: Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs of training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions.
Conclusion: This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts' opinions or findings in the literature. PMID:29316807
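The power expressions themselves are not reproduced in the abstract, but a standard approximation conveys the mechanics: with k health professionals per arm each treating m patients, the variance of an arm mean is inflated by the usual design effect 1 + (m − 1)ρ. The sketch below uses this textbook approximation with assumed numbers (effect of 0.4 SD, ICC of 0.05); the article's actual model differs in detail, e.g. it handles the crossed professional and cost constraints:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pairwise_power(delta, sigma2, icc, k, m, z_alpha=1.959964):
    """Approximate power for comparing two arms when each of k health
    professionals per arm treats m patients; the variance of an arm
    mean is inflated by the design effect 1 + (m - 1) * icc."""
    var_arm_mean = sigma2 * (1 + (m - 1) * icc) / (k * m)
    return norm_cdf(delta / math.sqrt(2 * var_arm_mean) - z_alpha)

# Illustrative values: effect delta = 0.4 SD, ICC = 0.05, m = 10 patients.
p_small = pairwise_power(delta=0.4, sigma2=1.0, icc=0.05, k=5, m=10)
p_large = pairwise_power(delta=0.4, sigma2=1.0, icc=0.05, k=10, m=10)
```

Doubling the number of professionals per arm raises the power of the pairwise comparison markedly, matching the qualitative result stated above that power increases with both the number of professionals and the number of patients per professional.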
Magnetic properties of Apollo 14 breccias and their correlation with metamorphism.
NASA Technical Reports Server (NTRS)
Gose, W. A.; Pearce, G. W.; Strangway, D. W.; Larson, E. E.
1972-01-01
The magnetic properties of Apollo 14 breccias can be explained in terms of the grain size distribution of the interstitial iron, which is directly related to the metamorphic grade of the sample. In samples 14049 and 14313, iron grains less than 500 Å in diameter are dominant, as evidenced by a Richter-type magnetic aftereffect and hysteresis measurements. Both samples are of lowest metamorphic grade. The medium metamorphic-grade sample 14321 and the high-grade sample 14312 both show a logarithmic time-dependence of the magnetization, indicative of a wide range of relaxation times and thus grain sizes, but sample 14321 contains a stable remanent magnetization whereas sample 14312 does not. This suggests that small multidomain particles (less than 1 micron) are most abundant in sample 14321 while sample 14312 is magnetically controlled by grains greater than 1 micron. The higher the metamorphic grade, the larger the grain size of the iron controlling the magnetic properties.
Size dependence of chondrule textural types
NASA Technical Reports Server (NTRS)
Goswami, J. N.
1984-01-01
Chondrule textural types were studied for size-sorted chondrules from the ordinary chondrites Dhajala, Weston and Chainpur and the CM chondrite Murchison. Aliquot samples from size-sorted Dhajala chondrules were studied for their oxygen isotopic composition, and chondrules from Weston were studied for their precompaction irradiation records by the nuclear track technique. Correlations between chondrule textural types and oxygen isotope or track data were identified. A distinct dependence of chondrule textural type on chondrule size was evident in the data for both Dhajala and Weston chondrules. No significant deviation was noticed in the abundance pattern of nonporphyritic chondrules within individual size fractions in the 200 to 800 micron size interval. An overabundance of nonporphyritic chondrules is found in the 100 to 200 micron size fraction of Murchison chondrules; the trend is not as distinct for Chainpur chondrules. Two hundred microns is suggested as the cutoff size below which radiative cooling is extremely efficient during the chondrule-forming process. It is suggested that this offers a possibility of using the physical and chemical characteristics of small chondrules to constrain the temperature history of the chondrule formation process.
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates of the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision while reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
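A stripped-down version of such a simulation-based calculator can be sketched as follows. For brevity it ignores seroreversion (so it is a simple, not reverse, catalytic model, a deliberate simplification of the authors' approach) and assumes a known change point, uniform ages of 1-60 years, coarse grid-search maximum likelihood, and illustrative parameter values:

```python
import math
import random

def p_seropos(age, lam, r=1.0, tau=0.0):
    """Seropositivity probability under a simple catalytic model:
    seroconversion rate lam before the change point and lam * r during
    the tau years preceding the survey (seroreversion is ignored)."""
    recent = min(age, tau)
    exposure = lam * r * recent + lam * max(age - tau, 0.0)
    return 1.0 - math.exp(-exposure)

def log_lik(data, lam, r, tau):
    """Binomial log-likelihood of (age, serostatus) pairs."""
    ll = 0.0
    for age, pos in data:
        p = min(max(p_seropos(age, lam, r, tau), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if pos else 1.0 - p)
    return ll

def lrt_rejects(data, tau, lam_grid, r_grid, crit=3.84):
    """Likelihood-ratio test of a stable SCR against a reduction at a
    known change point tau (3.84 = 5% chi-square(1) cutoff)."""
    ll0 = max(log_lik(data, lam, 1.0, 0.0) for lam in lam_grid)
    ll1 = max(log_lik(data, lam, r, tau)
              for lam in lam_grid for r in r_grid)
    return 2.0 * (ll1 - ll0) > crit

def power(n=200, lam=0.05, r=0.2, tau=10.0, n_sims=50, seed=3):
    """Fraction of simulated cross-sectional surveys detecting the SCR
    reduction; grids are deliberately coarse to keep the sketch fast."""
    rng = random.Random(seed)
    lam_grid = [0.005 * i for i in range(1, 41)]   # 0.005 .. 0.200
    r_grid = [0.1 * i for i in range(1, 11)]       # 0.1 .. 1.0
    hits = 0
    for _ in range(n_sims):
        data = [(a, rng.random() < p_seropos(a, lam, r, tau))
                for a in (rng.uniform(1.0, 60.0) for _ in range(n))]
        hits += lrt_rejects(data, tau, lam_grid, r_grid)
    return hits / n_sims

pw = power()  # power to detect an 80% SCR drop ten years before sampling
```

Repeating the call over a range of n traces out the power curve that the authors approximate with a logistic function; note the grid search makes the null distribution only approximately chi-square, which is acceptable for a sketch but not for a production calculator.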
Small sample sizes in the study of ontogenetic allometry: implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
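The subsampling experiment can be mimicked with a small simulation: generate log-log allometric data with a true slope of 1.15 (an assumed value, not one from the Alligator dataset), fit OLS at different sample sizes, and record how often the departure from isometry (slope = 1) is detected. Failure to detect is exactly the Type II error the authors describe:

```python
import math
import random

def allometry_power(n, true_slope=1.15, sigma=0.10, n_sims=300, seed=11):
    """Fraction of simulated samples in which OLS on log-transformed
    data detects a departure from isometry (slope = 1). The 1.96
    critical value is a normal approximation to the t distribution,
    adequate for a sketch."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        xs = [rng.uniform(0.0, 2.0) for _ in range(n)]   # log10 body size
        ys = [true_slope * x + rng.gauss(0.0, sigma) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        rss = sum((y - (my + slope * (x - mx))) ** 2
                  for x, y in zip(xs, ys))
        se = math.sqrt(rss / (n - 2) / sxx)              # slope std. error
        detected += abs(slope - 1.0) / se > 1.96
    return detected / n_sims

p8, p50 = allometry_power(8), allometry_power(50)
```

Small samples leave the allometric signal undetected in a sizable fraction of runs, while the larger sample detects it almost always, which mirrors the fossil-versus-extant sample size contrast the authors report.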
ERIC Educational Resources Information Center
Peng, Peng; Namkung, Jessica; Barnes, Marcia; Sun, Congying
2016-01-01
The purpose of this meta-analysis was to determine the relation between mathematics and working memory (WM) and to identify possible moderators of this relation including domains of WM, types of mathematics skills, and sample type. A meta-analysis of 110 studies with 829 effect sizes found a significant medium correlation of mathematics and WM, r…
3D-HST+CANDELS: The Evolution of the Galaxy Size-Mass Distribution since z = 3
NASA Astrophysics Data System (ADS)
van der Wel, A.; Franx, M.; van Dokkum, P. G.; Skelton, R. E.; Momcheva, I. G.; Whitaker, K. E.; Brammer, G. B.; Bell, E. F.; Rix, H.-W.; Wuyts, S.; Ferguson, H. C.; Holden, B. P.; Barro, G.; Koekemoer, A. M.; Chang, Yu-Yen; McGrath, E. J.; Häussler, B.; Dekel, A.; Behroozi, P.; Fumagalli, M.; Leja, J.; Lundgren, B. F.; Maseda, M. V.; Nelson, E. J.; Wake, D. A.; Patel, S. G.; Labbé, I.; Faber, S. M.; Grogin, N. A.; Kocevski, D. D.
2014-06-01
Spectroscopic+photometric redshifts, stellar mass estimates, and rest-frame colors from the 3D-HST survey are combined with structural parameter measurements from CANDELS imaging to determine the galaxy size-mass distribution over the redshift range 0 < z < 3. Separating early- and late-type galaxies on the basis of star-formation activity, we confirm that early-type galaxies are on average smaller than late-type galaxies at all redshifts, and we find a significantly different rate of average size evolution at fixed galaxy mass, with fast evolution for the early-type population, R_eff ∝ (1 + z)^-1.48, and moderate evolution for the late-type population, R_eff ∝ (1 + z)^-0.75. The large sample size and dynamic range in both galaxy mass and redshift, in combination with the high fidelity of our measurements due to the extensive use of spectroscopic data, not only fortify previous results but also enable us to probe beyond simple average galaxy size measurements. At all redshifts the slope of the size-mass relation is shallow, R_eff ∝ M_*^0.22, for late-type galaxies with stellar mass >3 × 10^9 M⊙, and steep, R_eff ∝ M_*^0.75, for early-type galaxies with stellar mass >2 × 10^10 M⊙. The intrinsic scatter is ≲0.2 dex for all galaxy types and redshifts. For late-type galaxies, the logarithmic size distribution is not symmetric but is skewed toward small sizes: at all redshifts and masses, a tail of small late-type galaxies exists that overlaps in size with the early-type galaxy population. The number density of massive (~10^11 M⊙), compact (R_eff < 2 kpc) early-type galaxies increases from z = 3 to z = 1.5-2 and then strongly decreases at later cosmic times.
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because only a small amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.
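As a minimal illustration of the small-sample hypothesis testing surveyed in this report, the sketch below computes a pooled-variance two-sample t statistic in plain Python. The data and the tabulated critical value (t ≈ 2.101 for 18 degrees of freedom at α = 0.05, two-sided) are illustrative assumptions, not values from the report.

```python
import math
import statistics

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic (assumes equal variances)."""
    nx, ny = len(x), len(y)
    # Pooled variance combines both samples' spread, weighted by their df
    sp2 = ((nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2  # statistic and degrees of freedom

# Illustrative small samples (n = 10 each)
a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3, 5.1, 5.0, 5.2]
b = [4.7, 4.9, 4.6, 4.8, 5.0, 4.5, 4.8, 4.7, 4.9, 4.6]
t, df = two_sample_t(a, b)
t_crit = 2.101  # two-sided 5% critical value for df = 18, from standard tables
print(f"t = {t:.2f}, df = {df}, reject H0: {abs(t) > t_crit}")
```

The key small-sample consideration is exactly the one the report stresses: the normal critical value (1.96) would be anticonservative here, so the wider t critical value for the actual degrees of freedom is used instead.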
Automatic classification techniques for type of sediment map from multibeam sonar data
NASA Astrophysics Data System (ADS)
Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.
2018-02-01
A sediment map can provide important information for various applications such as oil drilling and environmental and pollution studies. A sediment mapping study was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, namely an R2-Sonic multibeam echosounder. This study aims to determine sediment type from backscatter and bathymetry data obtained by the multibeam echosounder. Ground truth data were used to verify the classification produced. Ground truth samples were analyzed by particle size analysis (PSA) and dry sieving; two methods were needed because of the different sizes of the sediment samples obtained. Smaller sediments were analyzed by PSA using a CILAS analyzer, while larger sediments were sieved. The multibeam backscatter strength and bathymetry data were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four sediment types: i) very coarse sand, ii) coarse sand, iii) very coarse silt, and iv) coarse silt. The accuracy was 92.31% (overall accuracy) with a kappa coefficient of 0.88.
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
ERIC Educational Resources Information Center
Erford, Bradley T.; Giguere, Monica; Glenn, Kacie; Ciarlone, Hallie
2015-01-01
Patterns of articles published in "Professional School Counseling" (PSC) from the first 15 volumes were reviewed in this meta-study. Author characteristics (e.g., sex, employment setting, nation of domicile) and article characteristics (e.g., topic, type, design, sample, sample size, participant type, statistical procedures and…
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
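One of the calculations this article walks through, the per-group sample size for a two-sample comparison of means, can be sketched with the standard normal-approximation formula n = 2((z₁₋α/₂ + z₁₋β)/d)², where d is the standardized effect size. The numbers below are illustrative assumptions, and the article's exact formulas may differ (a t-based correction typically adds a subject or two per group).

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means; effect_size is Cohen's d = delta / sigma."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a medium effect (d = 0.5) with 80% power at alpha = 0.05
print(n_per_group(0.5))   # -> 63 per group (normal approximation)
print(n_per_group(0.25))  # halving the effect size roughly quadruples n
```

The second call illustrates the point made in the head note: smaller differences demand disproportionately larger samples, since n scales with 1/d².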
Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
1993-01-01
A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)
Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field
NASA Astrophysics Data System (ADS)
Cameron, E.; Driver, S. P.
2009-01-01
Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes. Namely, “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. 
However, in the interval 0.75 < z < 1.0, where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible), we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec^-2 and 1.65 ± 0.22 mag arcsec^-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.
NASA Astrophysics Data System (ADS)
Venero, I. M.; Mayol-Bracero, O. L.; Anderson, J. R.
2012-12-01
As part of the Puerto Rican African Dust and Cloud Study (PRADACS) and the Ice in Clouds Experiment - Tropical (ICE-T), we sampled giant airborne particles to study their elemental composition, morphology, and size distributions. Samples were collected in July 2011 during field measurements performed by NCAR's C-130 aircraft based at St. Croix, U.S. Virgin Islands. The results presented here correspond to the measurements made during research flight #8 (RF8). Aerosol particles with Dp > 1 μm were sampled with the Giant Nuclei Impactor and particles with Dp < 1 μm were collected with the Wyoming Inlet. Collected particles were later analyzed using an automated scanning electron microscope (SEM) and manual observation by field emission SEM. We identified the chemical composition and morphology of major particle types in filter samples collected at different altitudes (e.g., 300 ft, 1000 ft, and 4500 ft). Results from the flight upwind of Puerto Rico show that particles in the giant nuclei size range are dominated by sea salt. Samples collected at altitudes of 300 ft and 1000 ft showed the highest number of sea salt particles, and samples collected at higher altitudes (> 4000 ft) showed the highest concentrations of clay material. HYSPLIT back trajectories for all samples showed that the low-altitude samples originated in the free troposphere over the Atlantic Ocean, which may account for the high sea salt content, and that the source of the high-altitude samples was closer to the Saharan-Sahel desert region; these samples therefore possibly had the influence of African dust. Size distribution results for quartz and unreacted sea-salt aerosols collected on the Giant Nuclei Impactor showed that sample RF08 - 12:05 UTM (300 ft) had a larger mean size (2.936 μm) than all the other samples.
Additional information was obtained from the Wyoming Inlet aboard the C-130 aircraft, whose size distribution results showed smaller sizes for all particle types. The different mineral components of the dust have different size distributions, so a fractionation process could occur during transport. Also, the presence of supermicron sea salt at altitude is important for cloud processes.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities of two confidence interval procedures and of procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
NASA Astrophysics Data System (ADS)
Atapour, Hadi; Mortazavi, Ali
2018-04-01
The effects of textural characteristics, especially grain size, on the index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Index rock properties including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs) were determined. Experimental results showed that grain size has significant effects on the index properties of weakly solidified sandstones. The porosity of the samples is inversely related to grain size, decreasing linearly as grain size increases, while dry bulk density is directly related to grain size, increasing with increasing median grain size. Furthermore, the point load strength index and SHV of the samples increased with grain size. These observations are indirectly related to the porosity decrease as a function of median grain size.
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether the power of the effect size was based on an adequate sample size in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. A limit of intervention course ≥ 3 months was set in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included: 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported with a sample size > 150 per trial in 9% and 12% of the RCTs, respectively. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported with a sample size > 150 in 31% and 36% of trials, respectively. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample size for statistical analysis was distressingly low and most RCTs did not achieve 80% power.
To obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
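A stripped-down Monte Carlo in the spirit of this simulation study (not its actual design, which compared ANOVA, Brown-Forsythe, Welch, and Kruskal-Wallis tests) checks how the empirical Type I error of a simple location test behaves when the populations are distinctly skewed. The sample size, simulation count, and exponential population below are arbitrary choices for illustration.

```python
import math
import random
import statistics

random.seed(42)

def welch_z(x, y):
    """Welch-type two-sample statistic, referred to the normal critical value."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

def type_i_rate(draw, n=25, sims=4000, crit=1.96):
    """Fraction of null-true simulations that falsely reject at alpha = 0.05."""
    rejections = 0
    for _ in range(sims):
        x = [draw() for _ in range(n)]  # both groups from the same population,
        y = [draw() for _ in range(n)]  # so every rejection is a Type I error
        if abs(welch_z(x, y)) > crit:
            rejections += 1
    return rejections / sims

normal_rate = type_i_rate(lambda: random.gauss(0, 1))
skewed_rate = type_i_rate(lambda: random.expovariate(1.0))  # strongly skewed
print(f"normal: {normal_rate:.3f}, skewed: {skewed_rate:.3f}")
```

Ranking the simulated samples by their degree of normality, as Schmider et al. (2010) proposed, would then let one ask whether the rejection rate depends on sample (rather than population) non-normality.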
Dennis M. May
1990-01-01
The procedures by which the Southern Forest Inventory and Analysis unit calculates stocking from tree data collected on inventory sample plots are described in this report. Stocking is then used to ascertain two other important stand descriptors: forest type and stand size class. Inventory data for three plots from the recently completed 1989 Tennessee survey are used...
Upward counterfactual thinking and depression: A meta-analysis.
Broomhall, Anne Gene; Phillips, Wendy J; Hine, Donald W; Loi, Natasha M
2017-07-01
This meta-analysis examined the strength of association between upward counterfactual thinking and depressive symptoms. Forty-two effect sizes from a pooled sample of 13,168 respondents produced a weighted average effect size of r=.26, p<.001. Moderator analyses using an expanded set of 96 effect sizes indicated that upward counterfactuals and regret produced significant positive effects that were similar in strength. Effects also did not vary as a function of the theme of the counterfactual-inducing situation or study design (cross-sectional versus longitudinal). Significant effect size heterogeneity was observed across sample types, methods of assessing upward counterfactual thinking, and types of depression scale. Significant positive effects were found in studies that employed samples of bereaved individuals, older adults, terminally ill patients, or university students, but not adolescent mothers or mixed samples. Both number-based and Likert-based upward counterfactual thinking assessments produced significant positive effects, with the latter generating a larger effect. All depression scales produced significant positive effects, except for the Psychiatric Epidemiology Research Interview. Research and theoretical implications are discussed in relation to cognitive theories of depression and the functional theory of upward counterfactual thinking, and important gaps in the extant research literature are identified. Copyright © 2017 Elsevier Ltd. All rights reserved.
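A weighted average correlation of the kind reported here (r = .26 over 42 effect sizes) is commonly computed by fixed-effect pooling on Fisher's z scale. The sketch below assumes that standard approach, which may differ from this meta-analysis's exact procedure, and the (r, n) pairs are invented for illustration.

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooled correlation via Fisher's z transform; each study
    is (r, n), weighted by its inverse variance, which is n - 3 on the z scale."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)  # Fisher's r-to-z transform stabilizes the variance
        w = n - 3          # inverse of Var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z to r

# Hypothetical effect sizes (r) and sample sizes, not the meta-analysis's data
studies = [(0.30, 120), (0.22, 250), (0.28, 90)]
print(round(pool_correlations(studies), 3))
```

Because the weights are sample sizes (minus 3), large studies dominate the pooled estimate, which is why the pooled r lands near the r of the biggest study here.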
Worthen, Wade B; Horacek, Henry Joseph
2015-01-01
Dragonfly larvae were sampled in Little Creek, Greenville, SC. The distributions of five common species were described relative to sediment type, body size, and the presence of other larvae. In total, 337 quadrats (1 m by 0.5 m) were sampled by kick seine. For each quadrat, the substrate was classified as sand, sand-cobble mix, cobble, coarse, or rock, and water depth and distance from bank were measured. Larvae were identified to species, and the lengths of the body, head, and metafemur were measured. Species were distributed differently across sediment types: sanddragons, Progomphus obscurus (Rambur) (Odonata: Gomphidae), were common in sand; twin-spotted spiketails, Cordulegaster maculata Selys (Odonata: Cordulegastridae), preferred a sand-cobble mix; Maine snaketails, Ophiogomphus mainensis Packard (Odonata: Gomphidae), preferred cobble and coarse sediments; fawn darners, Boyeria vinosa (Say) (Odonata: Aeshnidae), preferred coarse sediments; and Eastern least clubtails, Stylogomphus albistylus (Hagen) (Odonata: Gomphidae), preferred coarse and rock sediments. P. obscurus and C. maculata co-occurred more frequently than expected by chance, as did O. mainensis, B. vinosa, and S. albistylus. Mean size varied among species, and species preferences contributed to differences in mean size across sediment types. There were significant negative associations among larval size classes: small larvae (<12 mm) occurred less frequently with large larvae (>15 mm) than expected by chance, and large larvae were alone in quadrats more frequently than other size classes. Species may select habitats at a large scale based on sediment type and their functional morphology, but small scale distributions are consistent with competitive displacement or intraguild predation. © The Author 2015. Published by Oxford University Press on behalf of the Entomological Society of America.
Size-biased distributions in the generalized beta distribution family, with applications to forestry
Mark J. Ducey; Jeffrey H. Gove
2015-01-01
Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
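The claim above, that Type II error falls off rapidly with sample size, can be made concrete for the simplest case of a one-sided one-sample z-test, where β(n) = Φ(z₁₋α − δ√n/σ). The effect size δ = 0.5 below is an arbitrary illustration, not a value from the note.

```python
import math
from statistics import NormalDist

def type_ii_error(n, delta=0.5, sigma=1.0, alpha=0.05):
    """Beta for a one-sided one-sample z-test of H0: mu = 0 vs true mu = delta."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)
    # Power = P(Z > z_alpha - delta * sqrt(n) / sigma); beta = 1 - power
    return z.cdf(z_alpha - delta * math.sqrt(n) / sigma)

# Beta shrinks quickly as n grows, echoing the exponential decline noted above
for n in (10, 20, 40, 80):
    print(n, round(type_ii_error(n), 4))
```

Each doubling of n cuts β by an increasingly large factor, which is the practical point of the note: past a moderate sample size, extra observations buy power very cheaply.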
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
Morphological Segregation in the Surroundings of Cosmic Voids
NASA Astrophysics Data System (ADS)
Ricciardelli, Elena; Cava, Antonio; Varela, Jesus; Tamone, Amelie
2017-09-01
We explore the morphology of galaxies living in the proximity of cosmic voids, using a sample of voids identified in the Sloan Digital Sky Survey Data Release 7. At all stellar masses, void galaxies exhibit morphologies of a later type than galaxies in a control sample, which represent galaxies in an average density environment. We interpret this trend as a pure environmental effect, independent of the mass bias, due to a slower galaxy build-up in the rarefied regions of voids. We confirm previous findings about a clear segregation in galaxy morphology, with galaxies of a later type being found at smaller void-centric distances with respect to the early-type galaxies. We also show, for the first time, that the radius of the void has an impact on the evolutionary history of the galaxies that live within it or in its surroundings. In fact, an enhanced fraction of late-type galaxies is found in the proximity of voids larger than the median void radius. Likewise, an excess of early-type galaxies is observed within or around voids of a smaller size. A significant difference in galaxy properties in voids of different sizes is observed up to 2 R void, which we define as the region of influence of voids. The significance of this difference is greater than 3σ for all the volume-complete samples considered here. The fraction of star-forming galaxies shows the same behavior as the late-type galaxies, but no significant difference in stellar mass is observed in the proximity of voids of different sizes.
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
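The bootstrap test of the mediated effect a·b discussed above can be sketched in a few lines. This is a minimal illustration on simulated data using the plain percentile bootstrap rather than the bias-corrected variant the study examines; the model, effect sizes, and sample size are assumptions chosen for illustration, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, a, b, c=0.0):
    # single-mediator model X -> M -> Y with unit-variance normal errors
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = b * m + c * x + rng.normal(size=n)
    return x, m, y

def ab_estimate(x, m, y):
    # a-path: regress M on X; b-path: regress Y on M controlling for X
    a_hat = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a_hat * coef[2]

def percentile_bootstrap_ci(x, m, y, n_boot=1000, level=0.95):
    n = len(x)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        boot.append(ab_estimate(x[idx], m[idx], y[idx]))
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

x, m, y = simulate(n=200, a=0.5, b=0.5)
lo, hi = percentile_bootstrap_ci(x, m, y)
print(f"95% bootstrap CI for a*b: ({lo:.3f}, {hi:.3f})")
```

Setting one medium-to-large path to zero and shrinking n in the same sketch reproduces the conditions under which the authors report inflated Type I error rates for the bias-corrected variant.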
NASA Astrophysics Data System (ADS)
Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak
2014-06-01
Digital radiographic testing is an accepted, relatively new nondestructive examination technique, but its performance and limitations relative to the older film technique are still not widely known. This paper compares the accuracy of defect size measurement and image quality obtained from film and digital radiographic techniques, using test specimens and a sample defect of known size. One specimen was fabricated with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size sample defect was machined to various geometrical sizes so that measured defect sizes could be compared with the real sizes in both film and digital images. Image quality was compared by examining the smallest detectable wire and the three defect images, using an Image Quality Indicator (IQI) of wire type 10/16 FE per BS EN 462-1:1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film (3.5 x 8 inches), while the digital images were produced with a Fuji type ST-VI image plate with 100 micrometer resolution. A GE model MF3 radiator was used during the tests, with the applied energy varied from 120 to 220 kV and the current from 1.2 to 3.0 mA; the intensity of the Iridium-192 gamma-ray source was in the range of 24-25 Curie. Under these conditions, the results showed that the deviation of the measured defect size from the real size was lower for the digital image radiographs than for the digitized film, whereas the image quality of the digitized film radiographs was higher.
Šmarda, Petr; Bureš, Petr; Horová, Lucie
2007-01-01
Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is taken as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme-value (Gumbel) distribution with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution seen across the Angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
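The kind of Monte Carlo power calculation described above can be sketched as follows for the continuous-by-continuous case, comparing α = 0.05 with an elevated α = 0.10. The effect sizes, sample size, and simulation count are illustrative assumptions, not the study's 240 scenarios.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def interaction_power(n, beta_int, alpha, n_sim=1000):
    """Monte Carlo power for the x1*x2 interaction term in an OLS model."""
    df = n - 4  # intercept, x1, x2, interaction
    hits = 0
    for _ in range(n_sim):
        x1 = rng.normal(size=n)
        x2 = rng.normal(size=n)
        y = 0.3 * x1 + 0.3 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
        p = 2 * stats.t.sf(abs(coef[3] / se), df=df)
        hits += p < alpha
    return hits / n_sim

p05 = interaction_power(n=200, beta_int=0.2, alpha=0.05)
p10 = interaction_power(n=200, beta_int=0.2, alpha=0.10)
print(p05, p10)
```

Varying `beta_int` and `n` over a grid reproduces the paper's observation: where power at α = 0.05 is already high, raising α buys little power and mainly admits spurious interactions.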
How large a training set is needed to develop a classifier for microarray data?
Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M
2008-01-01
A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
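The dependence of classifier accuracy on training-set size can be illustrated with a small simulation in the spirit of the question above. This is not the authors' model-based formula; the gene counts, standardized fold change, and nearest-centroid classifier are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N_GENES, N_INFORMATIVE, DELTA = 1000, 20, 1.0  # DELTA: standardized fold change

def simulate_class(n, label):
    # informative genes are shifted by +/- DELTA/2 depending on class
    x = rng.normal(size=(n, N_GENES))
    x[:, :N_INFORMATIVE] += DELTA / 2 if label == 1 else -DELTA / 2
    return x

def nearest_centroid_accuracy(n_train_per_class, n_test_per_class=200):
    # train: estimate class centroids from a training set of the given size
    c0 = simulate_class(n_train_per_class, 0).mean(axis=0)
    c1 = simulate_class(n_train_per_class, 1).mean(axis=0)
    # test: assign each sample to the nearer centroid
    test = np.vstack([simulate_class(n_test_per_class, 0),
                      simulate_class(n_test_per_class, 1)])
    truth = np.r_[np.zeros(n_test_per_class), np.ones(n_test_per_class)]
    d0 = ((test - c0) ** 2).sum(axis=1)
    d1 = ((test - c1) ** 2).sum(axis=1)
    pred = (d1 < d0).astype(float)
    return (pred == truth).mean()

for n in (5, 20, 80):
    print(n, nearest_centroid_accuracy(n))
```

With only a handful of training samples per class, the noise in the estimated centroids across the 980 uninformative genes swamps the signal; accuracy rises steeply as the training set grows, which is the trade-off the paper's sample size calculations formalize.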
NASA Astrophysics Data System (ADS)
Kumar, S.; Aggarwal, S. G.; Fu, P. Q.; Kang, M.; Sarangi, B.; Sinha, D.; Kotnala, R. K.
2017-06-01
During March 20-22, 2012, Delhi experienced a massive dust-storm which originated in the Middle East. Size-segregated sampling of these dust aerosols was performed using a nine-stage Andersen sampler; 5 sets of samples were collected, including before dust-storm (BDS), dust-storm day 1 to 3 (DS1 to DS3) and after dust-storm (ADS). Sugars (mono- and disaccharides, sugar-alcohols and anhydro-sugars) were determined using the GC-MS technique. It was observed that at the onset of the dust-storm, the total suspended particulate matter (TSPM, sum of all stages) concentration in the DS1 sample increased by > 2.5 fold compared to that of the BDS samples. Interestingly, the fine particulate matter (sum of stages with cutoff size < 2.1 μm) loading in DS1 also increased by > 2.5 fold compared to that of the BDS samples. Sugars analyzed in DS1 coarse mode (sum of stages with cutoff size > 2.1 μm) samples showed a considerable increase (1.7-2.8 fold) compared to the other samples. It was further observed that mono-saccharide, disaccharide and sugar-alcohol concentrations were enhanced in giant (> 9.0 μm) particles in DS1 samples as compared to other samples. On the other hand, anhydro-sugars comprised 13-27% of sugars in coarse mode particles and were mostly found in fine mode, constituting 66-85% of sugars in all the sample types. Trehalose showed an enhanced (2-4 fold) concentration in DS1 aerosol samples in both coarse (62.80 ng/m3) and fine (8.57 ng/m3) mode. This increase in trehalose content in both coarse and fine mode suggests an origin in the transported desert dust and supports its candidature as an organic tracer for desert dust entrainments. Further, levoglucosan to mannosan (L/M) ratios, which have been used to predict the type of biomass burning influence on aerosols, are found to be size dependent in these samples. These ratios are higher for fine mode particles, and hence should be used with caution when interpreting sources using this tool.
Photo series for quantifying natural forest residues: southern Cascades, northern Sierra Nevada
Kenneth S. Blonski; John L. Schramel
1981-01-01
A total of 56 photographs shows different levels of natural fuel loadings for selected size classes in seven forest types of the southern Cascade and northern Sierra-Nevada ranges. Data provided with each photo include size, weight, volumes, residue depths, and percent of ground coverage. Stand information includes sizes, weights, and volumes of the trees sampled for...
ERIC Educational Resources Information Center
Turgut, Sedat; Temur, Özlem Dogan
2017-01-01
In this research, the effects of using game in mathematics teaching process on academic achievement in Turkey were examined by metaanalysis method. For this purpose, the average effect size value and the average effect size values of the moderator variables (education level, the field of education, game type, implementation period and sample size)…
Synthesis and characterization of silicon nanorod on n-type porous silicon.
Behzad, Kasra; Mat Yunus, Wan Mahmood; Bahrami, Afarin; Kharazmi, Alireza; Soltani, Nayereh
2016-03-20
This work reports a new method for growing semiconductor nanorods on a porous silicon substrate. After preparation of n-type porous silicon samples, a thin layer of gold was deposited on them. Gold deposited samples were annealed at different temperatures. The structural, thermal, and optical properties of the samples were studied using a field emission scanning electron microscope (FESEM), photoacoustic spectroscopy, and photoluminescence spectroscopy, respectively. FESEM analysis revealed that silicon nanorods of different sizes grew on the annealed samples. Thermal behavior of the samples was studied using photoacoustic spectroscopy. Photoluminescence spectroscopy showed that the emission peaks were degraded by gold deposition and attenuated for all samples by annealing.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
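The key finding about the one-sample variance estimator can be illustrated numerically: when treatment labels are hidden, the blinded variance absorbs the between-group difference and therefore overestimates the within-group variance, which in turn inflates re-estimated sample sizes. A minimal two-arm sketch, in which the effect size and variance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def variance_estimates(n_per_arm=100, effect=1.0, sigma=2.0, n_sim=2000):
    """Average blinded (labels hidden) vs pooled within-group variance."""
    blinded, pooled = [], []
    for _ in range(n_sim):
        a = rng.normal(0.0, sigma, n_per_arm)          # placebo arm
        b = rng.normal(effect, sigma, n_per_arm)       # treatment arm
        blinded.append(np.var(np.r_[a, b], ddof=1))    # one-sample estimator
        pooled.append((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return np.mean(blinded), np.mean(pooled)

blinded, pooled = variance_estimates()
# E[blinded] ~ sigma^2 + effect^2/4 = 4.25, while E[pooled] ~ sigma^2 = 4
print(blinded, pooled)
```

Because the re-estimated sample size scales with the variance estimate, the upward bias of the blinded estimator translates directly into the overpowered trials reported above.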
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
NASA Astrophysics Data System (ADS)
Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.
2016-12-01
The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. 
Results will be presented and atmospheric implications discussed.
Measuring Endocrine-active Chemicals at ng/L Concentrations in Water
Analytical chemistry challenges for supporting aquatic toxicity research and risk assessment are many: need for low detection limits, complex sample matrices, small sample size, and equipment limitations to name a few. Certain types of potent endocrine disrupting chemicals (EDCs)...
7 CFR 42.102 - Definitions, general.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Definitions § 42.102 Definitions, general. For the... plan consists of first and total sample sizes with associated acceptance and rejection criteria. The... collection of filled food containers of the same size, type, and style. The term shall mean “inspection lot...
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-04-25
To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distributions of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events.
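The rates quoted above are simple event-per-exposure calculations. Below is a sketch of the computation with an exact Poisson confidence interval; the event count and patient days are hypothetical values, chosen only so that the rate lands near the reported 39.3/1000 patient days, not the study's data.

```python
from scipy.stats import chi2

def rate_per_1000(events, patient_days, conf=0.95):
    """Adverse-event rate per 1000 patient days with an exact Poisson CI
    (Garwood interval via the chi-square distribution)."""
    a = 1 - conf
    lo = chi2.ppf(a / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * (events + 1)) / 2
    scale = 1000.0 / patient_days
    return events * scale, lo * scale, hi * scale

# hypothetical counts, chosen only so the rate lands near the reported value
rate, lo, hi = rate_per_1000(events=110, patient_days=2800)
print(f"{rate:.1f} ({lo:.1f} to {hi:.1f}) per 1000 patient days")
```

The same function applied to the small-sample counts would show a wider interval, which is the statistical reason a larger record sample detects a more precise (and here, higher) rate.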
NASA Astrophysics Data System (ADS)
Fattah-alhosseini, Arash; Ansari, Ali Reza; Mazaheri, Yousef; Karimi, Mohsen
2017-02-01
In this study, the electrochemical behavior of commercial pure titanium with both coarse-grained (annealed sample with an average grain size of about 45 µm) and nano-grained microstructure was compared by potentiodynamic polarization, electrochemical impedance spectroscopy (EIS), and Mott-Schottky analysis. Nano-grained Ti, with a typical grain size of about 90 nm, was successfully produced by a six-cycle accumulative roll-bonding process at room temperature. Potentiodynamic polarization plots and impedance measurements revealed that, as a result of grain refinement, the passive behavior of the nano-grained sample was improved compared to that of annealed pure Ti in H2SO4 solutions. Mott-Schottky analysis indicated that the passive films behaved as n-type semiconductors in H2SO4 solutions and that grain refinement did not change the semiconductor type of the passive films. Also, Mott-Schottky analysis showed that the donor densities decreased as the grain size of the samples was reduced. Finally, all electrochemical tests showed that the electrochemical behavior of the nano-grained sample was improved compared to that of annealed pure Ti, mainly due to the formation of a thicker and less defective oxide film.
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. 
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
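The increase of prediction accuracy with sample size reported above can be reproduced qualitatively with a small sketch. Ridge regression stands in for the algorithms compared in the study, and the feature count, noise level, and penalty are illustrative assumptions, not the HCP analysis settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def ridge_r(n_train, n_features=200, n_test=300, lam=10.0):
    """Test-set correlation between predicted and true scores for ridge
    regression, as a function of training-set size."""
    w = rng.normal(size=n_features) / np.sqrt(n_features)  # true weights

    def make(n):
        X = rng.normal(size=(n, n_features))
        return X, X @ w + rng.normal(size=n)  # signal-to-noise ratio ~1

    Xtr, ytr = make(n_train)
    Xte, yte = make(n_test)
    # closed-form ridge solution: (X'X + lam*I)^-1 X'y
    beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_features), Xtr.T @ ytr)
    return np.corrcoef(Xte @ beta, yte)[0, 1]

for n in (50, 200, 800):
    print(n, round(ridge_r(n), 2))
```

With fewer training samples than features, the fit is dominated by the penalty and accuracy is low; it climbs steeply and then saturates as the training set grows, mirroring the exponential-then-plateau pattern described in the abstract.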
AEROSOL SAMPLING AND ANALYSIS, PHOENIX, ARIZONA
An atmospheric sampling program was carried out in the greater Phoenix, Arizona metropolitan area in November, 1975. Objectives of the study were to measure aerosol mass flux through Phoenix and to characterize the aerosol according to particle type and size. The ultimate goal of...
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
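The selection step of a drop-the-losers design drives its operating characteristics. A minimal sketch of a two-stage version under the global null, analysed naively without any selection adjustment, shows why adjusted critical values (e.g. the Dunnett-type procedures mentioned above) are needed; the arm count, stage sizes, and known unit variance are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def drop_the_losers(k=3, n1=50, n2=50, n_sim=4000, crit=1.96):
    """Two-stage drop-the-losers trial under the global null, analysed
    naively (the final z-test ignores the stage-1 selection)."""
    rejections = 0
    for _ in range(n_sim):
        stage1 = rng.normal(size=(k, n1))           # k experimental arms, stage 1
        ctrl1 = rng.normal(size=n1)                 # control arm, stage 1
        best = int(np.argmax(stage1.mean(axis=1)))  # drop the losers
        treat = np.r_[stage1[best], rng.normal(size=n2)]  # winner continues
        ctrl = np.r_[ctrl1, rng.normal(size=n2)]
        z = (treat.mean() - ctrl.mean()) / np.sqrt(1 / treat.size + 1 / ctrl.size)
        rejections += z > crit
    return rejections / n_sim

rate = drop_the_losers()
print(rate)  # typically well above the nominal one-sided 0.025
```

Because the best stage-1 arm is selected, its mean is biased upward, so the naive z-test rejects too often; the fixed total sample size, the design's main practical advantage over group-sequential alternatives, is visible in the constant `k*n1 + n1 + 2*n2` observations per simulated trial.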
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeway and ramps in HSM addendum. However, since these functions or models are fitted and validated using data from a few selected number of states, they are required to be calibrated to the local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Significant Effect of Pore Sizes on Energy Storage in Nanoporous Carbon Supercapacitors.
Young, Christine; Lin, Jianjian; Wang, Jie; Ding, Bing; Zhang, Xiaogang; Alshehri, Saad M; Ahamad, Tansir; Salunkhe, Rahul R; Hossain, Shahriar A; Khan, Junayet Hossain; Ide, Yusuke; Kim, Jeonghun; Henzie, Joel; Wu, Kevin C-W; Kobayashi, Naoya; Yamauchi, Yusuke
2018-04-20
Mesoporous carbon can be synthesized with good control of surface area, pore-size distribution, and porous architecture. Although the relationship between porosity and supercapacitor performance is well known, there are no thorough reports that compare the performance of numerous types of carbon samples side by side. In this manuscript, we describe the performance of 13 porous carbon samples in supercapacitor devices. We suggest that there is a "critical pore size" at which guest molecules can pass through the pores effectively. In this context, the specific surface area (SSA) and pore-size distribution (PSD) are used to show the point at which the pore size crosses the threshold of critical size. These measurements provide a guide for the development of new kinds of carbon materials for supercapacitor devices. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantitative Reflectance Spectra of Solid Powders as a Function of Particle Size
Myers, Tanya L.; Brauer, Carolyn S.; Su, Yin-Fong; ...
2015-05-19
We have recently developed vetted methods for obtaining quantitative infrared directional-hemispherical reflectance spectra using a commercial integrating sphere. In this paper, the effects of particle size on the spectral properties are analyzed for several samples such as ammonium sulfate, calcium carbonate, and sodium sulfate as well as one organic compound, lactose. We prepared multiple size fractions for each sample and confirmed the mean sizes using optical microscopy. Most species displayed a wide range of spectral behavior depending on the mean particle size. General trends of reflectance vs. particle size are observed, such as increased albedo for smaller particles: for most wavelengths, the reflectivity drops with increased size, sometimes displaying a factor of 4 or more drop in reflectivity along with a loss of spectral contrast. In the longwave infrared, several species with symmetric anions or cations exhibited reststrahlen features whose amplitude was nearly invariant with particle size, at least for intermediate- and large-sized sample fractions; that is, > ~150 microns. Trends of other types of bands (Christiansen minima, transparency features) are also investigated, as well as quantitative analysis of the observed relationship between reflectance vs. particle diameter.
Self-objectification and disordered eating: A meta-analysis.
Schaefer, Lauren M; Thompson, J Kevin
2018-06-01
Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research. © 2018 Wiley Periodicals, Inc.
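Pooled correlational effects like the r = .39 reported above are conventionally computed on Fisher's z scale. As a minimal illustration only (a fixed-effect sketch with invented study values, not the authors' actual moderated meta-analytic procedure):

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation; Var(z) is approximately 1/(n - 3)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pool_correlations(studies):
    """Fixed-effect pooling of correlations on the z scale.

    `studies` is a list of (r, n) tuples; each study is weighted by the
    inverse of its sampling variance, i.e. n - 3."""
    num = den = 0.0
    for r, n in studies:
        w = n - 3
        num += w * fisher_z(r)
        den += w
    # Back-transform the weighted mean z to the r metric
    return math.tanh(num / den)

# Hypothetical effect sizes (r) and sample sizes (n) from three studies:
studies = [(0.45, 120), (0.30, 200), (0.42, 85)]
print(round(pool_correlations(studies), 3))
```

The back-transform via tanh is exact because tanh is the inverse of the r-to-z map.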
Sample size determination for GEE analyses of stepped wedge cluster randomized trials.
Li, Fan; Turner, Elizabeth L; Preisser, John S
2018-06-19
In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
Early-type galaxies: mass-size relation at z ˜ 1.3 for different environments
NASA Astrophysics Data System (ADS)
Raichoor, A.; Mei, S.; Stanford, S. A.; Holden, B. P.; Nakata, F.; Rosati, P.; Shankar, F.; Tanaka, M.; Ford, H.; Huertas-Company, M.; Illingworth, G.; Kodama, T.; Postman, M.; Rettura, A.; Blakeslee, J. P.; Demarco, R.; Jee, M. J.; White, R. L.
2011-12-01
We combine multi-wavelength data of the Lynx superstructure and GOODS/CDF-S to build a sample of 75 visually selected early-type galaxies (ETGs), spanning different environments (cluster/group/field) at z ˜ 1.3. By estimating their mass, age (via SED fitting, with careful attention to the stellar population model used) and size, we are able to probe the dependence of the mass-size relation on environment. We find that, for ETGs with 10^{10} < M / M_⊙ < 10^{11.5}, (1) the mass-size relation in the field did not evolve overall from z ˜ 1.3 to the present; (2) the mass-size relation in cluster/group environments at z ˜ 1.3 lies at smaller sizes than the local mass-size relation (R_{e,z ˜ 1.3}/R_{e,z = 0} ˜ 0.6-0.8).
Song, Rui; Kosorok, Michael R.; Cai, Jianwen
2009-01-01
Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107
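The Schoenfeld (1983) formula that the proposed sample size reduces to in the single-event case can be sketched directly. A minimal illustration assuming 1:1 allocation (the hazard ratio, event probability, and function names below are illustrative, not values from the paper):

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Required number of events under Schoenfeld's (1983) formula.

    hr    : hazard ratio to detect (the effect size)
    alloc : proportion of subjects randomized to one arm"""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided type I error
    z_b = z(power)           # power = 1 - beta
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

def sample_size(hr, event_prob, **kw):
    """Total subjects needed given the probability a subject has an event."""
    return math.ceil(schoenfeld_events(hr, **kw) / event_prob)

# Detecting HR = 0.7 with 80% power at two-sided alpha = 0.05,
# when roughly 60% of subjects are expected to experience an event:
print(schoenfeld_events(0.7))   # ~247 events
print(sample_size(0.7, 0.6))
```

Note how the required events depend on the log hazard ratio squared: halving the effect size roughly quadruples the event count, echoing the head note that smaller differences demand larger samples.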
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
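The "random chance" scenario is essentially a coupon-collector process and is easy to simulate. A toy sketch under a simplifying assumption I introduce (every code is observed in every source with the same probability, rather than the paper's sub-population structure):

```python
import random

def sources_to_saturation(n_codes, p_observe, seed=0):
    """Simulate the 'random chance' scenario: draw information sources
    until every code in the population has been observed at least once.

    Each source independently reveals each code with probability
    `p_observe` (a deliberate simplification for illustration)."""
    rng = random.Random(seed)
    seen = set()
    n_sources = 0
    while len(seen) < n_codes:
        n_sources += 1
        for code in range(n_codes):
            if rng.random() < p_observe:
                seen.add(code)
    return n_sources

# Rarer codes (lower observation probability) dominate the required
# sample size, consistent with the paper's finding that the mean
# probability of observing codes matters more than the number of codes:
print(sources_to_saturation(20, 0.30))
print(sources_to_saturation(20, 0.05))
```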
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians. The first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
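The mixture-of-Betas conjugacy the authors exploit can be sketched as follows. The prior parameters (a "pessimistic" and an "optimistic" clinician), the target response rate, and the Monte Carlo check are illustrative assumptions, not values from the paper:

```python
import math
import random

def posterior_mixture(prior, x, n):
    """Update a mixture-of-Betas prior with x responders out of n patients.

    `prior` is a list of (weight, a, b) triples. By conjugacy each Beta
    component stays Beta; the mixture weights are reweighted by each
    component's marginal likelihood of the observed data."""
    def logB(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    comps, logw = [], []
    for w, a, b in prior:
        logw.append(math.log(w) + logB(a + x, b + n - x) - logB(a, b))
        comps.append((a + x, b + n - x))
    m = max(logw)                         # stabilize before exponentiating
    ws = [math.exp(v - m) for v in logw]
    s = sum(ws)
    return [(w / s, a, b) for w, (a, b) in zip(ws, comps)]

def prob_exceeds(mixture, target, n_draws=200_000, seed=1):
    """Monte Carlo estimate of P(true response rate > target)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        u, acc = rng.random(), 0.0
        w, a, b = mixture[-1]             # fallback for rounding at u ~ 1
        for cw, ca, cb in mixture:        # pick a component by its weight
            acc += cw
            if u <= acc:
                a, b = ca, cb
                break
        hits += rng.betavariate(a, b) > target
    return hits / n_draws

# Pessimistic Beta(2, 8) and optimistic Beta(8, 2) priors, equal weight;
# suppose 14 responses are observed in 20 patients, target rate 0.5:
post = posterior_mixture([(0.5, 2, 8), (0.5, 8, 2)], x=14, n=20)
print(round(prob_exceeds(post, 0.5), 3))
```

A sample size design would repeat this calculation over candidate n until the posterior probability of exceeding the target reaches the desired level.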
Zhang, Zhen; Wang, Bao-Jie; Guan, Hong-Yu; Pang, Hao; Xuan, Jin-Feng
2009-11-01
Reducing amplicon sizes has become a major strategy for analyzing degraded DNA typical of forensic samples. However, amplicon sizes in current mini-short tandem repeat-polymerase chain reaction (PCR) and mini-sequencing assays are still not suitable for analysis of severely degraded DNA. In this study, we present a multiplex typing method coupling the ligase detection reaction with PCR, which can be used to identify single nucleotide polymorphisms and small-scale insertion/deletions in a sample of severely fragmented DNA. This method adopts thermostable ligation for allele discrimination and subsequent PCR for signal enhancement. Here, four polymorphic loci were used to assess the ability of this technique to discriminate alleles in an artificially degraded sample of DNA with fragment sizes <100 bp. Our results showed clear allelic discrimination of single or multiple loci, suggesting that this method might aid in the analysis of extremely degraded samples in which allelic drop-out of larger fragments is observed.
2011-01-01
To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the numbers that will be invested in the main study because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
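The claim that subjects cannot simply be added after an interim look without inflating type I error is easy to demonstrate by simulation. A minimal sketch, not the authors' sequential stopping rule: a two-sided z-test on unit-variance normal data under a true null, with interim looks chosen here for illustration:

```python
import math
import random

def peeking_type1_rate(looks, n_sims=4000, seed=42):
    """Under a true null (both groups N(0,1)), test at each interim
    per-group sample size in `looks` and stop at the first nominally
    significant result. Returns the empirical probability of at least
    one false-positive 'discovery' across the whole sequence."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value
    false_pos = 0
    for _ in range(n_sims):
        g1, g2 = [], []
        for n in looks:
            while len(g1) < n:           # add subjects up to this look
                g1.append(rng.gauss(0, 1))
                g2.append(rng.gauss(0, 1))
            diff = sum(g1) / n - sum(g2) / n
            if abs(diff) / math.sqrt(2 / n) > z_crit:
                false_pos += 1
                break
    return false_pos / n_sims

# One fixed look keeps the error near the nominal 5%; five looks at the
# same data stream inflate it well beyond that:
print(peeking_type1_rate([50]))
print(peeking_type1_rate([10, 20, 30, 40, 50]))
```

Proper sequential stopping rules avoid this inflation by spending the 5% error budget across the looks rather than testing each one at the full level.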
Kim, Gibaek; Kwak, Jihyun; Kim, Ki-Rak; Lee, Heesung; Kim, Kyoung-Woong; Yang, Hyeon; Park, Kihong
2013-12-15
Laser-induced breakdown spectroscopy (LIBS) coupled with a chemometric method was applied to rapidly discriminate between soils contaminated with heavy metals or oils and clean soils. The effects of the water contents and grain sizes of soil samples on LIBS emissions were also investigated. The LIBS emission lines decreased by 59-75% when the water content increased from 1.2% to 7.8%, and soil samples with a grain size of 75 μm displayed higher LIBS emission lines with lower relative standard deviations than those with a 2 mm grain size. The water content was found to have a more pronounced effect on the LIBS emission lines than the grain size. Pelletizing and sieving were conducted for all samples collected from abandoned mining areas and a military camp so that they had similar water contents and grain sizes before being analyzed by LIBS with the chemometric analysis. The data show that the three types of soil samples were clearly discerned by using the first three principal components from the spectral data of the soil samples. A blind test was conducted with a 100% correct classification rate for soil samples contaminated with heavy metals and oil residues. Copyright © 2013 Elsevier B.V. All rights reserved.
Zhelev, Zhivko; Popgeorgiev, Georgi; Ivanov, Ivan; Boyadzhiev, Peter
2017-07-01
The article presents the basic erythrocyte-metric parameters: cell length (EL) and width (EW), EL/EW, erythrocyte size (ES), nucleus length (NL) and width (NW), NL/NW, nucleus size (NS) and nucleocytoplasmic ratio (NS/ES) in the wild populations of marsh frogs Pelophylax ridibundus from five water bodies in Southern Bulgaria (two rivers and three reservoirs) with different degrees and types of anthropogenic pollution (less disrupted water basins, domestic sewage pollution and heavy metal pollution). The changes in erythrocyte-metric parameters depend on concentrations and types of toxicant and, to a lesser extent, on the type of water basin. We found that when P. ridibundus populations live in conditions of domestic sewage pollution, EL, EW and ES increase in comparison with the control samples, and the cells take on an elongated elliptical shape. Simultaneously, NL, NW and NS did not undergo any significant changes when compared with the control samples. The nuclei had an elliptical shape. In the populations from the water basins with heavy metal pollution, EL, EW, ES, NL, NW and NS decreased. The cells and nuclei had a circular shape. NS/ES decreased when compared with the control sample, regardless of the type of toxicants.
Moser, Barry Kurt; Halabi, Susan
2013-01-01
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
Predicting stellar angular diameters from V, IC, H and K photometry
NASA Astrophysics Data System (ADS)
Adams, Arthur D.; Boyajian, Tabetha S.; von Braun, Kaspar
2018-01-01
Determining the physical properties of microlensing events depends on having accurate angular sizes of the source star. Using long baseline optical interferometry, we are able to measure the angular sizes of nearby stars with uncertainties ≤2 per cent. We present empirically derived relations of angular diameters which are calibrated using both a sample of dwarfs/subgiants and a sample of giant stars. These relations are functions of five colour indices in the visible and near-infrared, and have uncertainties of 1.8-6.5 per cent depending on the colour used. We find that a combined sample of both main-sequence and evolved stars of A-K spectral types is well fitted by a single relation for each colour considered. We find that in the colours considered, metallicity does not play a statistically significant role in predicting stellar size, leading to a means of predicting observed sizes of stars from colour alone.
High-concentration zeta potential measurements using light-scattering techniques
Kaszuba, Michael; Corbett, Jason; Watson, Fraser Mcneil; Jones, Andrew
2010-01-01
Zeta potential is the key parameter that controls electrostatic interactions in particle dispersions. Laser Doppler electrophoresis is an accepted method for the measurement of particle electrophoretic mobility and hence zeta potential of dispersions of colloidal size materials. Traditionally, samples measured by this technique have to be optically transparent. Therefore, depending upon the size and optical properties of the particles, many samples will be too concentrated and will require dilution. The ability to measure samples at or close to their neat concentration would be desirable as it would minimize any changes in the zeta potential of the sample owing to dilution. However, the ability to measure turbid samples using light-scattering techniques presents a number of challenges. This paper discusses electrophoretic mobility measurements made on turbid samples at high concentration using a novel cell with reduced path length. Results are presented on two different sample types, titanium dioxide and a polyurethane dispersion, as a function of sample concentration. For both of the sample types studied, the electrophoretic mobility results show a gradual decrease as the sample concentration increases and the possible reasons for these observations are discussed. Further, a comparison of the data against theoretical models is presented and discussed. Conclusions and recommendations are made from the zeta potential values obtained at high concentrations. PMID:20732896
Investigation on the structural characterization of pulsed p-type porous silicon
NASA Astrophysics Data System (ADS)
Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.
2017-08-01
P-type porous silicon (PS) was successfully formed using electrochemical pulse etching (PC) and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF)-based solution at a current density of J = 10 mA/cm2 for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), comprising a 10 ms on time (Ton) and a 4 ms pause time (Toff). FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produced more uniform circular structures, with an estimated average pore size of 42.14 nm, compared to the DC porous (PDC) sample, with an estimated average size of 16.37 nm. The EDX spectra for both samples showed high Si content with a minimal presence of oxide.
Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?
ERIC Educational Resources Information Center
Gregg, Brent A.; Sawyer, Jean
2015-01-01
The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…
3D-HST + CANDELS: the Evolution of the Galaxy Size-mass Distribution Since Z=3
NASA Technical Reports Server (NTRS)
VanDerWel, A.; Franx, M.; vanDokkum, P. G.; Skelton, R. E.; Momcheva, I. G.; Whitaker, K. E.; Brammer, G. B.; Bell, E. F.; Rix, H.-W.; Wuyts, S.;
2014-01-01
Spectroscopic and photometric redshifts, stellar mass estimates, and rest-frame colors from the 3D-HST survey are combined with structural parameter measurements from CANDELS imaging to determine the galaxy size-mass distribution over the redshift (z) range 0 < z < 3. Separating early- and late-type galaxies on the basis of star-formation activity, we confirm that early-type galaxies are on average smaller than late-type galaxies at all redshifts, and find a significantly different rate of average size evolution at fixed galaxy mass, with fast evolution for the early-type population, effective radius R_eff ∝ (1 + z)^(-1.48), and moderate evolution for the late-type population, R_eff ∝ (1 + z)^(-0.75). The large sample size and dynamic range in both galaxy mass and redshift, in combination with the high fidelity of our measurements due to the extensive use of spectroscopic data, not only fortify previous results, but also enable us to probe beyond simple average galaxy size measurements. At all redshifts the slope of the size-mass relation is shallow, R_eff ∝ M_*^(0.22), for late-type galaxies with stellar mass > 3 x 10^9 solar masses, and steep, R_eff ∝ M_*^(0.75), for early-type galaxies with stellar mass > 2 x 10^10 solar masses. The intrinsic scatter is ≲0.2 dex for all galaxy types and redshifts. For late-type galaxies, the logarithmic size distribution is not symmetric, but skewed toward small sizes: at all redshifts and masses a tail of small late-type galaxies exists that overlaps in size with the early-type galaxy population. The number density of massive (~10^11 solar masses), compact (effective radius < 2 kiloparsecs) early-type galaxies increases from z = 3 to z = 1.5-2 and then strongly decreases at later cosmic times.
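Size-mass slopes of this kind are obtained by fitting a power law in log-log space. A self-contained sketch on synthetic data (the sample bounds, scatter, and normalization below are invented for illustration, not survey values):

```python
import math
import random

def fit_power_law(masses, radii):
    """Ordinary least squares in log-log space for R = A * M^beta.
    Returns (log10 A, beta); beta is the size-mass slope."""
    xs = [math.log10(m) for m in masses]
    ys = [math.log10(r) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return my - beta * mx, beta

# Synthetic 'early-type'-like sample: true slope 0.75, 0.2 dex scatter
rng = random.Random(3)
masses = [10 ** rng.uniform(10.3, 11.5) for _ in range(300)]
radii = [10 ** (-7.0 + 0.75 * math.log10(m) + rng.gauss(0, 0.2))
         for m in masses]
intercept, slope = fit_power_law(masses, radii)
print(round(slope, 2))  # recovers a slope close to the true 0.75
```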
NASA Astrophysics Data System (ADS)
Lintz, L.; Werts, S. P.
2014-12-01
The Ninety-Six National Historic Site is located in Greenwood County, SC. Recent geologic mapping of this area has revealed differences in soil properties over short distances within the park. We studied the chemistry of the clay minerals found within the soils to see if there was a correlation between the amount of soil organic carbon and particle size in individual soil horizons. Three vegetation areas, an old field, a deciduous forest, and a pine forest, were selected to examine what influence vegetation type had on clay chemistry and carbon levels. Four samples containing the O, A, and B horizons were taken from each location; for each soil sample we measured carbon and nitrogen content with an elemental analyzer, particle size with a laser diffraction particle size analyzer, and clay mineralogy by powder X-ray diffraction. Samples from the old field and pine forest showed an overall negative correlation between carbon content and clay percentage, which runs against the normal trend for Southern Piedmont Ultisols. The deciduous forest samples showed no correlation between carbon content and clay percentage. Taken together, all three locations show the same negative relationship, and when separated by vegetation type and by A and B horizons the relationships remain anomalous, with several showing negative correlations and others none at all (R² = 0.0074 to 0.5627). Using powder XRD, we analyzed clay samples from each A and B horizon for clay mineralogy. All three vegetation areas contained the same minerals, quartz, kaolinite, and Fe oxides; therefore, clay chemistry does not explain the anomalous negative correlation between average carbon content and clay percentage. Given that all three locations share the same climate, topography, and parent material of metagranite, it is reasonable to assume these results reflect environmental and biological influences rather than clay type.
Denagamage, Thomas N; Patterson, Paul; Wallner-Pendleton, Eva; Trampel, Darrell; Shariat, Nikki; Dudley, Edward G; Jayarao, Bhushan M; Kariyawasam, Subhashinie
2016-11-01
The Pennsylvania Egg Quality Assurance Program (EQAP) provided the framework for Salmonella Enteritidis (SE) control programs, including the Food and Drug Administration (FDA) mandated Final Egg Rule, for commercial layer facilities throughout the United States. Although flocks with ≥3000 birds must comply with the FDA Final Egg Rule, smaller flocks are exempted from the rule. As a result, eggs produced by small layer flocks may pose a greater public health risk than those from larger flocks. It is also unknown if the EQAPs developed with large flocks in mind are suitable for small- and medium-sized flocks. Therefore, a study was performed to evaluate the effectiveness of best management practices included in EQAPs in reducing SE contamination of small- and medium-sized flocks by longitudinal monitoring of their environment and eggs. A total of 59 medium-sized (3000 to 50,000 birds) and small-sized (<3000 birds) flocks from two major layer production states of the United States were enrolled and monitored for SE by culturing different types of environmental samples and shell eggs for two consecutive flock cycles. Isolated SE was characterized by phage typing, pulsed-field gel electrophoresis (PFGE), and clustered regularly interspaced short palindromic repeats-multi-virulence-locus sequence typing (CRISPR-MVLST). Fifty-four Salmonella isolates belonging to 17 serovars, 22 of which were SE, were isolated from multiple sample types. Typing revealed that SE isolates belonged to three phage types (PTs), three PFGE fingerprint patterns, and three CRISPR-MVLST SE Sequence Types (ESTs). The PT8 and JEGX01.0004 PFGE pattern, the most predominant SE types associated with foodborne illness in the United States, were represented by the majority (91%) of SE isolates. Of the three ESTs observed, 85% of SE isolates were typed as EST4.
The proportion of SE-positive hen house environments during flock cycle 2 was significantly lower than during flock cycle 1, demonstrating that current EQAP practices were effective in reducing SE contamination of medium and small layer flocks.
Duran, Tinka; Stimpson, Jim P.; Smith, Corey
2013-01-01
Introduction Population-based data are essential for quantifying the problems and measuring the progress made by comprehensive cancer control programs. However, cancer information specific to the American Indian/Alaska Native (AI/AN) population is not readily available. We identified major population-based surveys conducted in the United States that contain questions related to cancer, documented the AI/AN sample size in these surveys, and identified gaps in the types of cancer-related information these surveys collect. Methods We conducted an Internet query of US Department of Health and Human Services agency websites and a Medline search to identify population-based surveys conducted in the United States from 1960 through 2010 that contained information about cancer. We used a data extraction form to collect information about the purpose, sample size, data collection methods, and type of information covered in the surveys. Results Seventeen survey sources met the inclusion criteria. Information on access to and use of cancer treatment, follow-up care, and barriers to receiving timely and quality care was not consistently collected. Estimates specific to the AI/AN population were often lacking because of inadequate AI/AN sample size. For example, 9 national surveys reviewed reported an AI/AN sample size smaller than 500, and 10 had an AI/AN sample percentage less than 1.5%. Conclusion Continued efforts are needed to increase the overall number of AI/AN participants in these surveys, improve the quality of information on racial/ethnic background, and collect more information on treatment and survivorship. PMID:23517582
Méndez-Rebolledo, Guillermo; Gatica-Rojas, Valeska; Torres-Cueco, Rafael; Albornoz-Verdugo, María; Guzmán-Muñoz, Eduardo
2017-01-01
Graded motor imagery (GMI) and mirror therapy (MT) are thought to improve pain in patients with complex regional pain syndrome (CRPS) types 1 and 2. However, the evidence is limited, and analyses have not been conducted independently for each CRPS type. The purpose of this review was to analyze the effects of GMI and MT on pain in independent groups of patients with CRPS types 1 and 2. Searches for literature published between 1990 and 2016 were conducted in databases. Randomized controlled trials that compared GMI or MT with other treatments for CRPS types 1 and 2 were included. Six articles met the inclusion criteria and were classified as moderate to high quality. The total sample was composed of 171 participants with CRPS type 1. Three studies presented GMI with 3 components and three studies used only MT. The studies were heterogeneous in terms of sample size and the disorders that triggered CRPS type 1. No trials included participants with CRPS type 2. GMI and MT can improve pain in patients with CRPS type 1; however, there is not sufficient evidence to recommend these therapies over other treatments, given the small size and heterogeneity of the studied population.
Size-selective separation of submicron particles in suspensions with ultrasonic atomization.
Nii, Susumu; Oka, Naoyoshi
2014-11-01
Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized to separate particles of a specific size. With the help of a fog of fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from the suspensions. Performance of the separation was characterized by analyzing the size and the concentration of collected particles with a high-resolution method. Irradiation of sample suspensions with 2.4 MHz ultrasound allowed the separation of particles of specific sizes from 90 to 320 nm, regardless of the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles and achieved a remarkable increase in the number of collected particles. Degassing the sample suspension eliminated the separation performance; dissolved air in the suspension plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.
Jian, Yu-Tao; Yang, Yue; Tian, Tian; Stanford, Clark; Zhang, Xin-Ping; Zhao, Ke
2015-01-01
Five types of porous nickel-titanium (NiTi) alloy samples with different porosities and pore sizes were fabricated. Based on their compressive and fracture strengths, three groups of porous NiTi alloy samples underwent further cytocompatibility experiments. The porous NiTi alloys exhibited a low Young's modulus (2.0 GPa to 0.8 GPa). Both compressive strength (108.8 MPa to 56.2 MPa) and fracture strength (64.6 MPa to 41.6 MPa) decreased gradually with increasing mean pore size (MPS). Cells grew and spread well on all porous NiTi alloy samples. Cells attached more strongly on the control group and blank group than on all porous NiTi alloy samples (p < 0.05). Cell adhesion on porous NiTi alloys was correlated negatively with MPS (277.2 μm to 566.5 μm; p < 0.05). More cells proliferated on the control group and blank group than on all porous NiTi alloy samples (p < 0.05). Cellular ALP activity on all porous NiTi alloy samples was higher than on the control group and blank group (p < 0.05). Porous NiTi alloys with optimized pore size could be a potential orthopedic material. PMID:26047515
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and three variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the four variations (called Methods A, B, C, and D) were assessed by Potvin et al, and Methods B and C were recommended. However, none of these four variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results.
This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
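For orientation, the dependence of the TOST sample size on within-subject variability, which is what motivates reestimating it at the interim, can be sketched with the standard large-sample approximation. This is generic background, not the group-sequential method the paper proposes; the function name and the normal-approximation formula are illustrative assumptions (real planning uses the t-distribution and iterates, so tabulated values run slightly higher).

```python
from math import ceil, log, sqrt
from statistics import NormalDist

def tost_total_n(cv, gmr, alpha=0.05, power=0.80, lo=0.80, hi=1.25):
    """Approximate total sample size for a 2x2 crossover ABE trial.

    cv  : within-subject coefficient of variation (e.g. 0.25 = 25%)
    gmr : assumed true geometric mean ratio (test/reference), != 1
    Large-sample normal approximation to the TOST procedure.
    """
    z = NormalDist().inv_cdf
    sigma_w = sqrt(log(1.0 + cv ** 2))   # within-subject SD on the log scale
    margin = log(hi) - abs(log(gmr))     # distance to the nearer ABE limit
    if margin <= 0:
        raise ValueError("true ratio lies on or outside the ABE limits")
    n = 2.0 * sigma_w ** 2 * (z(1 - alpha) + z(power)) ** 2 / margin ** 2
    n = ceil(n)
    return n + (n % 2)                   # round up to an even total

# e.g. CV = 25%, true geometric mean ratio 0.95, 80% power
print(tost_total_n(0.25, 0.95))          # total subjects, both sequences
```

The quadratic dependence on sigma_w is why a poor initial variability guess can leave a fixed design badly under- or over-powered.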
Dumas-Mallet, Estelle; Button, Katherine; Boraud, Thomas; Munafo, Marcus; Gonon, François
2016-01-01
Context There are growing concerns about effect size inflation and replication validity of association studies, but few observational investigations have explored the extent of these problems. Objective Using meta-analyses to measure the reliability of initial studies and explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and “others”). Methods We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, largest studies (i.e., with the largest sample size) and the corresponding meta-analyses. Initial studies were considered as replicated if they were in nominal agreement with meta-analyses and if their effect size inflation was below 100%. Results Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas effect sizes reported by largest studies and meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that replication rate decreases with sample size and “true” effect size. We observed no evidence of association between replication rate and publication year or Impact Factor. 
Conclusion The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains. PMID:27336301
Sampling design for the 1980 commercial and multifamily residential building survey
NASA Astrophysics Data System (ADS)
Bowen, W. M.; Olsen, A. R.; Nieves, A. L.
1981-06-01
The extent to which new building design practices comply with the proposed 1980 energy budget levels for commercial and multifamily residential building designs (DEB-80) can be assessed by: (1) identifying a small number of building types which account for the majority of commercial buildings constructed in the U.S.A.; (2) conducting a separate survey for each building type; and (3) including only buildings designed during 1980. For each building, the design energy consumption (DEC-80) will be determined by the DOE2.1 computer program and the quantity X = (DEC-80 − DEB-80) computed. These X quantities can then be used to compute sample statistics. Inferences about nationwide compliance with DEB-80 may then be made for each building type. Details of the population, sampling frame, stratification, sample size, and implementation of the sampling plan are provided.
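The inference step sketched in the abstract, computing X = (DEC-80 − DEB-80) per building and summarizing, might look like the following. The function name, the simple-random-sample normal confidence interval, and the example values are all assumptions for illustration; the actual survey used a stratified design.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def compliance_summary(dec, deb, conf=0.95):
    """Summarize X = DEC-80 - DEB-80 for one building type.

    A negative mean X suggests designs meet the energy budget on
    average. Uses a simple normal-theory CI (illustrative only).
    """
    x = [d - b for d, b in zip(dec, deb)]
    m, s, n = mean(x), stdev(x), len(x)
    half = NormalDist().inv_cdf(0.5 + conf / 2) * s / sqrt(n)
    return m, (m - half, m + half)

# hypothetical design-energy values for a handful of buildings
dec80 = [95.0, 102.0, 88.0, 110.0, 97.0]
deb80 = [100.0] * 5
m, ci = compliance_summary(dec80, deb80)
```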
Item Discrimination and Type I Error in the Detection of Differential Item Functioning
ERIC Educational Resources Information Center
Li, Yanju; Brooks, Gordon P.; Johanson, George A.
2012-01-01
In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
NASA Astrophysics Data System (ADS)
Park, Ki-Chan; Madavali, Babu; Kim, Eun-Bin; Koo, Kyung-Wan; Hong, Soon-Jik
2017-05-01
p-Type Bi2Te3 + 75% Sb2Te3 based thermoelectric materials were fabricated via gas atomization and the hot extrusion process. The gas-atomized powder showed a clean surface with a spherical shape and spanned a wide particle size distribution (average particle size 50 μm). The phases of the fabricated extruded and R-extruded bars were identified using x-ray diffraction. The relative densities of both the extruded and R-extruded samples, measured by the Archimedes principle, were ~98%. The R-extruded bar exhibited a finer grain microstructure than that of the single extrusion process, which was attributed to a recrystallization mechanism during fabrication. The R-extruded sample showed improved Vickers hardness compared to the extruded sample due to its fine grain microstructure. The electrical conductivity improved for the extruded sample, whereas the Seebeck coefficient decreased due to its high carrier concentration. The peak power factor, ~4.26 × 10⁻³ W/mK², was obtained for the single-extrusion sample, which is higher than that of the R-extrusion sample owing to its higher electrical properties.
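The quoted power factor combines the Seebeck coefficient and electrical conductivity as PF = S²σ. As a quick plausibility check of the reported magnitude (the S and σ values below are assumed for illustration, not taken from the paper):

```python
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    """Thermoelectric power factor PF = S^2 * sigma, in W/(m K^2)."""
    return seebeck_v_per_k ** 2 * conductivity_s_per_m

# S ~ 200 uV/K and sigma ~ 1.1e5 S/m (assumed) give PF ~ 4.4e-3 W/(m K^2),
# the order of magnitude reported for the extruded sample
pf = power_factor(200e-6, 1.1e5)
```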
Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.
Hillis, Stephen L; Schartz, Kevin M
2015-02-01
The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.
Radlinski, A.P.; Mastalerz, Maria; Hinde, A.L.; Hainbuchner, M.; Rauch, H.; Baron, M.; Lin, J.S.; Fan, L.; Thiyagarajan, P.
2004-01-01
This paper discusses the applicability of small angle X-ray scattering (SAXS) and small angle neutron scattering (SANS) techniques for determining the porosity, pore size distribution and internal specific surface area in coals. The method is noninvasive, fast, inexpensive and does not require complex sample preparation. It uses coal grains of about 0.8 mm size mounted in standard pellets as used for petrographic studies. Assuming spherical pore geometry, the scattering data are converted into the pore size distribution in the size range 1 nm (10 Å) to 20 μm (200,000 Å) in diameter, accounting for both open and closed pores. FTIR as well as SAXS and SANS data for seven samples of oriented whole coals and corresponding pellets with vitrinite reflectance (Ro) values in the range 0.55% to 5.15% are presented and analyzed. Our results demonstrate that pellets adequately represent the average microstructure of coal samples. The scattering data have been used to calculate the maximum surface area available for methane adsorption. Total porosity as percentage of sample volume is calculated and compared with worldwide trends. By demonstrating the applicability of SAXS and SANS techniques to determine the porosity, pore size distribution and surface area in coals, we provide a new and efficient tool, which can be used for any type of coal sample, from a thin slice to a representative sample of a thick seam. © 2004 Elsevier B.V. All rights reserved.
Determination of hydrogen abundance in selected lunar soils
NASA Technical Reports Server (NTRS)
Bustin, Roberta
1987-01-01
Hydrogen was implanted in lunar soil through solar wind activity. In order to determine the feasibility of utilizing this solar wind hydrogen, it is necessary to know not only hydrogen abundances in bulk soils from a variety of locations but also the distribution of hydrogen within a given soil. Hydrogen distribution in bulk soils, grain size separates, mineral types, and core samples was investigated. Hydrogen was found in all samples studied. The amount varied considerably, depending on soil maturity, mineral types present, grain size distribution, and depth. Hydrogen implantation is definitely a surface phenomenon. However, as constructional particles are formed, previously exposed surfaces become embedded within particles, causing an enrichment of hydrogen in these species. In view of possibly extracting the hydrogen for use on the lunar surface, it is encouraging to know that hydrogen is present to a considerable depth and not only in the upper few millimeters. Based on these preliminary studies, extraction of solar wind hydrogen from lunar soil appears feasible, particularly if some kind of grain size separation is possible.
The Role of Remote Sensing in Assessing Forest Biomass in Appalachian South Carolina
NASA Technical Reports Server (NTRS)
Shain, W.; Nix, L.
1982-01-01
Information is presented on the use of color infrared aerial photographs and ground sampling methods to quantify standing forest biomass in Appalachian South Carolina. Local tree biomass equations are given and subsequent evaluation of stand density and size classes using remote sensing methods is presented. Methods of terrain analysis, environmental hazard rating, and subsequent determination of accessibility of forest biomass are discussed. Computer-based statistical analyses are used to expand individual cover-type specific ground sample data to area-wide cover type inventory figures based on aerial photographic interpretation and area measurement. Forest biomass data are presented for the study area in terms of discriminant size classes, merchantability limits, accessibility (as related to terrain and yield/harvest constraints), and potential environmental impact of harvest.
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve (AUC) of C-peptide following a 2-hour mixed meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
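The reported sample-size reduction follows the general pattern that covariate adjustment deflates the residual variance. A generic sketch of that mechanism, using the normal-approximation two-sample formula with an assumed R² (not the paper's C-peptide model; all names and values are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma, delta, r2=0.0, alpha=0.05, power=0.90):
    """Per-arm n for a two-sample comparison of means.

    ANCOVA adjustment shrinks the residual variance by (1 - R^2),
    which is how baseline predictors such as age at diagnosis and
    baseline C-peptide can cut the target sample size.
    """
    z = NormalDist().inv_cdf
    var = sigma ** 2 * (1.0 - r2)
    return ceil(2.0 * var * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

unadjusted = n_per_arm(sigma=1.0, delta=0.5)
adjusted = n_per_arm(sigma=1.0, delta=0.5, r2=0.5)   # R^2 ~ 0.5 halves n
```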
ERIC Educational Resources Information Center
Ahmad Salfi, Naseer; Saeed, Muhammad
2007-01-01
Purpose: This paper seeks to determine the relationship among school size, school culture and students' achievement at secondary level in Pakistan. Design/methodology/approach: The study was descriptive (survey type). It was conducted on a sample of 90 secondary school head teachers and 540 primary, elementary and high school teachers working in…
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
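The "random chance" scenario can be reproduced in miniature. This toy simulation (all parameter values assumed, not the paper's actual code) illustrates the abstract's main finding that the mean probability of observing codes, more than the number of codes, drives the saturation sample size:

```python
import random

def sample_until_saturation(code_probs, rng):
    """Sample information sources until every code is observed once.

    Each source reveals code k independently with probability
    code_probs[k] -- a toy version of the "random chance" scenario.
    """
    unseen = set(range(len(code_probs)))
    n = 0
    while unseen:
        n += 1
        unseen = {k for k in unseen if rng.random() >= code_probs[k]}
    return n

def mean_saturation_n(code_probs, reps=500, seed=1):
    rng = random.Random(seed)
    return sum(sample_until_saturation(code_probs, rng)
               for _ in range(reps)) / reps

# rarer codes (lower mean probability) push saturation much further out
common = mean_saturation_n([0.9] * 10)
rare = mean_saturation_n([0.2] * 10)
```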
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", whose definitions partly differ between methods. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e., a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster; thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster-level and individual-level variances should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot, with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
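An internal pilot of this kind boils down to re-estimating the intracluster correlation (ICC) at the interim and re-running the sample size formula with it. A minimal sketch, assuming a balanced pilot, the usual one-way ANOVA moment estimator of the ICC, and the standard design-effect formula with a normal approximation (illustrative only, not the authors' exact procedure):

```python
from math import ceil
from statistics import NormalDist, mean

def anova_icc(clusters):
    """One-way ANOVA (moment) estimate of the intracluster correlation
    from balanced internal-pilot data (list of per-cluster outcome lists)."""
    k, m = len(clusters), len(clusters[0])
    grand = mean(v for c in clusters for v in c)
    msb = m * sum((mean(c) - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum((v - mean(c)) ** 2 for c in clusters for v in c) / (k * (m - 1))
    return max(0.0, (msb - msw) / (msb + (m - 1) * msw))

def clusters_per_arm(delta, sigma, icc, m, alpha=0.05, power=0.80):
    """Re-run the usual cluster-trial calculation with the interim ICC:
    individually randomised n, inflated by the design effect 1+(m-1)*icc."""
    z = NormalDist().inv_cdf
    n_ind = 2.0 * sigma ** 2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return ceil(n_ind * (1.0 + (m - 1) * icc) / m)

# hypothetical pilot: 3 clusters of 3 observations each
pilot = [[1.0, 1.2, 0.8], [2.0, 2.2, 1.8], [1.5, 1.7, 1.3]]
rho = anova_icc(pilot)
k_needed = clusters_per_arm(delta=1.0, sigma=1.0, icc=rho, m=30)
```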
A Visual Evaluation Study of Graph Sampling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Wong, Pak C.
2017-01-29
We evaluate a dozen prevailing graph-sampling techniques with the ultimate goal of better visualizing and understanding big and complex graphs that exhibit different properties and structures. The evaluation uses eight benchmark datasets with four different graph types collected from Stanford Network Analysis Platform and NetworkX to give a comprehensive comparison of various types of graphs. The study provides a practical guideline for visualizing big graphs of different sizes and structures. The paper discusses results and important observations from the study.
Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data
Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.
2015-01-01
SUMMARY An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
Hammerstrom, Kamille K; Ranasinghe, J Ananda; Weisberg, Stephen B; Oliver, John S; Fairey, W Russell; Slattery, Peter N; Oakden, James M
2012-10-01
Benthic macrofauna are used extensively for environmental assessment, but the area sampled and sieve sizes used to capture animals often differ among studies. Here, we sampled 80 sites using 3 different sampling areas (0.1, 0.05, 0.0071 m²) and sieved those sediments through each of 2 screen sizes (0.5, 1 mm) to evaluate their effect on number of individuals, number of species, dominance, nonmetric multidimensional scaling (MDS) ordination, and benthic community condition indices that are used to assess sediment quality in California. Sample area had little effect on abundance but substantially affected numbers of species, which are not easily scaled to a standard area. Sieve size had a substantial effect on both measures, with the 1-mm screen capturing only 74% of the species and 68% of the individuals collected in the 0.5-mm screen. These differences, though, had little effect on the ability to differentiate samples along gradients in ordination space. Benthic indices generally ranked sample condition in the same order regardless of gear, although the absolute scoring of condition was affected by gear type. The largest differences in condition assessment were observed for the 0.0071-m² gear. Benthic indices based on numbers of species were more affected than those based on relative abundance, primarily because we were unable to scale species number to a common area as we did for abundance. Copyright © 2010 SETAC.
ERIC Educational Resources Information Center
Neel, John H.; Stallings, William M.
An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
76 FR 31575 - United States Standards for Grades of Frozen Onions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-01
... processors of frozen onions in the United States. The petition provided information on style, sample size... change in the style designations for minced style, and a correction to the text. The members agreed with the proposed section concerning requirements for Styles, Type I, Whole onions and Type II, Pearl...
NASA Astrophysics Data System (ADS)
Tian, Shili; Pan, Yuepeng; Wang, Jian; Wang, Yuesi
2016-11-01
Current science and policy requirements have focused attention on the need to expand and improve particulate matter (PM) sampling methods. To explore how sampling filter type affects artifacts in PM composition measurements, size-resolved particulate SO42-, NO3- and NH4+ (SNA) were measured on quartz fiber filters (QFF), glass fiber filters (GFF) and cellulose membranes (CM) concurrently in an urban area of Beijing on both clean and hazy days. The results showed that SNA concentrations in most of the size fractions exhibited the following patterns on different filters: CM > QFF > GFF for NH4+; GFF > QFF > CM for SO42-; and GFF > CM > QFF for NO3-. The different patterns in coarse particles were mainly affected by filter acidity, and those in fine particles were mainly affected by the hygroscopicity of the filters (especially in the size fraction of 0.65-2.1 μm). Filter acidity and hygroscopicity also shifted the peaks of the annual mean size distributions of SNA on QFF from 0.43-0.65 μm on clean days to 0.65-1.1 μm on hazy days. However, this size shift was not as distinct for samples measured with CM and GFF. In addition, relative humidity (RH) and pollution levels are important factors that can enhance particulate size mode shifts of SNA on clean and hazy days. Consequently, the annual mean size distributions of SNA had maxima at 0.65-1.1 μm for QFF samples and 0.43-0.65 μm for GFF and CM samples. Compared with NH4+ and SO42-, NO3- is more sensitive to RH and pollution levels; accordingly, the annual mean size distribution of NO3- exhibited a peak at 0.65-1.1 μm for CM samples instead of 0.43-0.65 μm. These methodological uncertainties should be considered when quantifying the concentrations and size distributions of SNA under different RH and haze conditions.
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose
2004-01-01
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Haojie; Dhomkar, Siddharth; Roy, Bidisha
2014-10-28
For submonolayer quantum dot (QD) based photonic devices, size and density of QDs are critical parameters, the probing of which requires indirect methods. We report the determination of lateral size distribution of type-II ZnTe/ZnSe stacked submonolayer QDs, based on spectral analysis of the optical signature of Aharonov-Bohm (AB) excitons, complemented by photoluminescence studies, secondary-ion mass spectroscopy, and numerical calculations. Numerical calculations are employed to determine the AB transition magnetic field as a function of the type-II QD radius. The study of four samples grown with different tellurium fluxes shows that the lateral size of QDs increases by just 50%, even though tellurium concentration increases 25-fold. Detailed spectral analysis of the emission of the AB exciton shows that the QD radii take on only certain values due to vertical correlation and the stacked nature of the QDs.
Phase transformations in a Cu−Cr alloy induced by high pressure torsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korneva, Anna, E-mail: a.korniewa@imim.pl; Straumal, Boris; Institut für Nanotechnologie, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen
2016-04-15
Phase transformations induced by high pressure torsion (HPT) at room temperature were studied in two samples of the Cu-0.86 at.% Cr alloy, pre-annealed at 550 °C and 1000 °C in order to obtain two different initial states for the HPT procedure. Observation of the microstructure of the samples before HPT revealed that the sample annealed at 550 °C contained two types of Cr precipitates in the Cu matrix: large particles (size about 500 nm) and small ones (size about 70 nm). The sample annealed at 1000 °C showed only a small fraction of Cr precipitates (size about 2 μm). The subsequent HPT process resulted in the partial dissolution of Cr precipitates in the first sample, and dissolution of Cr precipitates with simultaneous decomposition of the supersaturated solid solution in the other. However, the resulting microstructure of the samples after HPT was very similar from the standpoint of grain size, phase composition, texture analysis and hardness measurements. - Highlights: • Cu−Cr alloy with two different initial states was deformed by HPT. • Phase transformations in the deformed materials were studied. • SEM, TEM and X-ray diffraction techniques were used for microstructure analysis. • HPT leads to formation of the same microstructure independent of the initial state.
NASA Astrophysics Data System (ADS)
Masalaite, Agne; Garbaras, Andrius; Garbariene, Inga; Ceburnis, Darius; Martuzevicius, Dainius; Puida, Egidijus; Kvietkus, Kestutis; Remeikis, Vidmantas
2014-05-01
Biomass burning is the largest source of primary fine-fraction carbonaceous particles and the second largest source of trace gases in the global atmosphere, with a strong effect not only on the regional scale but also in areas distant from the source. Many studies have assumed that no significant carbon isotope fractionation occurs between black carbon and the original vegetation during combustion. However, other studies suggested that stable carbon isotope ratios of char or BC may not reliably reflect carbon isotopic signatures of the source vegetation. Overall, the apparently conflicting results throughout the literature regarding the observed fractionation suggest that combustion conditions may be responsible for the observed effects. The purpose of the present study was to gather more quantitative information on carbonaceous aerosols produced in controlled biomass burning, thereby having a potential impact on interpreting ambient atmospheric observations. Seven different biomass fuel types were burned under controlled conditions to determine the effect of the biomass type on the emitted particulate matter mass and stable carbon isotope composition of bulk and size-segregated particles. Size-segregated aerosol particles were collected using the total suspended particle (TSP) sampler and a micro-orifice uniform deposit impactor (MOUDI). The results demonstrated that particle emissions were dominated by the submicron particles in all biomass types. However, significant differences in emissions of submicron particles and their dominant sizes were found between different biomass fuels. The largest negative fractionation was obtained for the wood pellet fuel type, while the largest positive isotopic fractionation was observed during the buckwheat shells combustion. The carbon isotope composition of MOUDI samples compared very well with the isotope composition of TSP samples, indicating consistency of the results.
The measurements of the stable carbon isotope ratio in size segregated aerosol particles suggested that combustion processes could strongly affect isotopic fractionation in aerosol particles of different sizes thereby potentially affecting an interpretation of ambient atmospheric observations.
Siers, Shane R.; Savidge, Julie A.; Reed, Robert
2017-01-01
Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam’s geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes—particularly males—in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. 
Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling replicates masked variability among sites. A full understanding of population characteristics should include an assessment of variability both at the site and habitat level.
Cetinić, Ivona; Poulton, Nicole; Slade, Wayne H
2016-09-05
Many optical and biogeochemical data sets, crucial for algorithm development and satellite data validation, are collected using underway seawater systems over the course of research cruises. Phytoplankton and particle size distribution (PSD) in the ocean is a key measurement, required in oceanographic research and ocean optics. Using a data set collected in the North Atlantic, spanning different oceanic water types, we outline the differences observed in concurrent samples collected from two different flow-through systems: a permanently plumbed science seawater supply with an impeller pump, and an independent system with shorter, clean tubing runs and a diaphragm pump. We observed an average 40% decrease in phytoplankton counts, and significant changes to the PSD in the 10-45 µm range, when comparing the impeller and diaphragm pump systems. The change in PSD appears to depend more on phytoplankton type than on size, with photosynthetic ciliates displaying the largest decreases in cell counts (78%). Comparison of chlorophyll concentrations across the two systems demonstrated lower sensitivity to sampling system type. The observed changes in several measured biogeochemical parameters (associated with phytoplankton size distribution) between the two sampling systems should be used as a guide toward building best practices for deploying flow-through systems in the field for examining optics and biogeochemistry. Using optical models, we evaluated the potential impact of the observed change in measured phytoplankton size spectra on scattering measurements, finding significant differences between modeled optical properties across systems (~40%). Researchers should be aware of the methods used with previously collected data sets, and should take into consideration the potentially significant and highly variable ecosystem-dependent biases when designing field studies in the future.
Bed-sediment grain-size and morphologic data from Suisun, Grizzly, and Honker Bays, CA, 1998-2002
Hampton, Margaret A.; Snyder, Noah P.; Chin, John L.; Allison, Dan W.; Rubin, David M.
2003-01-01
The USGS Place Based Studies Program for San Francisco Bay investigates this sensitive estuarine system to aid in resource management. As part of the inter-disciplinary research program, the USGS collected side-scan sonar data and bed-sediment samples from north San Francisco Bay to characterize bed-sediment texture and investigate temporal trends in sedimentation. The study area is located in central California and consists of Suisun Bay, and Grizzly and Honker Bays, sub-embayments of Suisun Bay. During the study (1998-2002), the USGS collected three side-scan sonar data sets and approximately 300 sediment samples. The side-scan data revealed predominantly fine-grained material on the bayfloor. We also mapped five different bottom types from the data set, categorized as featureless, furrows, sand waves, machine-made, and miscellaneous. We performed detailed grain-size and statistical analyses on the sediment samples. Overall, we found that grain size ranged from clay to fine sand, with the coarsest material in the channels and finer material located in the shallow bays. Grain-size analyses revealed high spatial variability in size distributions in the channel areas. In contrast, the shallow regions exhibited low spatial variability and consistent sediment size over time.
Partitioning heritability by functional annotation using genome-wide association summary statistics.
Finucane, Hilary K; Bulik-Sullivan, Brendan; Gusev, Alexander; Trynka, Gosia; Reshef, Yakir; Loh, Po-Ru; Anttila, Verneri; Xu, Han; Zang, Chongzhi; Farh, Kyle; Ripke, Stephan; Day, Felix R; Purcell, Shaun; Stahl, Eli; Lindstrom, Sara; Perry, John R B; Okada, Yukinori; Raychaudhuri, Soumya; Daly, Mark J; Patterson, Nick; Neale, Benjamin M; Price, Alkes L
2015-11-01
Recent work has demonstrated that some functional categories of the genome contribute disproportionately to the heritability of complex diseases. Here we analyze a broad set of functional elements, including cell type-specific elements, to estimate their polygenic contributions to heritability in genome-wide association studies (GWAS) of 17 complex diseases and traits with an average sample size of 73,599. To enable this analysis, we introduce a new method, stratified LD score regression, for partitioning heritability from GWAS summary statistics while accounting for linked markers. This new method is computationally tractable at very large sample sizes and leverages genome-wide information. Our findings include a large enrichment of heritability in conserved regions across many traits, a very large immunological disease-specific enrichment of heritability in FANTOM5 enhancers and many cell type-specific enrichments, including significant enrichment of central nervous system cell types in the heritability of body mass index, age at menarche, educational attainment and smoking behavior.
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To inherit better properties from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design's properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined from the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
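The operating characteristics named in this abstract (probability of early trial termination, expected sample size) can be sketched for a generic two-stage single-arm binomial design. The stage sizes and boundaries below are hypothetical illustrations, not the paper's Bayesian-frequentist boundaries:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def two_stage_properties(n1, n2, r1, s1, p):
    """Probability of early termination (PET) and expected sample size
    for a generic two-stage design: after n1 patients, stop for
    futility if responses <= r1 or for efficacy if responses >= s1;
    otherwise enrol n2 more.  Boundaries r1, s1 are hypothetical."""
    pet = sum(binom_pmf(k, n1, p) for k in range(0, r1 + 1)) + \
          sum(binom_pmf(k, n1, p) for k in range(s1, n1 + 1))
    expected_n = n1 + (1 - pet) * n2
    return pet, expected_n
```

For example, with a true response rate well below the futility boundary, most trials stop at stage I and the expected sample size stays close to n1.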
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance, by contrast, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
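The convex-versus-concave contrast can be made concrete with the power family of spending functions, alpha * t**rho, a common illustrative family (an assumption here; the paper's specific shapes may differ). rho > 1 gives a convex shape that spends little Type I error early; rho < 1 gives a concave shape that spends more early:

```python
def alpha_spent(t, alpha=0.05, rho=2.0):
    """Cumulative Type I error spent by information fraction t in (0, 1]
    under the power family alpha * t**rho.  rho > 1: convex (little
    error spent early, conventional in clinical trials); rho < 1:
    concave (more error spent early, as argued for safety surveillance)."""
    return alpha * t ** rho

def increments(looks, alpha=0.05, rho=2.0):
    """Per-look alpha increments for a schedule of information fractions."""
    spent = [alpha_spent(t, alpha, rho) for t in looks]
    return [b - a for a, b in zip([0.0] + spent[:-1], spent)]
```

At the first of four equally spaced looks, the concave shape (rho = 0.5) allocates far more of the 0.05 budget than the convex shape (rho = 2), which is exactly what favours early signal detection.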
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analysis of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation in cluster sizes. However, in cases with large variation in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed when using the t-test with the KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended for CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
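To see how cluster size and intracluster correlation drive the required number of clusters, here is the textbook design-effect approximation for a CRT with binary outcomes. This is a rough normal-approximation sketch, not the paper's KC-corrected t-test formula, and all inputs are illustrative:

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Approximate clusters per arm for a cluster-randomized trial
    comparing proportions p1 vs p2 with m subjects per cluster,
    inflating the individually randomized sample size by the design
    effect 1 + (m - 1) * icc (textbook approximation only)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc
    return ceil(n_ind * deff / m)
```

Even a small intracluster correlation inflates the cluster count substantially when clusters are large, which is why few-cluster CRTs are so sensitive to the variance estimator used.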
Tanenbaum, Sandra J
2011-01-01
This research compares two types of consumer organizations in one state in order to explore the significance of organizational independence for internal structure/operations and external relationships. The first type, consumer-operated service organizations (COSOs), are independent and fully self-governing; the second, peer-support service organizations (PSSOs), are part of larger non-consumer entities. Mail surveys were completed by COSO and PSSO directors of a geographically representative sample of organizations; telephone interviews were conducted with a sub-sample. Owing to the small sample size, matched COSO-PSSO pairs were analyzed using non-parametric statistics. COSOs and PSSOs are similar in some ways, e.g., the types of services provided, but significantly different on internal variables, such as budget size, and external variables, such as the number of relationships with community groups. Organizational independence appears to be a significant characteristic for consumer service organizations and should be encouraged by funders and among participants. Funders might establish administrative and/or programmatic measures to support consumer organizations that are independent or moving toward independence; their participants would also benefit from the provision, by authorities or advocates, of materials to guide organizations toward, for example, 501(c)3 status.
NASA Astrophysics Data System (ADS)
Ohsuka, Shinji; Ohba, Akira; Onoda, Shinobu; Nakamoto, Katsuhiro; Nakano, Tomoyasu; Miyoshi, Motosuke; Soda, Keita; Hamakubo, Takao
2014-09-01
We constructed a laboratory-size three-dimensional water window x-ray microscope that combines wide-field transmission x-ray microscopy with tomographic reconstruction techniques, and observed bio-medical samples to evaluate its applicability to life science research fields. It consists of a condenser and an objective grazing incidence Wolter type I mirror, an electron-impact type oxygen Kα x-ray source, and a back-illuminated CCD for x-ray imaging. A spatial resolution limit of around 1.0 line pairs per micrometer was obtained for two-dimensional transmission images, and 1-μm scale three-dimensional fine structures were resolved.
A size-dependent constitutive model of bulk metallic glasses in the supercooled liquid region
Yao, Di; Deng, Lei; Zhang, Mao; Wang, Xinyun; Tang, Na; Li, Jianjun
2015-01-01
Size effect is of great importance in micro forming processes. In this paper, micro cylinder compression was conducted to investigate the deformation behavior of bulk metallic glasses (BMGs) in supercooled liquid region with different deformation variables including sample size, temperature and strain rate. It was found that the elastic and plastic behaviors of BMGs have a strong dependence on the sample size. The free volume and defect concentration were introduced to explain the size effect. In order to demonstrate the influence of deformation variables on steady stress, elastic modulus and overshoot phenomenon, four size-dependent factors were proposed to construct a size-dependent constitutive model based on the Maxwell-pulse type model previously presented by the authors according to viscosity theory and free volume model. The proposed constitutive model was then adopted in finite element method simulations, and validated by comparing the micro cylinder compression and micro double cup extrusion experimental data with the numerical results. Furthermore, the model provides a new approach to understanding the size-dependent plastic deformation behavior of BMGs. PMID:25626690
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve (AUC) of C-peptide following a 2-h mixed-meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies yields a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
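The sample-size reduction reported above comes from covariate adjustment shrinking the residual variance. The mechanism can be sketched generically: if adjustment explains a fraction r2 of the outcome variance, the required n scales by (1 - r2). The effect sizes below are illustrative placeholders, not the TrialNet C-peptide estimates:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80, r2=0.0):
    """Two-arm sample size for detecting a mean difference delta with
    outcome standard deviation sd; ANCOVA adjustment for covariates
    explaining a fraction r2 of the variance shrinks the residual
    variance to sd**2 * (1 - r2).  Normal-approximation sketch."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = sd ** 2 * (1 - r2)
    return ceil(2 * var * z ** 2 / delta ** 2)
```

With r2 = 0.5 the target sample size roughly halves, mirroring the "nearly 50% reduction" the abstract attributes to adjusting for age at diagnosis and baseline C-peptide.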
A field instrument for quantitative determination of beryllium by activation analysis
Vaughn, William W.; Wilson, E.E.; Ohm, J.M.
1960-01-01
A low-cost instrument has been developed for quantitative determination of beryllium in the field by activation analysis. The instrument makes use of the gamma-neutron reaction between gammas emitted by an artificially radioactive source (Sb-124) and beryllium as it occurs in nature. The instrument and power source are mounted in a panel-type vehicle. Samples are prepared by hand-crushing the rock to approximately ?-inch mesh size and smaller. Sample volumes are kept constant by means of a standard measuring cup. Instrument calibration, made using standards of known BeO content, indicates the analyses are reproducible and accurate to within ± 0.25 percent BeO in the range from 1 to 20 percent BeO with a sample counting time of 5 minutes. Sensitivity of the instrument may be increased somewhat by increasing the source size or the sample size, or by enlarging the cross-sectional area of the neutron-sensitive phosphor normal to the neutron flux.
Cytotoxicity and cellular uptake of different sized gold nanoparticles in ovarian cancer cells
NASA Astrophysics Data System (ADS)
Kumar, Dhiraj; Mutreja, Isha; Chitcholtan, Kenny; Sykes, Peter
2017-11-01
Nanomedicine has advanced the biomedical field with the availability of multifunctional nanoparticle (NP) systems that can target a disease site, enabling drug delivery and helping to monitor the disease. In this paper, we synthesised gold nanoparticles (AuNPs) with average sizes of 18, 40, 60 and 80 nm, and studied the effect of nanoparticle size, concentration and incubation time on ovarian cancer cells, namely OVCAR5, OVCAR8 and SKOV3. The size measured from transmission electron microscopy images was slightly smaller than the hydrodynamic diameter; sizes measured with ImageJ were 14.55, 38.13, 56.88 and 78.56 nm. Cellular uptake was significantly controlled by AuNP size, concentration, and cell type. Nanoparticle uptake increased with increasing concentration, and the 18 and 80 nm AuNPs showed higher uptake, ranging from 1.3 to 5.4 μg depending upon the concentration and cell type. The AuNPs were associated with a temporary reduction in metabolic activity, but metabolic activity remained above 60% for all sample types; the NPs significantly affected cell proliferation activity in the first 12 h. Increasing nanoparticle size and concentration induced the production of reactive oxygen species within 24 h.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
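The sample-size dependence on the number of pre- and post-randomization measures described above can be sketched with a Frison and Pocock-style variance factor under a compound-symmetry correlation (an illustrative normal-approximation formula on our part, not the paper's GLMM-based calculation):

```python
from math import ceil
from statistics import NormalDist

def variance_factor(rho, s, t):
    """Variance factor for comparing the mean of t post-randomization
    measures, with ANCOVA adjustment for the mean of s baseline
    measures, assuming compound-symmetry correlation rho."""
    f = (1 + (t - 1) * rho) / t
    if s > 0:
        f -= s * rho ** 2 / (1 + (s - 1) * rho)
    return f

def n_per_group(delta, sd, rho, s, t, alpha=0.05, power=0.80):
    """Subjects per group for a mean difference delta; more repeated
    measures (larger s, t) shrink the variance factor and hence n."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * sd ** 2 * variance_factor(rho, s, t) * z ** 2 / delta ** 2)
```

Relative to the simple pre-post design (s = t = 1), adding post-randomization measures lowers the variance factor and the required sample size, which is the reduction the abstract highlights.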
Mn Impurity in Bulk GaAs Crystals
NASA Astrophysics Data System (ADS)
Pawłowski, M.; Piersa, M.; Wołoś, A.; Palczewska, M.; Strzelecka, G.; Hruban, A.; Gosk, J.; Kamińska, M.; Twardowski, A.
2006-11-01
Magnetic and electron transport properties of GaAs:Mn crystals grown by the Czochralski method were studied. Electron spin resonance showed the presence of the Mn acceptor A in two charge states: singly ionized A- in the form of Mn2+(d5), and neutral A0 in the form of Mn2+(d5) plus a bound hole (h). The relative concentration of both types of centers could be determined from the intensity of the corresponding electron spin resonance lines. Magnetization measured as a function of magnetic field (up to 6 T) in the temperature range of 2-300 K revealed overall paramagnetic behavior of the samples. The effective spin was found to be about 1.5, consistent with the presence of the two Mn configurations. In most of the studied samples the dominance of the Mn2+(d5)+h configuration was established, and it increased after annealing of native donors. The total Mn content was obtained by fitting the magnetization curves using parameters obtained from electron spin resonance. In electron transport, two mechanisms of conductivity were observed: valence-band transport dominated above 70 K, and hopping conductivity within the Mn impurity band at lower temperatures. From the analysis of the hopping conductivity, using the obtained values of the total Mn content, the effective radius of the Mn acceptor in GaAs was estimated as a = 11 ± 3 Å.
Small mammal habitat associations in poletimber and sawtimber stands of four forest cover types
Richard M. DeGraaf; Dana P. Snyder; Barbara J. Hill
1991-01-01
Small mammal distribution was examined in poletimber and sawtimber stands of four forest cover types in northern New England: northern hardwoods, red maple, balsam fir, and red spruce-balsam fir. During 1980 and 1981, eight stands on the White Mountain National Forest, NH, were sampled with four trap types (three sizes of snap traps and one pit-fall) for 16 000 trap-...
NASA Astrophysics Data System (ADS)
Kwong, Lian E.; Pakhomov, Evgeny A.; Suntsov, Andrey V.; Seki, Michael P.; Brodeur, Richard D.; Pakhomova, Larisa G.; Domokos, Réka
2018-05-01
A micronekton intercalibration experiment was conducted off the southwest coast of Oahu Island, Hawaii in October 2004. Day and night samples were collected in the epipelagic and mesopelagic zones using three micronekton sampling gears: the Cobb Trawl, the Isaacs-Kidd Midwater Trawl (IKMT), and the Hokkaido University Frame Trawl (HUFT). Taxonomic composition and contribution by main size groups to total catch varied among gear types. However, the three gears exhibited similar taxonomic composition for macrozooplankton and micronekton ranging from 20 to 100 mm length (MM20-100). The HUFT and IKMT captured more mesozooplankton and small MM20-100, while the Cobb trawl selected towards larger MM20-100 and nekton. Taxonomic composition was described and inter-compared among gears. The relative efficacy of the three gears was assessed, and size dependent intercalibration coefficients were developed for MM20-100.
Effect of bait and gear type on channel catfish catch and turtle bycatch in a reservoir
Cartabiano, Evan C.; Stewart, David R.; Long, James M.
2014-01-01
Hoop nets have become the preferred gear choice to sample channel catfish Ictalurus punctatus but the degree of bycatch can be high, especially due to the incidental capture of aquatic turtles. While exclusion and escapement devices have been developed and evaluated, few have examined bait choice as a method to reduce turtle bycatch. The use of Zote™ soap has shown considerable promise to reduce bycatch of aquatic turtles when used with trotlines but its effectiveness in hoop nets has not been evaluated. We sought to determine the effectiveness of hoop nets baited with cheese bait or Zote™ soap and trotlines baited with shad or Zote™ soap as a way to sample channel catfish and prevent capture of aquatic turtles. We used a repeated-measures experimental design and treatment combinations were randomly assigned using a Latin-square arrangement. Eight sampling locations were systematically selected and then sampled with either hoop nets or trotlines using Zote™ soap (both gears), waste cheese (hoop nets), or cut shad (trotlines). Catch rates did not statistically differ among the gear–bait-type combinations. Size bias was evident with trotlines consistently capturing larger sized channel catfish compared to hoop nets. Results from a Monte Carlo bootstrapping procedure estimated the number of samples needed to reach predetermined levels of sampling precision to be lowest for trotlines baited with soap. Moreover, trotlines baited with soap caught no aquatic turtles, while hoop nets captured many turtles and had high mortality rates. We suggest that Zote™ soap used in combination with multiple hook sizes on trotlines may be a viable alternative to sample channel catfish and reduce bycatch of aquatic turtles.
Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations
A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield
2010-01-01
Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max–Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
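Type-I error rates of the kind compared in this abstract are typically estimated by simulation under the null hypothesis: generate both groups from the same distribution, run the test many times, and count rejections. A minimal Monte-Carlo sketch, substituting a two-sample z-test for the regression models (fitting NB or Firth's logistic regression would require a statistics package):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, variance

def z_test_p(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def type1_error_rate(n=30, sims=2000, alpha=0.05, seed=1):
    """Estimate the empirical Type I error rate: both groups are drawn
    from the same N(0, 1) distribution, so every rejection is a false
    positive; a well-calibrated test rejects about alpha of the time."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if z_test_p(a, b) < alpha:
            rejections += 1
    return rejections / sims
```

An estimate well above the nominal alpha would indicate the inflation the abstract describes for NB regression under small samples and high dispersion; an estimate well below it would indicate a conservative test.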
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Renzhong; Department of Technology and Physics, Zhengzhou University of Light Industry, Zhengzhou 450002; Zhao, Gaoyang, E-mail: zhaogy@xaut.edu.cn
Graphical abstract: The dielectric constant decreases with Ta doping, increases with Y doping, and remains almost constant with Zr doping compared with that of pure CCTO. - Highlights: • Y and Ta doping cause different defect types and concentrations. • Defects influence grain boundary mobility and result in different grain sizes. • Y doping increases the dielectric constant and decreases the nonlinear property. • Ta doping decreases the dielectric constant and enhances the nonlinear property. • The Zr-doped sample has nearly the same defect type and dielectric properties as CaCu3Ti4O12. - Abstract: The microstructure, dielectric and electrical properties of CaCu3Ti4-xRxO12 (R = Y, Zr, Ta; x = 0 and 0.005) ceramics were investigated by XRD, Raman spectra, SEM and dielectric spectrum measurements. Positron annihilation measurements were performed to investigate the influence of doping on the defects. The results show that all samples form a single crystalline phase. Y and Ta doping cause different defect types and increase the defect size and concentration, which influence the mobility of the grain boundary and result in different grain sizes. Y doping increases the dielectric constant and decreases the nonlinear property, while Ta doping leads to the inverse result. The Zr-doped sample has nearly the same defect type, grain morphology and dielectric properties as pure CaCu3Ti4O12. The effects of microstructure, including grain morphology and vacancy defects, on the mechanism of the dielectric and electric properties by doping are discussed.
Overweight in Adolescents: Differences per Type of Education. Does One Size Fit All?
ERIC Educational Resources Information Center
Vissers, Dirk; Devoogdt, Nele; Gebruers, Nick; Mertens, Ilse; Truijen, Steven; Van Gaal, Luc
2008-01-01
Objective: To assess the lifestyle and prevalence of overweight among 16- to 18-year-old adolescents attending 4 different types of secondary education (SE). Design: Cross-sectional school-based survey. Participants: A community sample of 994 adolescents (body mass index [BMI]: 15-43 kg/m²). Variables Measured: Overweight and obesity…
Optimal design of a plot cluster for monitoring
Charles T. Scott
1993-01-01
Traveling costs incurred during extensive forest surveys make cluster sampling cost-effective. Clusters are specified by the type of plots, plot size, number of plots, and the distance between plots within the cluster. A method to determine the optimal cluster design when different plot types are used for different forest resource attributes is described. The method...
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of the error of the first kind (Type I probability), the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
Obesity and vehicle type as risk factors for injury caused by motor vehicle collision.
Donnelly, John P; Griffin, Russell Lee; Sathiakumar, Nalini; McGwin, Gerald
2014-04-01
This study sought to describe variations in the risk of motor vehicle collision (MVC) injury and death by occupant body mass index (BMI) class and vehicle type. We hypothesized that the relationship between BMI and the risk of MVC injury or mortality would be modified by vehicle type. This is a retrospective cohort study of occupants involved in MVCs using data from the Crash Injury Research and Engineering Network and the National Automotive Sampling System Crashworthiness Data System. Occupants were grouped based on vehicle body style (passenger car, sport utility vehicle, or light truck) and vehicle size (compact or normal, corresponding to below- or above-average curb weight). The relationship between occupant BMI class (underweight, normal weight, overweight, or obese) and risk of injury or mortality was examined for each vehicle type. Odds ratios (ORs) adjusted for various occupant and collision characteristics were estimated. Of an estimated 44 million occupants of MVCs sampled from 2000 to 2009, 37.1% sustained an injury. We limited our analysis to injuries achieving an Abbreviated Injury Scale (AIS) score of 2 or more severe, totaling 17 million injuries. Occupants differed substantially in terms of demographic and collision characteristics. After adjustment for confounding factors, we found that obesity was a risk factor for mortality caused by MVC (OR, 1.6; 95% confidence interval [CI], 1.2-2.0). When stratified by vehicle type, we found that obesity was a risk factor for mortality in larger vehicles, including any-sized light trucks (OR, 2.1; 95% CI, 1.3-3.5), normal-sized passenger cars (OR, 1.6; 95% CI, 1.1-2.3), and normal-sized sports utility vehicles or vans (OR, 2.0; 95% CI, 1.0-3.8). Being overweight was a risk factor in any-sized light trucks (OR, 1.5; 95% CI, 1.1-2.1). We identified a significant interaction between occupant BMI class and vehicle type in terms of MVC-related mortality risk. 
Both factors should be taken into account when considering occupant safety, and additional study is needed to determine underlying causes of the observed relationships. Epidemiologic study, level III.
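The adjusted odds ratios reported above come from regression models, but the basic calculation behind any unadjusted OR and its 95% CI is a 2x2 table. A minimal stdlib-only sketch of the standard Woolf (logit) method, using made-up counts rather than this study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio from a 2x2 table with a Woolf (logit) 95% CI.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, purely illustrative:
or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A CI whose lower bound exceeds 1.0, as in several of the study's stratified estimates, is what marks the association as statistically significant at the 5% level.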
Xu, Huacheng; Guo, Laodong
2017-06-15
Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to the chemical composition and size distribution. To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size fractionated through a series of micro- and ultra-filtrations with different membranes having different pore-sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. Abundance of dissolved organic carbon, total carbohydrates, and chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, but with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size-dependent DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices, also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacturer-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both sample types and membranes used. Our results here provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tadayyon, Ghazal; Mazinani, Mohammad; Guo, Yina
Martensitic evolution in the Ti-rich NiTi alloy Ti50.5Ni49.5 has been investigated as a function of annealing, solution treatment and a combination thereof, and a detailed electron microscopic investigation carried out. Self-accommodated martensite plates resulted in all heat-treated samples. Martensitic <011> type II twins, which are common in NiTi shape memory alloys, were found in both as-received and heat-treated samples. Solution-treated samples additionally showed {11-1} type I twinning, which was also found in samples annealed after solution treatment. Another common feature of the microstructure in both as-received and heat-treated samples is the formation of Ti2Ni precipitates. The size, number and dispersion of these precipitates can be controlled by resorting to a suitable heat treatment, e.g. solution treatment.
Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.
Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol
2013-02-01
A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small-mass (≈ 0.7 g) and 15 large-mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma-ray spectrometry. As expected, results for isotopic uranium (i.e., 235U and 238U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different from each other and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility.
Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.
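The finding that small-mass aliquots show a larger coefficient of variation than large-mass aliquots follows directly from averaging over a heterogeneous medium: a 70 g aliquot averages many more independent "pockets" of contamination than a 0.7 g one. A toy simulation of this effect (the lognormal per-gram activity model is our assumption, not the paper's data):

```python
import random
import statistics

random.seed(1)

def aliquot_concentration(grams):
    # Model heterogeneous contamination as independent lognormal per-gram
    # activities; an aliquot's measured concentration is their mean.
    return statistics.fmean(random.lognormvariate(0.0, 1.0) for _ in range(grams))

def cov(values):
    """Coefficient of variation: standard deviation relative to the mean."""
    return statistics.stdev(values) / statistics.fmean(values)

small = [aliquot_concentration(1) for _ in range(30)]    # thirty ~0.7 g aliquots
large = [aliquot_concentration(100) for _ in range(15)]  # fifteen ~70 g aliquots

print(f"COV small-mass: {cov(small):.2f}, COV large-mass: {cov(large):.2f}")
```

The large-mass COV comes out roughly an order of magnitude smaller, mirroring the paper's conclusion that larger aliquots are more representative of heterogeneously distributed contamination.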
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size. The specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased. The minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness.
(3) samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
Laboratory theory and methods for sediment analysis
Guy, Harold P.
1969-01-01
The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help ensure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.
C-Sphere Strength-Size Scaling in a Bearing-Grade Silicon Nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wereszczak, Andrew A; Jadaan, Osama M.; Kirkland, Timothy Philip
2008-01-01
A C-sphere specimen geometry was used to determine the failure strength distributions of a commercially available bearing-grade silicon nitride (Si3N4) having ball diameters of 12.7 and 25.4 mm. Strengths for both diameters were determined using the combination of failure load, C-sphere geometry, and finite element analysis, and fitted using two-parameter Weibull distributions. Effective areas of both diameters were estimated as a function of Weibull modulus and used to explore whether the strength distributions predictably strength-scaled between each size. They did not. That statistical observation suggested that the same flaw type did not limit the strength of both ball diameters, indicating a lack of material homogeneity between the two sizes. Optical fractography confirmed that. It showed there were two distinct strength-limiting flaw types in both ball diameters, that one flaw type was always associated with lower strength specimens, and that a significantly higher fraction of the 25.4-mm-diameter C-sphere specimens failed from it. Predictable strength-size-scaling would therefore not result because these flaw types were not homogeneously distributed and sampled in both C-sphere geometries.
Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea
2011-01-01
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify the size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate the sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
Zhang, Song; Cao, Jing; Ahn, Chul
2017-02-20
We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
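The "standard frequentist calculations" that the authors defend here are the usual normal-approximation formula for comparing two means: n per group = 2((z_{1-α/2} + z_{1-β})σ/δ)², where δ is the minimum clinically important difference and σ the common standard deviation. A stdlib-only sketch (the function name and defaults are ours, for illustration):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (effect size delta, common SD sigma)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Detecting a half-SD difference with 80% power at two-sided alpha 0.05:
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

Any claimed 70% reduction relative to this benchmark, as offered by "magnitude-based inference", would have to come from relaxing the error-rate guarantees the formula encodes, which is the review's central objection.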
Photographic techniques for characterizing streambed particle sizes
Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.
2003-01-01
We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel—cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.
Apollo 15 coarse fines (4-10 mm): Sample classification, description and inventory
NASA Technical Reports Server (NTRS)
Powell, B. N.
1972-01-01
A particle-by-particle binocular microscopic examination of all of the Apollo 15 4-10 mm fines samples is reported. These particles are classified according to their macroscopic lithologic features in order to provide a basis for sample allocations and future study. The relatively large size of these particles renders them too valuable to permit treatment along with the other bulk fines, yet they are too small (and numerous) to practically receive the full individual descriptive treatment given the larger rock samples. This examination, classification and description of subgroups represents a compromise treatment. In most cases and for many types of investigation the individual particles should be large enough to permit the application of more than one type of analysis.
Almutairy, Meznah; Torng, Eric
2018-01-01
Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
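The two schemes compared above are easy to state concretely: fixed sampling keeps every s-th k-mer start position, while minimizer sampling keeps, in each window of w consecutive k-mers, the lexicographically smallest one. A minimal sketch on a toy sequence (not the paper's index implementation):

```python
def fixed_sampling(seq, k, s):
    """Index the k-mer starting at every s-th position."""
    return {(i, seq[i:i+k]) for i in range(0, len(seq) - k + 1, s)}

def minimizer_sampling(seq, k, w):
    """Index the lexicographically smallest k-mer in each window of w k-mers."""
    kmers = [seq[i:i+k] for i in range(len(seq) - k + 1)]
    chosen = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda x: (window[x], x))  # leftmost tie-break
        chosen.add((start + j, window[j]))
    return chosen

seq = "ACGTACGTTGCAACGTTGCA"
fixed = fixed_sampling(seq, k=4, s=4)
minim = minimizer_sampling(seq, k=4, w=4)
print(len(fixed), len(minim))
```

Fixed sampling's index size is deterministic (one entry per s positions), whereas minimizer density depends on the sequence because adjacent windows often share the same minimizer; the paper's point is that minimizers let query k-mers be sampled too, at the cost of many more shared k-mer occurrences to process.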
Study of structural and magnetic properties of melt spun Nd2Fe13.6Zr0.4B ingot and ribbon
NASA Astrophysics Data System (ADS)
Amin, Muhammad; Siddiqi, Saadat A.; Ashfaq, Ahmad; Saleem, Murtaza; Ramay, Shahid M.; Mahmood, Asif; Al-Zaghayer, Yousef S.
2015-12-01
Nd2Fe13.6Zr0.4B hard magnetic material was prepared using the arc-melting technique on a water-cooled copper hearth kept under an argon gas atmosphere. The prepared samples, Nd2Fe13.6Zr0.4B ingot and ribbon, were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM) for crystal structure determination and morphological studies, respectively. The magnetic properties of the samples were explored using a vibrating sample magnetometer (VSM). The lattice constants slightly increased due to the difference between the ionic radii of Fe and Zr. The bulk density decreased due to the smaller molar weight and lower density of Zr compared with Fe. The ingot sample shows an almost single crystalline phase with larger crystallite sizes, whereas the ribbon sample shows a mixture of amorphous and crystalline phases with smaller crystallite sizes. The crystallinity of the material was strongly affected by the thermal treatments. Magnetic measurements show noticeable variation in magnetic behavior with the change in crystallite size. The ingot-type sample shows soft magnetic behavior, while the ribbon shows hard magnetic behavior.
Ca, Nguyen Xuan; Lien, V T K; Nghia, N X; Chi, T T K; Phan, The-Long
2015-11-06
We used wet chemical methods to synthesize core-shell nanocrystalline samples CdS(d)/ZnSe_N, where d = 3-6 nm is the size of the CdS cores and N = 1-5 is the number of ZnSe monolayers grown on the cores. By annealing typical CdS(d)/ZnSe_N samples (with d = 3 and 6 nm and N = 2) at 300 °C for various times t_an = 10-600 min, we created an intermediate layer composed of Zn1-xCdxSe and Cd1-xZnxS alloys with various thicknesses. The formation of core-shell structures and intermediate layers was monitored by Raman scattering and UV-vis absorption spectrometry. Careful photoluminescence studies revealed that the as-prepared CdS(d)/ZnSe_N samples with d = 5 nm and N = 2-4, and the annealed samples CdS(3 nm)/ZnSe_2 with t_an ≤ 60 min and CdS(6 nm)/ZnSe_2 with t_an ≤ 180 min, show the emission characteristics of type-II systems. Meanwhile, the other samples show the emission characteristics of type-I systems. These results prove that the partial separation of photoexcited carriers between the core and shell depends strongly on the engineered core-shell nanostructure, that is, the sizes of the core, shell, and intermediate layers. With their tunable luminescence properties, CdS-ZnSe-based core-shell materials are considered promising candidates for multiple-exciton generation and single-photon sources.
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. 
Also, the performance of SVR depends on both the kernel function type and the loss function used.
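Vapnik's ε-insensitive loss, contrasted with squared loss in the abstract above, can be illustrated in a few lines. This is a generic sketch of the loss function itself, not the authors' code; the residuals and tube width ε = 0.5 are invented for illustration.

```python
# Epsilon-insensitive loss: zero penalty inside the tube, linear outside it.

def eps_insensitive(residual, eps=0.5):
    """Penalty is zero for |residual| <= eps, and |residual| - eps beyond it."""
    return max(0.0, abs(residual) - eps)

def squared(residual):
    return residual ** 2

residuals = [-1.2, -0.3, 0.0, 0.4, 2.0]
tube_loss = [eps_insensitive(r) for r in residuals]
sq_loss = [squared(r) for r in residuals]
# Small residuals inside the tube contribute nothing, so the fit is not
# dragged around by noise, one intuition for SVR's small-sample robustness.
print(tube_loss)  # → [0.7, 0.0, 0.0, 0.0, 1.5]
```

Squared loss, by contrast, penalizes every residual, which ties the ERM fit more tightly to the particular training sample.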
A comparison of bacterial and fungal biomass in several cultivated soils.
Kaczmarek, W
1984-01-01
Bacterial and fungal biomass was estimated in incubated samples of three cultivated soils, and the influence of glucose, ammonium nitrate, and cattle slurry on its formation was studied. The microbial biomass was determined in stained microscopic preparations of soil suspensions. Bacterial biomass in the control samples was 0.17 to 0.66 mg dry wt per 1 g dry soil and, independently of the applied supplements, was on average twice as large in muck soils as in sand. Fungal biomass in the control soils ranged from 0.013 to 0.161 mg dry wt per 1 g dry soil, with no relationship found between its size and the soil type. As a result, the ratio of fungal to bacterial biomass depended on the soil type: in sand the fungal biomass corresponded to 1/3 of the bacterial biomass, and in muck soils only to 1/7.
Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin
2016-05-01
The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system that is optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. Then, PCR products were generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a subsequent barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in the targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA, as well as from STR loci that are analyzed with large-sized amplicons in CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-sized amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Methodological issues with adaptation of clinical trial design.
Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T
2006-01-01
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is how to select an adaptation rule in the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data raises serious concerns among clinical trial researchers.
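The Bonferroni endpoint-selection strategy mentioned above can be checked with a small simulation. This sketch is not from the paper: it assumes two independent endpoints whose one-sided test statistics are standard normal under the null, each tested at α/2 = 0.025, with success declared if either is significant.

```python
# Family-wise type I error of testing two endpoints at alpha/2 each
# (Bonferroni split of a 5% budget), simulated under the global null.

import random

random.seed(1)
Z_CRIT = 1.960   # one-sided critical value for alpha/2 = 0.025 per endpoint

n_sim = 20000
false_positives = 0
for _ in range(n_sim):
    # Under the null, each endpoint's one-sided test statistic is ~ N(0, 1);
    # the two endpoints are simulated as independent for simplicity.
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    if z1 > Z_CRIT or z2 > Z_CRIT:   # declare success on either endpoint
        false_positives += 1

fwer = false_positives / n_sim
print(round(fwer, 3))   # close to 1 - 0.975**2 = 0.049, inside the 0.05 budget
```

With positively correlated endpoints the procedure only becomes more conservative, which is why the authors can treat the equal alpha split as a safe benchmark.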
NASA Astrophysics Data System (ADS)
Kalaycı, Özlem A.; Duygulu, Özgür; Hazer, Baki
2013-01-01
This study refers to the synthesis and characterization of a novel organic/inorganic hybrid nanocomposite material containing cadmium sulfide (CdS) nanoparticles. For this purpose, a series of polypropylene (PP)-g-polyethylene glycol (PEG), PP-g-PEG, comb-type amphiphilic graft copolymers were synthesized. PEGs with Mn = 400, 2000, 3350, and 8000 Da were used, and the graft copolymers obtained were coded as PPEG400, PPEG2000, PPEG3350, and PPEG8000. CdS nanoparticles were formed in a tetrahydrofuran solution of the PP-g-PEG amphiphilic comb-type copolymer by the simultaneous reaction between aqueous solutions of Na2S and Cd(CH3COO)2. Micelle formation of the PPEG2000 comb-type amphiphilic graft copolymer in a solvent/non-solvent mixture (petroleum ether-THF) was observed by transmission electron microscopy (TEM). The optical characteristics, size, morphology, phase analysis, and dispersion of the CdS nanoparticles embedded in PPEG400, PPEG2000, PPEG3350, and PPEG8000 comb-type amphiphilic graft copolymer micelles were determined by high resolution TEM (HRTEM), energy dispersive spectroscopy, UV-vis spectroscopy, and fluorescence emission spectroscopy. The aggregate size of PPEG2000-CdS is between 10 and 50 nm; however, in the case of the PPEG400-CdS, PPEG3350-CdS, and PPEG8000-CdS samples, it is up to approximately 100 nm. The size of the CdS quantum dots in the aggregates for the PPEG2000 and PPEG8000 samples was observed to be 5 nm by HRTEM analysis, and this result was also supported by the UV-vis absorbance and fluorescence emission spectra.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. 
Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
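The parametric bootstrap power calculation described above can be sketched in a few lines. The gamma shape/scale values, the fixed coefficient of variation, and the normal-approximation z-test below are illustrative assumptions, not the authors' fitted mean-to-variance model from the West Virginia data.

```python
# Simulated power of an n-fish sample to detect a mean Se concentration
# above a management threshold, drawing tissue values from a gamma model.

import math
import random

random.seed(42)

def detection_power(n_fish, true_mean, threshold, cv=0.3, n_boot=2000):
    """Fraction of simulated samples whose one-sided z-test
    (alpha = 0.05, critical value 1.645) puts the mean above threshold."""
    shape = 1.0 / cv ** 2        # gamma shape from the coefficient of variation
    scale = true_mean / shape    # so that mean = shape * scale
    rejections = 0
    for _ in range(n_boot):
        sample = [random.gammavariate(shape, scale) for _ in range(n_fish)]
        mean = sum(sample) / n_fish
        var = sum((x - mean) ** 2 for x in sample) / (n_fish - 1)
        if (mean - threshold) / math.sqrt(var / n_fish) > 1.645:
            rejections += 1
    return rejections / n_boot

# Power to detect a true mean 1 mg Se/kg above a 4 mg Se/kg threshold
# with a sample of 8 fish (roughly the scenario quoted above):
pw = detection_power(n_fish=8, true_mean=5.0, threshold=4.0)
print(round(pw, 2))
```

Because the paper's populations grow more heterogeneous at higher mean Se levels, its actual model lets the variance rise with the mean rather than fixing the coefficient of variation as done here.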
The solvation study of carbon, silicon and their mixed nanotubes in water solution.
Hashemi Haeri, Haleh; Ketabi, Sepideh; Hashemianzadeh, Seyed Majid
2012-07-01
Nanotubes are believed to open the road toward different modern fields, both technological and biological. However, the applications of nanotubes have been badly impeded by their poor solubility in water, which is especially essential for studies in the presence of living cells. Therefore, water-soluble samples are in demand. Herein, the outcomes of Monte Carlo simulations of different sets of multiwall nanotubes immersed in water are reported. A number of multiwall nanotube samples, comprised of pure carbon, pure silicon, and several mixtures of carbon and silicon, are the subjects of study. The simulations are carried out in an (N,V,T) ensemble. The purpose of this report is to examine the effects of nanotube size (diameter) and nanotube type (pure carbon, pure silicon, or a mixture of carbon and silicon) on the solubility of multiwall nanotubes in terms of the number of water molecules in the shell volume. It is found that the solubility of the multiwall carbon nanotube samples is size independent, whereas the solubility of the multiwall silicon nanotube samples varies with the diameter of the inner tube. The higher solubility of samples containing silicon can be attributed to the larger atomic size of silicon, which provides more direct contact with the water molecules. The other contributing factor is the bigger interspace (the space between the inner and outer tubes) in the case of the silicon samples. Carbon-type multiwall nanotubes appear to be better candidates for transporting water molecules through a multiwall nanotube structure, while for water adsorption problems it is better to use multiwall silicon nanotubes or a mixture of multiwall carbon/silicon nanotubes.
Anthrax Sampling and Decontamination: Technology Trade-Offs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Phillip N.; Hamachi, Kristina; McWilliams, Jennifer
2008-09-12
The goal of this project was to answer the following questions concerning response to a future anthrax release (or suspected release) in a building: 1. Based on past experience, what rules of thumb can be determined concerning: (a) the amount of sampling that may be needed to determine the extent of contamination within a given building; (b) what portions of a building should be sampled; (c) the cost per square foot to decontaminate a given type of building using a given method; (d) the time required to prepare for, and perform, decontamination; (e) the effectiveness of a given decontamination method in a given type of building? 2. Based on past experience, what resources will be spent on evaluating the extent of contamination, performing decontamination, and assessing the effectiveness of the decontamination in a building of a given type and size? 3. What are the trade-offs between cost, time, and effectiveness for the various sampling plans, sampling methods, and decontamination methods that have been used in the past?
Han, Yanxi; Li, Jinming
2017-10-26
In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, the heterogeneity of inter- or intratumors, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them can individually be a "one size fits all" solution. Therefore, in this review, we summarize the strengths and weaknesses of the different sample types that are widely used in clinical practice, offer solutions to reduce the negative impact of the samples, and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.
Two-sample binary phase 2 trials with low type I error and low sample size
Litwin, Samuel; Basickes, Stanley; Ross, Eric A.
2017-01-01
We address the design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E − C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
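The type I error of a rule of the form E − C > r can be computed exactly by summing joint binomial probabilities under the null. The sketch below is an illustrative single-stage calculation, not the authors' two-stage design; the arm sizes (2:1 randomization), p0, and r are invented values chosen to show the computation.

```python
# Exact type I error of the rejection rule E - C > r, with E and C
# independent binomial counts under a common null response rate p0.

from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def type1_error(n_exp, n_ctl, p0, r):
    """P(E - C > r) when E ~ Bin(n_exp, p0) and C ~ Bin(n_ctl, p0)."""
    return sum(binom_pmf(e, n_exp, p0) * binom_pmf(c, n_ctl, p0)
               for e in range(n_exp + 1)
               for c in range(n_ctl + 1)
               if e - c > r)

# 2:1 randomization as in the paper: 30 experimental vs. 15 control patients,
# null response rate p0 = 0.20, rejection rule E - C > 7 (all values invented).
alpha = type1_error(n_exp=30, n_ctl=15, p0=0.20, r=7)
print(round(alpha, 3))
```

Raising r shrinks the rejection region, so the error rate falls monotonically; the paper's contribution is to pair such a rule with a one-sample condition E ≥ m so that the error stays low without inflating the sample size.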
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous works used sample sizes that included all the outliers lying beyond standard deviation distances from the peak of the histogram. This paper describes the statistical performance of the RGB model with and without these outliers removed. Plaque lesions are compared with other types of psoriasis. The statistical tests are compared with respect to three sample sizes: the original 90 samples, a first reduction that removes outliers beyond a 2 standard deviation distance (2SD), and a second reduction that removes outliers beyond a 1 standard deviation distance (1SD). Quantification of the image data through the normal/direct and differential variants of the conventional reflectance method is considered. Performance is assessed from error plots with 95% confidence intervals and from inference T-tests. The statistical test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque from the other psoriasis groups, consistent with the error plot findings, with an improvement in p-value greater than 0.5.
Statistical theory and methodology for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Odell, P. L.
1974-01-01
A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem, wheat vs. non-wheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problems of crop acreage estimation and error analysis are discussed.
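Once the problem is reduced to two crops, acreage estimation becomes estimation of a single proportion, so the standard sample-size formula applies. This sketch is a textbook calculation, not the report's method; the wheat fraction and margin below are invented.

```python
# Sample size to estimate a proportion p within margin d at 95% confidence:
# n = z^2 * p * (1 - p) / d^2, rounded up to the next whole sampling unit.

import math

def sample_size_for_proportion(p, d, z=1.96):
    """Number of sampled units needed for a +/- d margin around p."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# e.g. a wheat fraction near 0.3, estimated to within +/- 0.05:
n = sample_size_for_proportion(p=0.30, d=0.05)
print(n)  # → 323
```

Taking p = 0.5 gives the conservative worst case (385 units here), which is the usual choice when the crop proportion is unknown in advance.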
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in the H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that, since the minimum number of required samples depends on rock type, it should be chosen to correspond to some acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
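The Monte Carlo experiment described above can be sketched with invented numbers (these are not the paper's parameter combinations or noise model): simulate small triaxial data sets from a known Hoek-Brown law for intact rock (s = 1), refit each set by least squares on the standard linearization (σ1 − σ3)² = (m·σc)·σ3 + σc², and look at the scatter of the recovered m.

```python
# Spread of Hoek-Brown m estimates from repeated 5-specimen triaxial tests.

import math
import random

random.seed(0)
M_TRUE, SIGC_TRUE = 10.0, 100.0   # assumed "true" intact-rock parameters

def hb_strength(sig3):
    """Hoek-Brown intact rock: sig1 = sig3 + sqrt(m*sigc*sig3 + sigc^2)."""
    return sig3 + math.sqrt(M_TRUE * SIGC_TRUE * sig3 + SIGC_TRUE ** 2)

def fit_m(data):
    """Fit y = slope*sig3 + intercept with y = (sig1 - sig3)^2;
    then sigc = sqrt(intercept) and m = slope / sigc."""
    n = len(data)
    xs = [s3 for s3, _ in data]
    ys = [(s1 - s3) ** 2 for s3, s1 in data]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = max(ybar - slope * xbar, 1e-9)   # guard: keep sigc^2 positive
    return slope / math.sqrt(intercept)

estimates = []
for _ in range(500):   # 500 Monte Carlo replicates of a 5-specimen data set
    data = [(s3, hb_strength(s3) * random.gauss(1.0, 0.05))  # 5% strength noise
            for s3 in (2.0, 5.0, 10.0, 20.0, 40.0)]
    estimates.append(fit_m(data))

estimates.sort()
spread = estimates[475] - estimates[25]   # central 90% range of m-hat
print(round(spread, 1))   # wide range, from only 5 specimens per fit
```

Even with modest 5% strength noise, the 90% range of the recovered m spans several units around the true value of 10, which is the small-sample uncertainty the paper quantifies more formally.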
Godri, Krystal J.; Harrison, Roy M.; Evans, Tim; Baker, Timothy; Dunster, Christina; Mudway, Ian S.; Kelly, Frank J.
2011-01-01
As the incidence of respiratory and allergic symptoms has been reported to be increased in children attending schools in close proximity to busy roads, it was hypothesised that PM from roadside schools would display enhanced oxidative potential (OP). Two consecutive one-week air quality monitoring campaigns were conducted at seven school sampling sites, reflecting roadside and urban background in London. Chemical characteristics of size-fractionated particulate matter (PM) samples were related to their capacity to drive biological oxidation reactions in a synthetic respiratory tract lining fluid. Contrary to the hypothesised contrasts in particulate OP between school site types, no robust size-fractionated differences in OP were identified, due to high temporal variability in the concentrations of PM components over the one-week sampling campaigns. For OP assessed both by ascorbate (OPAA m−3) and glutathione (OPGSH m−3) depletion, the highest OP per cubic metre of air was in the largest size fraction, PM1.9–10.2. However, when expressed per unit mass of particles, OPAA µg−1 showed no significant dependence upon particle size, while OPGSH µg−1 had a tendency to increase with increasing particle size, paralleling increased concentrations of Fe, Ba and Cu. The two OP metrics were not significantly correlated with one another, suggesting that the glutathione and ascorbate depletion assays respond to different components of the particles. Ascorbate depletion per unit mass did not show the same dependence as for GSH, and it is possible that other trace metals (Zn, Ni, V) or organic components which are enriched in the finer particle fractions, or the greater surface area of smaller particles, counter-balance the redox activity of Fe, Ba and Cu in the coarse particles. Further work with longer-term sampling and a larger suite of analytes is advised in order to better elucidate the determinants of oxidative potential, and to explore more fully the contrasts between site types. PMID:21818283
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
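The step-size condition can be made concrete with a toy example (the simplest possible stand-in, not the abstract's actual estimator): for the N(θ, 1) log-likelihood, the Fisher-scaled steepest-ascent update is θ ← θ + step·(x̄ − θ), so the error to the MLE x̄ is multiplied by (1 − step) each iteration and the scheme converges exactly when the step size lies between 0 and 2.

```python
# Scaled gradient ascent for a normal-mean MLE: converges iff 0 < step < 2.

def mle_ascent(theta0, xbar, step, n_iter=2000):
    """Iterate theta += step * (xbar - theta); error shrinks by (1 - step)."""
    theta = theta0
    for _ in range(n_iter):
        theta += step * (xbar - theta)
        if abs(theta - xbar) > 1e12:   # diverged; stop early
            break
    return theta

xbar = 3.0   # the sample mean, i.e. the maximum-likelihood estimate
for step in (0.5, 1.0, 1.5, 1.99, 2.5):
    converged = abs(mle_ascent(10.0, xbar, step) - xbar) < 1e-3
    print(step, converged)   # True for every step in (0, 2), False beyond
```

Step sizes just below 2 still converge but slowly (the error shrinks by only 1% per iteration at step = 1.99), which is why the optimal step in the abstract lies strictly between 1 and 2 rather than at the boundary.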
Estimating the quadratic mean diameters of fine woody debris in forests of the United States
Christopher W. Woodall; Vicente J. Monleon
2010-01-01
Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...
Thanh Noi, Phan; Kappas, Martin
2017-01-01
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km2 within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
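Overall accuracy, the headline metric in this comparison, is simply the proportion of correctly labelled validation pixels: the diagonal of the confusion matrix divided by its grand total. The 3-class matrix below is invented for illustration, not taken from the Sentinel-2 study.

```python
# Overall accuracy (OA) from a confusion matrix of reference vs. predicted.

confusion = [
    [480,  12,   8],   # rows: reference class, columns: predicted class
    [ 15, 455,  30],
    [  5,  25, 470],
]
correct = sum(confusion[i][i] for i in range(len(confusion)))
total = sum(sum(row) for row in confusion)
oa = correct / total
print(round(100 * oa, 2))  # → 93.67 (OA in percent)
```

With imbalanced training sets like some of those tested here, OA can mask poor performance on rare classes, which is one reason per-class producer's and user's accuracies are usually reported alongside it.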
Determining chewing efficiency using a solid test food and considering all phases of mastication.
Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W
2018-07-01
Following chewing of a solid food, the median particle size, X50, is determined after N chewing cycles by curve-fitting of the particle size distribution. Reduction of X50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil®, because of initially unreliable values of X50. The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e., chewing efficiency (the number of cycles needed to halve the initial particle size, N(1/2-Xo)) and chewing performance (X50 at a particular N-value, X50,N). Eight subjects with a natural dentition chewed four types of samples of Optosil particles: (1) 8 cubes of 8 mm, border size relative to bin size (the traditional test); (2) 9 half-cubes of 9.6 mm, mid-size relative to bin size, with a similar sample volume; (3) 4 half-cubes of 9.6 mm; and (4) 2 half-cubes of 9.6 mm, with reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd order polynomial function yielded log(X50)-log(N) relationships, after which N(1/2-Xo) and X50,N were obtained. Reliable X50-values are obtained for all N-values when using half-cubes with a mid-size relative to the bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-Xo) or X50,N needs fewer chewing cycles than the traditional test. Chewing efficiency is preferable to chewing performance because it compares inter-subject chewing ability at the same stage of food comminution, with constant intra-subject and inter-subject ratios between and within samples, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
The impact of multiple endpoint dependency on Q and I² in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
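The two homogeneity measures under study follow directly from inverse-variance weighting: Cochran's Q is the weighted sum of squared deviations from the pooled effect, and I² = max(0, (Q − df)/Q) × 100%. The sketch below uses invented effect sizes and variances, not the simulation conditions of the paper.

```python
# Fixed-effects Q and I^2 from per-study effect sizes and variances.

def q_and_i2(effects, variances):
    weights = [1.0 / v for v in variances]             # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

effects = [0.9, 0.1, 0.7, -0.3, 0.5]       # standardized mean differences
variances = [0.04, 0.05, 0.03, 0.06, 0.04]
q, i2 = q_and_i2(effects, variances)
print(round(q, 2), round(i2, 1))  # → 19.02 79.0
```

Note that I² is a ratio of Q to itself minus its degrees of freedom, which is one intuition for the paper's finding: dependence that inflates Q's sampling distribution (and hence Type I error) can leave the mean of I² comparatively stable.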
Filter Membrane Effects on Water-Extractable Phosphorus Concentrations from Soil.
Norby, Jessica; Strawn, Daniel; Brooks, Erin
2018-03-01
To accurately assess P concentrations in soil extracts, standard laboratory practices for monitoring P concentrations are needed. Water-extractable P is a common analytical test to determine P availability for leaching from soils, and it is used to determine best management practices. Most P analytical tests require filtration through a filter membrane with 0.45-μm pore size to distinguish between particulate and dissolved P species. However, filter membrane type is rarely specified in method protocols, and many different types of membranes are available. In this study, three common filter membrane materials (polyether sulfone, nylon, and nitrocellulose), all with 0.45-μm pore sizes, were tested for analytical differences in total P concentrations and dissolved reactive P (DRP) concentrations in water extracts from six soils sampled from two regions. Three of the extracts from the six soil samples had different total P concentrations for all three membrane types. The other three soil extracts had significantly different total P results from at least one filter membrane type. Total P concentration differences were as great as 35%. The DRP concentrations in the extracts were dependent on filter type in five of the six soil types. Results from this research show that filter membrane type is an important parameter that affects concentrations of total P and DRP from soil extracts. Thus, membrane type should be specified in soil extraction protocols. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
NASA Astrophysics Data System (ADS)
Kohnová, Silvia; Gaál, Ladislav; Bacigál, Tomáš; Szolgay, Ján; Hlavčová, Kamila; Valent, Peter; Parajka, Juraj; Blöschl, Günter
2016-12-01
The case study aims at selecting optimal bivariate copula models of the relationships between flood peaks and flood volumes from a regional perspective with a particular focus on flood generation processes. Besides the traditional approach that deals with the annual maxima of flood events, the current analysis also includes all independent flood events. The target region is located in the northwest of Austria; it consists of 69 small and mid-sized catchments. On the basis of the hourly runoff data from the period 1976-2007, independent flood events were identified and assigned to one of the following three types of flood categories: synoptic floods, flash floods and snowmelt floods. Flood events in the given catchment are considered independent when they originate from different synoptic situations. Nine commonly-used copula types were fitted to the flood peak-flood volume pairs at each site. In this step, two databases were used: i) a process-based selection of all the independent flood events (three data samples at each catchment) and ii) the annual maxima of the flood peaks and the respective flood volumes regardless of the flood processes (one data sample per catchment). The goodness-of-fit of the nine copula types was examined on a regional basis throughout all the catchments. It was concluded that (1) the copula models for the flood processes are discernible locally; (2) the Clayton copula provides an unacceptable performance for all three processes as well as in the case of the annual maxima; (3) the rejection of the other copula types depends on the flood type and the sample size; (4) there are differences in the copulas with the best fits: for synoptic and flash floods, the best performance is associated with the extreme value copulas; for snowmelt floods, the Frank copula fits the best; while in the case of the annual maxima, no firm conclusion could be made due to the number of copulas with similarly acceptable overall performances. 
The general conclusion from this case study is that treating flood processes separately is beneficial; however, the usually available sample size in such real life studies is not sufficient to give generally valid recommendations for engineering design tasks.
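As a concrete aside, the Clayton copula that performed poorly here has a simple closed form, which makes the fitting step easy to sketch; the code below is a generic illustration (theta > 0 assumed), not the study's implementation.

```python
# Clayton copula CDF, C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
def clayton_cdf(u, v, theta):
    return max(u ** -theta + v ** -theta - 1.0, 0.0) ** (-1.0 / theta)

# Moment (inversion-of-tau) estimator: for the Clayton copula,
# Kendall's tau = theta / (theta + 2), so theta = 2 * tau / (1 - tau).
def clayton_theta_from_tau(tau):
    return 2.0 * tau / (1.0 - tau)
```

An empirical Kendall's tau of the peak-volume pairs thus pins down theta directly, after which goodness-of-fit of the fitted copula can be tested.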
The Petersen-Lincoln estimator and its extension to estimate the size of a shared population.
Chao, Anne; Pan, H-Y; Chiang, Shu-Chuan
2008-12-01
The Petersen-Lincoln estimator has been used to estimate the size of a population in a single mark-release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation of "independence" between samples based on 2 × 2 categorical data formed by capture/non-capture in each of the two samples. From this interpretation, we review a general measure of "dependence" and quantify the correlation bias of the Petersen-Lincoln estimator when two types of dependence (local list dependence and heterogeneity of capture probability) exist. An important implication for the census undercount problem is that instead of using a post-enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen-Lincoln method to the case of two populations. A new estimator of the size of the shared population is proposed and its variance is derived. We discuss a special case where the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim).
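For reference, the basic estimator discussed above has a one-line form. The sketch below (including the common Chapman small-sample correction) is illustrative of the single-population case only, not the paper's extended two-population method.

```python
# Petersen-Lincoln abundance estimate from a single mark-release experiment.
# Valid only when capture and recapture samples are independent; positive
# dependence between the two samples biases the estimate downward
# (correlation bias), as discussed in the abstract.
def petersen_lincoln(n1, n2, m):
    """n1: marked in first sample; n2: caught in second sample;
    m: marked recaptures. Returns (naive, Chapman-corrected) estimates."""
    naive = n1 * n2 / m
    chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1   # less biased when m is small
    return naive, chapman
```

For example, marking 100 animals and recapturing 20 marked among a second sample of 100 gives a naive estimate of 500 animals.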
Specific absorption and backscatter coefficient signatures in southeastern Atlantic coastal waters
NASA Astrophysics Data System (ADS)
Bostater, Charles R., Jr.
1998-12-01
Hyperspectral signatures of total absorption and reflectance of natural water samples were measured in the field and laboratory using long-pathlength (50 cm) absorption systems. Water was sampled in the Indian River Lagoon, Banana River and Port Canaveral, Florida. Stations were also occupied in near-coastal waters out to the edge of the Gulf Stream in the vicinity of Kennedy Space Center, Florida, and in estuarine waters along Port Royal Sound and the Beaufort River tidal area in South Carolina. The measurements were used to calculate specific absorption, total backscatter and specific backscatter optical signatures of natural waters. The resulting optical cross-section signatures suggest that different models are needed for the different water types and that the common linear model may be appropriate only for coastal and oceanic water types. Mean particle size estimates based on the optical cross sections suggest, as expected, that oceanic particles are smaller than those in more turbid water types. The data presented and discussed are necessary for remote sensing applications as well as for the development and inversion of remote sensing algorithms.
Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis
NASA Astrophysics Data System (ADS)
Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert
2009-05-01
We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.
Acceptor-modulated optical enhancements and band-gap narrowing in ZnO thin films
NASA Astrophysics Data System (ADS)
Hassan, Ali; Jin, Yuhua; Irfan, Muhammad; Jiang, Yijian
2018-03-01
The Fermi-Dirac distribution for doped semiconductors and the Burstein-Moss effect have been correlated for the first time to determine the conductivity type of ZnO. Hall effect measurements in the Van der Pauw configuration were applied to confirm our theoretical estimates, which support our assumption. Band-gap narrowing was found in all p-type samples, whereas a blue Burstein-Moss shift was recorded in the n-type films. Atomic force microscopy (AFM) analysis shows that both p-type and n-type films have almost the same granular structure, with only a minor change in average grain size (~6 nm to 10 nm) and a surface roughness rms value of 3 nm for a thickness of ~315 nm, which indicates that grain size and surface roughness did not play a significant role in modulating the conductivity type of ZnO. X-ray diffraction (XRD), energy-dispersive X-ray spectroscopy (EDS) and X-ray photoelectron spectroscopy (XPS) were employed for structural, chemical and elemental analysis. A hexagonal wurtzite structure was observed in all samples. The introduction of nitrogen reduces the crystallinity of the host lattice. A transmittance of 97% in the visible range and an optical conductivity of 1.4 × 10⁷ Ω⁻¹ cm⁻¹ were measured. The high absorption in the ultraviolet (UV) region indicates that the N-doped ZnO (NZO) thin films can be used to fabricate next-generation high-performance UV detectors.
Survival analysis and classification methods for forest fire size.
Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G
2018-01-01
Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497
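The Cox model preferred in this study rests on the partial likelihood; as a hedged sketch (single covariate, no tied event times, hypothetical data), its core can be written as:

```python
import math

# Cox partial log-likelihood for one covariate, assuming no tied event times.
# beta is the log hazard ratio; events[i] is 1 for an observed event, 0 if censored.
def cox_partial_loglik(times, events, x, beta):
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if events[i]:
            # risk set: all subjects still under observation at time times[i]
            risk = sum(math.exp(beta * x[j]) for j in order[k:])
            ll += beta * x[i] - math.log(risk)
    return ll
```

Maximising this over beta (e.g. by a grid search or Newton step) gives the hazard-ratio estimate; note that with a constant covariate the partial likelihood is flat in beta, since only contrasts between subjects in the risk set matter.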
NASA Astrophysics Data System (ADS)
Yoon, Yongmin; Im, Myungshin; Kim, Jae-Woo
2017-01-01
Under the Λ cold dark matter (ΛCDM) cosmological models, massive galaxies are expected to be larger in denser environments through frequent hierarchical mergers with other galaxies. Yet, observational studies of low-redshift early-type galaxies have shown no such trend, standing as a puzzle to solve during the past decade. We analyzed 73,116 early-type galaxies at 0.1 ≤ z < 0.15, adopting a robust nonparametric size measurement technique and extending the analysis to many massive galaxies. We find for the first time that local early-type galaxies more massive than 10^11.2 M⊙ show a clear environmental dependence in the mass-size relation, in such a way that galaxies are as much as 20%-40% larger in the densest environments than in underdense environments. Splitting the sample into the brightest cluster galaxies (BCGs) and non-BCGs does not affect the result. This result agrees with the ΛCDM cosmological simulations and suggests that mergers played a significant role in the growth of massive galaxies in dense environments as expected in theory.
Soft γ-ray selected radio galaxies: favouring giant size discovery
NASA Astrophysics Data System (ADS)
Bassani, L.; Venturi, T.; Molina, M.; Malizia, A.; Dallacasa, D.; Panessa, F.; Bazzano, A.; Ubertini, P.
2016-09-01
Using the recent INTEGRAL/IBIS and Swift/BAT surveys we have extracted a sample of 64 confirmed plus three candidate radio galaxies selected in the soft gamma-ray band. The sample covers all optical classes and is dominated by objects showing a Fanaroff-Riley type II radio morphology; a large fraction (70 per cent) of the sample is made of `radiative mode' or high-excitation radio galaxies. We measured the source size on images from the NRAO VLA Sky Survey, the Faint Images of the Radio Sky at twenty-cm and the Sydney University Molonglo Sky Survey images and have compared our findings with data in the literature obtaining a good match. We surprisingly found that the soft gamma-ray selection favours the detection of large size radio galaxies: 60 per cent of objects in the sample have size greater than 0.4 Mpc while around 22 per cent reach dimension above 0.7 Mpc at which point they are classified as giant radio galaxies (GRGs), the largest and most energetic single entities in the Universe. Their fraction among soft gamma-ray selected radio galaxies is significantly larger than typically found in radio surveys, where only a few per cent of objects (1-6 per cent) are GRGs. This may partly be due to observational biases affecting radio surveys more than soft gamma-ray surveys, thus disfavouring the detection of GRGs at lower frequencies. The main reasons and/or conditions leading to the formation of these large radio structures are still unclear with many parameters such as high jet power, long activity time and surrounding environment all playing a role; the first two may be linked to the type of active galactic nucleus discussed in this work and partly explain the high fraction of GRGs found in the present sample. Our result suggests that high energy surveys may be a more efficient way than radio surveys to find these peculiar objects.
Reduction Behavior of Assmang and Comilog ore in the SiMn Process
NASA Astrophysics Data System (ADS)
Kim, Pyunghwa Peace; Holtan, Joakim; Tangstad, Merete
The reduction behavior of raw materials in Assmang- and Comilog-based charges was experimentally investigated in CO gas up to 1600 °C. Quartz, HC FeMn slag or limestone was added to Assmang or Comilog ore according to the SiMn production charge, and mass loss was measured in a TGA furnace. The results showed that particle size, type of manganese ore and mixture composition are closely related to the reduction behavior of the raw materials during MnO and SiO2 reduction. The influence of particle size on mass loss was apparent when Assmang or Comilog was mixed with coke alone (FeMn charge), but became insignificant when quartz and HC FeMn slag were added (SiMn charge). This implies that quartz and HC FeMn slag favored incipient slag formation regardless of particle size, which explains the similar mass loss tendencies of the SiMn charge samples between 1200 °C and 1500 °C, contrary to the FeMn charge samples, where different particle sizes showed significant differences in mass loss. Also, while the FeMn charge samples showed progressive mass loss, the SiMn charge samples showed little mass loss up to 1500 °C. Above 1500 °C, however, rapid mass losses were observed in the SiMn charge samples, occurring at different temperatures; this implies rapid reduction of MnO and SiO2, with the type of ore and the addition of HC FeMn slag having a significant influence on these temperatures. The temperatures observed for rapid mass loss were approximately 1503 °C (quartz and HC FeMn slag added to Assmang), 1543 °C (quartz added to Assmang) and 1580-1587 °C (quartz and limestone added to Comilog). These temperatures also indicate that SiMn production may be possible at process temperatures below 1550 °C.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types that differ in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
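The empirical-P strategy in (iii) can be sketched as follows; this is a generic illustration with hypothetical counts and a Haldane-Anscombe correction as an assumed guard against empty cells, not the authors' code.

```python
import math
import random

def odds_ratio(a, b, c, d):
    # 0.5 added to every cell (Haldane-Anscombe) to guard against zeros
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

# Empirical P value for a 2x2 allele-by-type count table: resample the pooled
# alleles with replacement under H0 (no association) and compare |log OR|.
def bootstrap_or_pvalue(a, b, c, d, n_boot=2000, seed=7):
    rng = random.Random(seed)
    obs = abs(math.log(odds_ratio(a, b, c, d)))
    pool = [1] * (a + c) + [0] * (b + d)      # pooled alleles under H0
    n1 = a + b
    hits = 0
    for _ in range(n_boot):
        s = rng.choices(pool, k=len(pool))    # bootstrap resample
        a2, c2 = sum(s[:n1]), sum(s[n1:])
        stat = abs(math.log(odds_ratio(a2, n1 - a2, c2, len(pool) - n1 - c2)))
        if stat >= obs:
            hits += 1
    return (hits + 1) / (n_boot + 1)          # add-one rule keeps P > 0
```

The add-one rule avoids an empirical P value of exactly zero, which would be an artifact of the finite number of resamples.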
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellari, Michele
2013-11-20
The distribution of galaxies on the mass-size plane as a function of redshift or environment is a powerful test for galaxy formation models. Here we use integral-field stellar kinematics to interpret the variation of the mass-size distribution in two galaxy samples spanning extreme environmental densities. The samples are both identically and nearly mass-selected (stellar mass M* ≳ 6 × 10⁹ M⊙) and volume-limited. The first consists of nearby field galaxies from the ATLAS3D parent sample. The second consists of galaxies in the Coma Cluster (Abell 1656), one of the densest environments for which good, resolved spectroscopy can be obtained. The mass-size distribution in the dense environment differs from the field one in two ways: (1) spiral galaxies are replaced by bulge-dominated disk-like fast-rotator early-type galaxies (ETGs), which follow the same mass-size relation and have the same mass distribution as in the field sample; (2) the slow-rotator ETGs are segregated in mass from the fast rotators, with their size increasing proportionally to their mass. A transition between the two processes appears around the stellar mass M_crit ≈ 2 × 10¹¹ M⊙. We interpret this as evidence for bulge growth (outside-in evolution) and bulge-related environmental quenching dominating at low masses, with little influence from merging. In contrast, significant dry mergers (inside-out evolution) and halo-related quenching drive the mass and size growth at the high-mass end. The existence of these two processes naturally explains the diverse size evolution of galaxies of different masses and the separability of mass and environmental quenching.
NASA Astrophysics Data System (ADS)
Weinkauf, Manuel F. G.; Milker, Yvonne
2018-05-01
Benthic Foraminifera assemblages are employed for past environmental reconstructions, as well as for biomonitoring studies in recent environments. Despite their established status for such applications, and existing protocols for sample treatment, not all studies using benthic Foraminifera employ the same methodology. For instance, there is no broad practical consensus whether to use the >125 µm or >150 µm size fraction for benthic foraminiferal assemblage analyses. Here, we use early Pleistocene material from the Pefka E section on the Island of Rhodes (Greece), which has been counted in both size fractions, to investigate whether a 25 µm difference in the counted fraction is already sufficient to have an impact on ecological studies. We analysed the influence of the difference in size fraction on studies of biodiversity as well as multivariate assemblage analyses of the sample material. We found that for both types of studies, the general trends remain the same regardless of the chosen size fraction, but in detail significant differences emerge which are not consistently distributed between samples. Studies which require a high degree of precision can thus not compare results from analyses that used different size fractions, and the inconsistent distribution of differences makes it impossible to develop corrections for this issue. We therefore advocate the consistent use of the >125 µm size fraction for benthic foraminiferal studies in the future.
Cracks and nanodroplets produced on tungsten surface samples by dense plasma jets
NASA Astrophysics Data System (ADS)
Ticoş, C. M.; Galaţanu, M.; Galaţanu, A.; Luculescu, C.; Scurtu, A.; Udrea, N.; Ticoş, D.; Dumitru, M.
2018-03-01
Small samples of 12.5 mm in diameter made from pure tungsten were exposed to a dense plasma jet produced by a coaxial plasma gun operated at 2 kJ. The surface of the samples was analyzed using a scanning electron microscope (SEM) before and after applying consecutive plasma shots. Cracks and craters were produced in the surface due to surface tensions during plasma heating. Nanodroplets and micron-size droplets could be observed on the sample surfaces. An energy-dispersive spectroscopy (EDS) analysis revealed that the composition of these droplets coincided with that of the gun electrode material. Four types of samples were prepared by spark plasma sintering from powders with average particle sizes ranging from 70 nm up to 80 μm. The plasma power load on the sample surface was estimated to be ≈4.7 MJ m⁻² s⁻¹/² per shot. The electron temperature and density in the plasma jet had peak values of 17 eV and 1.6 × 10²² m⁻³, respectively.
NASA Astrophysics Data System (ADS)
Pijarowski, Piotr Marek; Tic, Wilhelm Jan
2014-06-01
A study of diatomite sorbents was carried out to investigate their ability to remove hazardous substances from oil spillages. Two types of commercially available sorbents with similar chemical composition but different material density and granulometric (particle-size) composition were used; they are designated samples D1 and C1. As sorbates we used Ekoterm oil and unleaded petrol 95 from the PKN Orlen S.A. refinery. Sample C1 absorbed the fastest, but sample D1 was the most absorptive.
Effects of Test Level Discrimination and Difficulty on Answer-Copying Indices
ERIC Educational Resources Information Center
Sunbul, Onder; Yormaz, Seha
2018-01-01
In this study Type I error and the power rates of omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40 and 80-item test lengths with 10,000-examinee sample size under several test level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
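The GBT index is, at its core, a generalized binomial (Poisson binomial) tail probability: given per-item probabilities that two examinees match by chance, it asks how likely the observed number of identical answers would be under no copying. A minimal sketch of that computation (illustrative match probabilities, not the study's model-based ones):

```python
# Exact tail probability P(matches >= observed) for independent items with
# possibly different per-item chance-match probabilities, via dynamic programming.
def gbt_pvalue(match_probs, observed):
    dist = [1.0]                          # P(0 matches) after zero items
    for p in match_probs:
        new = [0.0] * (len(dist) + 1)
        for k, pk in enumerate(dist):
            new[k] += pk * (1 - p)        # no match on this item
            new[k + 1] += pk * p          # match on this item
        dist = new
    return sum(dist[observed:])
```

For example, with four items each matching by chance with probability 0.5, observing all four answers identical has tail probability 0.5⁴ = 0.0625.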
The 11 micron Silicon Carbide Feature in Carbon Star Shells
NASA Technical Reports Server (NTRS)
Speck, A. K.; Barlow, M. J.; Skinner, C. J.
1996-01-01
Silicon carbide (SiC) is known to form in circumstellar shells around carbon stars. SiC can come in two basic types - hexagonal alpha-SiC or cubic beta-SiC. Laboratory studies have shown that both types of SiC exhibit an emission feature in the 11-11.5 micron region, the size and shape of the feature varying with the type, size and shape of the SiC grains. Such a feature can be seen in the spectra of carbon stars. Silicon carbide grains have also been found in meteorites. The aim of the current work is to identify the type(s) of SiC found in circumstellar shells and how they might relate to meteoritic SiC samples. We have used the CGS3 spectrometer at the 3.8 m UKIRT to obtain 7.5-13.5 micron spectra of 31 definite or proposed carbon stars. After flux-calibration, each spectrum was fitted using a χ²-minimisation routine equipped with the published laboratory optical constants of six different samples of small SiC particles, together with the ability to fit the underlying continuum using a range of grain emissivity laws. It was found that the majority of observed SiC emission features could only be fitted by alpha-SiC grains. The lack of beta-SiC is surprising, as this is the form most commonly found in meteorites. Included in the sample were four sources, all of which have been proposed to be carbon stars, that appear to show the SiC feature in absorption.
Recommendations for the use of mist nets for inventory and monitoring of bird populations
C. John Ralph; Erica H. Dunn; Will J. Peach; Colleen M. Handel
2004-01-01
We provide recommendations on the best practices for mist netting for the purposes of monitoring population parameters such as abundance and demography. Studies should be carefully thought out before nets are set up, to ensure that sampling design and estimated sample size will allow study objectives to be met. Station location, number of nets, type of nets, net...
Mumford, Jeanette A.
2017-01-01
Even after thorough preprocessing and a careful time series analysis of functional magnetic resonance imaging (fMRI) data, artifact and other issues can lead to violations of the assumption that the variance is constant across subjects in the group level model. This is especially concerning when modeling a continuous covariate at the group level, as the slope is easily biased by outliers. Various models have been proposed to deal with outliers including models that use the first level variance or that use the group level residual magnitude to differentially weight subjects. The most typically used robust regression, implementing a robust estimator of the regression slope, has been previously studied in the context of fMRI studies and was found to perform well in some scenarios, but a loss of Type I error control can occur for some outlier settings. A second type of robust regression using a heteroscedastic autocorrelation consistent (HAC) estimator, which produces robust slope and variance estimates has been shown to perform well, with better Type I error control, but with large sample sizes (500–1000 subjects). The Type I error control with smaller sample sizes has not been studied in this model and has not been compared to other modeling approaches that handle outliers such as FSL’s Flame 1 and FSL’s outlier de-weighting. Focusing on group level inference with a continuous covariate over a range of sample sizes and degree of heteroscedasticity, which can be driven either by the within- or between-subject variability, both styles of robust regression are compared to ordinary least squares (OLS), FSL’s Flame 1, Flame 1 with outlier de-weighting algorithm and Kendall’s Tau. Additionally, subject omission using the Cook’s Distance measure with OLS and nonparametric inference with the OLS statistic are studied. 
Pros and cons of these models as well as general strategies for detecting outliers in data and taking precaution to avoid inflated Type I error rates are discussed. PMID:28030782
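Of the approaches above, Kendall's tau is the simplest to state; a minimal tau-a sketch (no tie handling, hypothetical data) shows why it resists outliers: only the ordering of values matters, not their magnitude.

```python
# Kendall's tau-a: (concordant - discordant) pairs over all pairs.
def kendall_tau(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Replacing the largest y value by an arbitrarily extreme one leaves tau unchanged, which is the robustness property exploited at the group level.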
Adjusting for multiple prognostic factors in the analysis of randomised trials
2013-01-01
Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation (1) to determine whether a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. 
Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes, however treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
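Of the adjustment methods compared, the Mantel-Haenszel common odds ratio has the simplest closed form; the sketch below (with hypothetical 2 × 2 tables) illustrates it for a binary outcome.

```python
# Mantel-Haenszel pooled odds ratio across strata.
# Each stratum is (a, b, c, d): treated events, treated non-events,
# control events, control non-events.
def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

Because each stratum contributes weights proportional to its own size, sparse strata are down-weighted rather than discarded, which is why the estimator remains usable when some strata are small.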
Effects of normalization on quantitative traits in association test
2009-01-01
Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
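The rank-based transformation discussed above is commonly the rank-based inverse normal transformation; a minimal sketch follows (the Blom offset c = 3/8 is an assumed convention, and ties are broken arbitrarily here, which a production version would handle by averaging ranks).

```python
import statistics

# Map each value to the normal quantile of its (offset) rank, so the
# transformed trait is approximately standard normal whatever the
# original trait distribution looked like.
def rank_inverse_normal(values, c=0.375):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = statistics.NormalDist()
    out = [0.0] * n
    for rank, i in enumerate(order, start=1):
        out[i] = nd.inv_cdf((rank - c) / (n - 2 * c + 1))
    return out
```

The transform preserves the ordering of trait values while pulling in extreme observations, which matches the reported behaviour: better sensitivity for skewed traits at the cost of a slight increase in false positives.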
Characterizing temporal changes of agricultural particulate matter number concentrations
NASA Astrophysics Data System (ADS)
Docekal, G. P.; Mahmood, R.; Larkin, G. P.; Silva, P. J.
2017-12-01
It is widely accepted in the literature that particulate matter (PM) is detrimental to human health and to the environment as a whole. These effects can vary depending on particle size. This study examines PM size distributions and number concentrations at a poultry house. Despite much literature on PM concentrations at agricultural facilities, few studies have examined the size distribution of particles at such facilities from the nucleation mode up through the coarse mode. Two optical particle counters (OPCs) were deployed, one inside a chicken house and one outside an exhaust fan, to determine particle size distributions. In addition, a scanning mobility particle sizer (SMPS) and an aerodynamic particle sizer (APS) sampled poultry house particles to give sizing information over a full size range of 10 nm to 20 μm. The data collected show several different types of events during which the observed size distributions changed. While some of these are due to expected dust generation events producing coarse-mode particles, others suggest that particle nucleation and accumulation events at the smaller size ranges also occurred. The data suggest that agricultural facilities have an impact on the presence of PM in the environment beyond the generation of coarse-mode dust. The different types of size distribution changes observed will be discussed.
Impact of particle size on distribution and human exposure of flame retardants in indoor dust.
He, Rui-Wen; Li, Yun-Zi; Xiang, Ping; Li, Chao; Cui, Xin-Yi; Ma, Lena Q
2018-04-01
The effect of dust particle size on the distribution and bioaccessibility of flame retardants (FRs) in indoor dust remains unclear. In this study, we analyzed 20 FRs (including 6 organophosphate flame retardants (OPFRs), 8 polybrominated diphenyl ethers (PBDEs), 4 novel brominated flame retardants (NBFRs), and 2 dechlorane plus (DPs)) in composite dust samples from offices, public microenvironments (PME), and cars in Nanjing, China. Each composite sample (one per microenvironment) was separated into 6 size fractions (F1-F6: 200-2000 µm, 150-200 µm, 100-150 µm, 63-100 µm, 43-63 µm, and <43 µm). FR concentrations were the highest in car dust, being 16 and 6 times higher than those in offices and PME. The distribution of FRs in different size fractions was Kow-dependent and affected by surface area (Log Kow=1-4), total organic carbon (Log Kow=4-9), and FR migration pathways into dust (Log Kow>9). Bioaccessibility of FRs was measured by the physiologically-based extraction test, with OPFR bioaccessibility being 1.8-82% while bioaccessible PBDEs, NBFRs, and DPs were under detection limits due to their high hydrophobicity. The OPFR bioaccessibility in the 200-2000 µm fraction was significantly higher than that in the <43 µm fraction, but with no difference among the other four fractions. Risk assessment was performed for the most abundant OPFR, tris(2-chloroethyl) phosphate. The average daily dose (ADD) values were the highest for the <43 µm fraction for all three types of dust using total concentrations, but no consistent trend was found among the three types of dust if based on bioaccessible concentrations. Our results indicated that dust size impacted human exposure estimation of FRs due to their variability in distribution and bioaccessibility among different fractions. For future risk assessment, size selection for dust sampling should be standardized and bioaccessibility of FRs should not be overlooked. Copyright © 2018 Elsevier Inc. All rights reserved.
“Magnitude-based Inference”: A Statistical Review
Welsh, Alan H.; Knight, Emma J.
2015-01-01
ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H
2017-02-01
We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
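The partially nested formulae themselves are not reproduced in this abstract. As a hedged baseline, the sketch below gives the textbook two-sample normal-approximation sample size (the same alpha, power, and effect-size inputs discussed in this module), plus the usual design-effect inflation 1 + (m - 1)*rho for an arm whose subjects are clustered. Neither is the authors' mixed-effects derivation; both are standard approximations.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Textbook per-arm n for comparing two means (two-sided test)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # critical value for type I error
    z_b = nd.inv_cdf(power)           # critical value for power
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

def inflate_for_clustering(n, m, rho):
    """Approximate inflation when one arm is clustered: design effect
    1 + (m - 1)*rho, with m subjects per group and intraclass correlation rho."""
    return math.ceil(n * (1 + (m - 1) * rho))

base = n_per_arm(delta=0.5, sigma=1.0)            # 63 per arm
clustered = inflate_for_clustering(base, 10, 0.05)  # 92 per arm
```

Halving the detectable difference roughly quadruples the required sample, which is the familiar inverse-square dependence on effect size.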
[The effect of notch's angle and depth on crack propagation of zirconia ceramics].
Chen, Qingya; Chen, Xinmin
2012-10-01
This paper studies the effect of notch angle and depth on crack propagation in zirconia ceramics. We fabricated cuboid zirconia ceramic samples with standard dimensions of 4.4 mm x 2.2 mm x 18 mm, divided the samples into 6 groups, and prepared notches of different angles and depths on them. We loaded the samples until they broke and observed the fracture curve of each sample. We then plotted coordinates describing the points of the fracture curve under a microscope and performed curve fitting with the software Origin. When the notch angle beta = 90 degrees, the crack propagation is pure type I; when beta = 60 degrees, it is mainly type I; and when beta = 30 degrees, it is a compound of type I and type III. As notch depth increases, the effect of notch angle on crack propagation increases, making notch angle a very important fracture-mechanics parameter for crack propagation in zirconia ceramics.
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect under equal cluster sizes to that under unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive a simpler formula for RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to account for the efficiency loss. Additionally, we propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
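For a continuous outcome under an exchangeable correlation, the RE described here can be computed directly from per-cluster information weights m/(1 + (m - 1)*rho). The sketch below illustrates only that simplest case; it is not the paper's general GEE formula for binary or count outcomes.

```python
def cluster_weight(m, rho):
    """Information contributed by one cluster of size m under an
    exchangeable correlation with parameter rho."""
    return m / (1 + (m - 1) * rho)

def relative_efficiency(sizes, rho):
    """Var(equal) / Var(unequal) for the treatment-effect estimator,
    holding the number of clusters and the total N fixed."""
    k = len(sizes)
    m_bar = sum(sizes) / k
    info_unequal = sum(cluster_weight(m, rho) for m in sizes)
    info_equal = k * cluster_weight(m_bar, rho)
    return info_unequal / info_equal  # <= 1: unequal sizes lose efficiency
```

Because the weight is concave in m, spreading a fixed total N unevenly always lowers the summed information, so RE is at most 1 and the sample size must be inflated accordingly.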
Optimizing larval assessment to support sea lamprey control in the Great Lakes
Hansen, Michael J.; Adams, Jean V.; Cuddy, Douglas W.; Richards, Jessica M.; Fodale, Michael F.; Larson, Geraldine L.; Ollila, Dale J.; Slade, Jeffrey W.; Steeves, Todd B.; Young, Robert J.; Zerrenner, Adam
2003-01-01
Elements of the larval sea lamprey (Petromyzon marinus) assessment program that most strongly influence the chemical treatment program were analyzed, including selection of streams for larval surveys, allocation of sampling effort among stream reaches, allocation of sampling effort among habitat types, estimation of daily growth rates, and estimation of metamorphosis rates, to determine how uncertainty in each element influenced the stream selection program. First, the stream selection model based on current larval assessment sampling protocol significantly underestimated transforming sea lamprey abundance, transforming sea lampreys killed, and marginal costs per sea lamprey killed, compared to a protocol that included more years of data (especially for large streams). Second, larval density in streams varied significantly with Type-I habitat area, but not with total area or reach length. Third, the ratio of larval density between Type-I and Type-II habitat varied significantly among streams, and the optimal allocation of sampling effort varied with the proportion of habitat types and the variability of larval density within each habitat. Fourth, mean length varied significantly among streams and years. Last, size at metamorphosis varied more among years than within or among regions, and metamorphosis varied significantly among streams within regions. Study results indicate that: (1) the stream selection model should be used to identify streams with potentially high residual populations of larval sea lampreys; (2) larval sampling in Type-II habitat should be initiated in all streams by increasing sampling in Type-II habitat to 50% of the sampling effort in Type-I habitat; and (3) methods should be investigated to reduce uncertainty in estimates of sea lamprey production, with emphasis on those that reduce the uncertainty associated with larval length at the end of the growing season and those used to predict metamorphosis.
Primary and Aggregate Size Distributions of PM in Tail Pipe Emissions from Diesel Engines
NASA Astrophysics Data System (ADS)
Arai, Masataka; Amagai, Kenji; Nakaji, Takayuki; Hayashi, Shinji
Particulate matter (PM) emissions from diesel engines should be reduced to maintain a clean air environment. PM emissions were considered to consist of coarse and aggregate particles together with nuclei-mode particles less than 50 nm in diameter. However, the detailed characteristics of these particles were still unknown, and they are needed for more physically accurate measurement and more effective reduction of exhaust PM emissions. In this study, the size distributions of solid particles in PM emissions are reported. PM in the tail-pipe emissions was sampled from three types of diesel engines. The sampled PM was chemically treated to separate the solid carbon fraction from other fractions such as the soluble organic fraction (SOF). Electron-microscopic and optical manual size-measurement procedures were used to determine the size distribution of primary particles, which are formed through a coagulation process from nuclei-mode particles and are contained in aggregate particles. The centrifugal sedimentation method was applied to measure the Stokes diameter of dry soot. Aerodynamic diameters of nano and aggregate particles were measured with a scanning mobility particle sizer (SMPS). The peak aggregate diameters detected by SMPS fell in the same size regime as the Stokes diameter of dry soot. Both the primary and Stokes diameters of dry soot decreased with increasing engine speed and excess air ratio. The effects of fuel properties and engine type on primary and aggregate particle diameters are also discussed.
Comparative measurements using different particle size instruments
NASA Technical Reports Server (NTRS)
Chigier, N.
1984-01-01
This paper discusses the measurement and comparison of particle size and velocity measurements in sprays. The general nature of sprays and the development of standard, consistent research sprays are described. The instruments considered in this paper are: pulsed laser photography, holography, television, and cinematography; laser anemometry and interferometry using visibility, peak amplitude, and intensity ratioing; and laser diffraction. Calibration is by graticule, reticle, powders with known size distributions in liquid cells, monosize sprays, and, eventually, standard sprays. Statistical analyses including spatial and temporal long-time averaging as well as high-frequency response time histories with conditional sampling are examined. Previous attempts at comparing instruments, the making of simultaneous or consecutive measurements with similar types and different types of imaging, interferometric, and diffraction instruments are reviewed. A program of calibration and experiments for comparing and assessing different instruments is presented.
Quantal Response: Estimation and Inference
2014-09-01
considered. The CI-based test is just another way of looking at the Wald test. A small-sample simulation illustrates aberrant behavior of the Wald/CI...asymptotic power computation (Eq. 36) exhibits this behavior but not to such an extent as the simulated small-sample power. Sample size is n = 11 and...as |m1−m0| increases, but the power of the Wald test actually decreases for large |m1−m0| and eventually π → α . This type of behavior was reported as
Hayabusa2 Sampler: Collection of Asteroidal Surface Material
NASA Astrophysics Data System (ADS)
Sawada, Hirotaka; Okazaki, Ryuji; Tachibana, Shogo; Sakamoto, Kanako; Takano, Yoshinori; Okamoto, Chisato; Yano, Hajime; Miura, Yayoi; Abe, Masanao; Hasegawa, Sunao; Noguchi, Takaaki
2017-07-01
Japan Aerospace Exploration Agency (JAXA) launched the asteroid exploration probe "Hayabusa2" on December 3rd, 2014, following the first Hayabusa mission. With technological and scientific improvements over the Hayabusa probe, we plan to visit the C-type asteroid 162173 Ryugu (1999 JU3) and to sample surface materials of the C-type asteroid, which is likely to differ from the S-type asteroid Itokawa and to contain more pristine materials, including organic matter and/or hydrated minerals, than S-type asteroids. We developed the Hayabusa2 sampler to collect a minimum of 100 mg of surface samples, including several mm-sized particles, at three surface locations without any severe terrestrial contamination. The basic configuration of the sampler design is essentially the same as that of the first Hayabusa (Yano et al. in Science, 312(5778):1350-1353, 2006), with several minor but important modifications based on lessons learned from Hayabusa to fulfill the scientific requirements and to raise the scientific value of the returned samples.
Drop size distributions and related properties of fog for five locations measured from aircraft
NASA Technical Reports Server (NTRS)
Zak, J. Allen
1994-01-01
Fog drop size distributions were collected from aircraft as part of the Synthetic Vision Technology Demonstration Program. Three west coast marine advection fogs, one frontal fog, and a radiation fog were sampled from the top of the cloud to the bottom as the aircraft descended on a 3-degree glideslope. Drop size versus altitude versus concentration are shown in three dimensional plots for each 10-meter altitude interval from 1-minute samples. Also shown are median volume radius and liquid water content. Advection fogs contained the largest drops with median volume radius of 5-8 micrometers, although the drop sizes in the radiation fog were also large just above the runway surface. Liquid water content increased with height, and the total number of drops generally increased with time. Multimodal variations in number density and particle size were noted in most samples where there was a peak concentration of small drops (2-5 micrometers) at low altitudes, midaltitude peak of drops 5-11 micrometers, and high-altitude peak of the larger drops (11-15 micrometers and above). These observations are compared with others and corroborate previous results in fog gross properties, although there is considerable variation with time and altitude even in the same type of fog.
MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2011-01-01
The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. 
In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Michael; Kriek, Mariska; Wel, Arjen van der
We present the relation between galaxy structure and spectral type, using a K-selected galaxy sample at 0.5 < z < 2.0. Based on similarities between the UV-to-NIR spectral energy distributions (SEDs), we classify galaxies into 32 spectral types. The different types span a wide range in evolutionary phases, and thus, in combination with available CANDELS/F160W imaging, are ideal to study the structural evolution of galaxies. Effective radii (R_e) and Sérsic parameters (n) have been measured for 572 individual galaxies, and for each type, we determine R_e at fixed stellar mass by correcting for the mass-size relation. We use the rest-frame U − V versus V − J diagram to investigate evolutionary trends. When moving in the direction perpendicular to the star-forming sequence, in which we see the Hα equivalent width and the specific star formation rate (sSFR) decrease, we find a decrease in R_e and an increase in n. On the quiescent sequence we find an opposite trend, with older redder galaxies being larger. When splitting the sample into redshift bins, we find that young post-starburst galaxies are most prevalent at z > 1.5 and significantly smaller than all other galaxy types at the same redshift. This result suggests that the suppression of star formation may be associated with significant structural evolution at z > 1.5. At z < 1, galaxy types with intermediate sSFRs (10^-11.5 to 10^-10.5 yr^-1) do not have post-starburst SED shapes. These galaxies have similar sizes as older quiescent galaxies, implying that they can passively evolve onto the quiescent sequence without increasing the average size of the quiescent galaxy population.
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To introduce adequate management for orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine, and 15 years of age. Twenty samplings were performed over the whole area of each stand by observing the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
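Sequential plans of this kind are built from Wald-type stop lines. The sketch below computes classical SPRT boundaries for presence/absence (binomial) sampling; the infestation levels p0 and p1 used in the example are illustrative placeholders bracketing the 2% economic threshold, and the paper's SLRT construction may differ in detail.

```python
import math

def sprt_boundaries(p0, p1, alpha=0.10, beta=0.10):
    """Wald SPRT stop lines for a binomial proportion.

    After n plots with d infested, decide 'treat' if
    d >= upper + slope * n, decide 'no treatment' if
    d <= lower + slope * n, and otherwise keep sampling.
    """
    g = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    slope = math.log((1 - p0) / (1 - p1)) / g
    upper = math.log((1 - beta) / alpha) / g
    lower = math.log(beta / (1 - alpha)) / g
    return slope, lower, upper

slope, lower, upper = sprt_boundaries(p0=0.01, p1=0.02)
```

The slope always falls between p0 and p1, and with alpha = beta the two intercepts are symmetric about zero, which is why such plans stop quickly when infestation is clearly low or clearly high.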
Two-sample binary phase 2 trials with low type I error and low sample size.
Litwin, Samuel; Basickes, Stanley; Ross, Eric A
2017-04-30
We address the design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the experimental arm, E, sufficiently exceeds C, the number among controls. Here, we combine one-sample rejection decision rules, E ⩾ m, with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
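The type I error of a combined rule of this shape can be checked exactly by enumerating both binomial distributions under the null. The following sketch does this for a single-stage version of the rule; the paper's designs are two-stage, which this simplification deliberately omits.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def type1_error(n_e, n_c, p0, r, m):
    """Exact P(reject | null) for the rule: reject when E - C > r
    and E >= m, with E ~ Bin(n_e, p0) and C ~ Bin(n_c, p0)."""
    total = 0.0
    for e in range(m, n_e + 1):
        pe = binom_pmf(e, n_e, p0)
        for c in range(n_c + 1):
            if e - c > r:
                total += pe * binom_pmf(c, n_c, p0)
    return total
```

Scanning r and m over candidate sample sizes recovers the trade-off the abstract describes: raising either threshold lowers the type I error at the cost of power.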
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies
2014-01-01
Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
NASA Astrophysics Data System (ADS)
Davis, Cabell S.; Wiebe, Peter H.
1985-01-01
Macrozooplankton size structure and taxonomic composition in warm-core ring 82B was examined from a time series (March, April, June) of ring center MOCNESS (1 m) samples. Size distributions of 15 major taxonomic groups were determined from length measurements digitized from silhouette photographs of the samples. Silhouette digitization allows rapid quantification of Zooplankton size structure and taxonomic composition. Length/weight regressions, determined for each taxon, were used to partition the biomass (displacement volumes) of each sample among the major taxonomic groups. Zooplankton taxonomic composition and size structure varied with depth and appeared to coincide with the hydrographic structure of the ring. In March and April, within the thermostad region of the ring, smaller herbivorous/omnivorous Zooplankton, including copepods, crustacean larvae, and euphausiids, were dominant, whereas below this region, larger carnivores, such as medusae, ctenophores, fish, and decapods, dominated. Copepods were generally dominant in most samples above 500 m. Total macrozooplankton abundance and biomass increased between March and April, primarily because of increases in herbivorous taxa, including copepods, crustacean larvae, and larvaceans. A marked increase in total macrozooplankton abundance and biomass between April and June was characterized by an equally dramatic shift from smaller herbivores (1.0-3.0 mm) in April to large herbivores (5.0-6.0 mm) and carnivores (>15 mm) in June. Species identifications made directly from the samples suggest that changes in trophic structure resulted from seeding type immigration and subsequent in situ population growth of Slope Water zooplankton species.
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
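A plain Wald interval treating the estimated AUC as a single proportion can indeed be computed on a pocket calculator. The sketch below implements that generic form with an optional continuity correction; the paper's specific modification is not reproduced here, so treat this as a baseline under stated assumptions, not the authors' interval.

```python
import math
from statistics import NormalDist

def wald_ci_auc(auc, n, alpha=0.05, continuity=False):
    """Wald-type interval for the AUC from only the point estimate
    and the total sample size n, clipped to [0, 1]."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = math.sqrt(auc * (1 - auc) / n)
    cc = 1 / (2 * n) if continuity else 0.0  # continuity correction term
    return max(0.0, auc - z * se - cc), min(1.0, auc + z * se + cc)
```

As the abstract recommends, the continuity-corrected version simply widens the interval by 1/(2n) on each side, which matters most for small samples.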
Measurement of the bed material of gravel-bed rivers
Milhous, R.T.; ,
2002-01-01
The measurement of the physical properties of a gravel-bed river is important in the calculation of sediment transport and physical habitat values for aquatic animals. These properties are not always easy to measure. One recent report on flushing of fines from the Klamath River did not contain information on one location because the grain size distribution of the armour could not be measured on a dry river bar. The grain size distribution could have been measured using a barrel sampler and converting the measurements to what would have been measured if a dry bar existed at the site. In another recent paper the porosity was calculated from an average-value relation from the literature; the results of that paper may be sensitive to the actual value of porosity. Using the bulk density sampling technique based on a water displacement process presented in this paper, the porosity could have been calculated from the measured bulk density. The principal topics of this paper are the measurement of the size distribution of the armour and the measurement of the porosity of the substrate. The standard method of sampling the armour is a Wolman-type count of the armour on a dry section of the river bed. When a dry bar does not exist, the armour in an area of the wet streambed is sampled and the measurements are transformed analytically into the same type of results that would have been obtained from the standard Wolman procedure. A comparison of the results for the San Miguel River in Colorado shows significant differences in the median size of the armour. The method used to determine the porosity is not high-tech, and there is a need to improve knowledge of porosity because of its importance in the aquatic ecosystem. The technique is to measure the in-situ volume of a substrate sample by measuring the volume of a frame over the substrate and then repeating the volume measurement after the sample is obtained from within the frame.
The difference in the volumes is the volume of the sample.
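The porosity calculation described above follows directly from the measured bulk density. A minimal sketch, assuming quartz-dominated grains (grain density of about 2650 kg/m³); the function name, default density, and example numbers are illustrative, not taken from the paper:

```python
# Porosity from measured dry bulk density: porosity = 1 - (bulk / grain density).
# The grain density default assumes quartz-dominated sediment (illustrative).

def porosity_from_bulk_density(dry_mass_kg, sample_volume_m3, grain_density=2650.0):
    """Return porosity given dry sample mass and in-situ sample volume."""
    bulk_density = dry_mass_kg / sample_volume_m3
    return 1.0 - bulk_density / grain_density

# Example: 15.9 kg of dry sediment from an in-situ volume of 0.010 m^3
# gives a bulk density of 1590 kg/m^3 and a porosity of about 0.4.
print(porosity_from_bulk_density(15.9, 0.010))
```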
Hewett, P
1995-02-01
Particle size distributions were measured for fumes from mild steel (MS) and stainless steel (SS), shielded metal arc welding (SMAW) and gas metal arc welding (GMAW) consumables. Up to six samples of each type of fume were collected in a test chamber using a micro-orifice uniform deposit (cascade) impactor. Bulk samples were collected for bulk fume density and specific surface area analysis. Additional impactor samples were collected using polycarbonate substrates and analyzed for elemental content. The parameters of the underlying mass distributions were estimated using a nonlinear least squares analysis method that fits a smooth curve to the mass fraction distribution histograms of all samples for each type of fume. The mass distributions for all four consumables were unimodal and well described by a lognormal distribution; with the exception of the GMAW-MS and GMAW-SS comparison, they were statistically different. The estimated mass distribution geometric means for the SMAW-MS and SMAW-SS consumables were 0.59 and 0.46 micron aerodynamic equivalent diameter (AED), respectively, and 0.25 micron AED for both the GMAW-MS and GMAW-SS consumables. The bulk fume densities and specific surface areas were similar for the SMAW-MS and SMAW-SS consumables and for the GMAW-MS and GMAW-SS consumables, but differed between SMAW and GMAW. The distribution of metals was similar to the mass distributions. Particle size distributions and physical properties of the fumes were considerably different when categorized by welding method. Within each welding method there was little difference between MS and SS fumes.
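The lognormal description of the fume mass distributions can be put to work with nothing beyond the standard library: given a geometric mean (GM) and geometric standard deviation (GSD), the mass fraction below any size cutoff is the lognormal CDF. A sketch; the GM of 0.25 μm matches the GMAW value quoted above, but the GSD of 2.0 is an assumed placeholder, not a reported value:

```python
# Mass fraction below a size cutoff for a lognormally distributed aerosol.
# GM = geometric mean diameter, GSD = geometric standard deviation (assumed).
import math

def lognormal_fraction_below(d, gm, gsd):
    """Lognormal CDF at diameter d: Phi(ln(d/gm) / ln(gsd))."""
    z = math.log(d / gm) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Fraction of fume mass below 1.0 um for GM = 0.25 um and an assumed GSD of 2.0
print(lognormal_fraction_below(1.0, 0.25, 2.0))  # about 0.977
```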
Optical-NIR dust extinction towards Galactic O stars
NASA Astrophysics Data System (ADS)
Maíz Apellániz, J.; Barbá, R. H.
2018-05-01
Context. O stars are excellent tracers of the intervening ISM because of their high luminosity, blue intrinsic SED, and relatively featureless spectra. We are currently conducting the Galactic O-Star Spectroscopic Survey (GOSSS), which is generating a large sample of O stars with accurate spectral types within several kpc of the Sun. Aims: We aim to obtain a global picture of the properties of dust extinction in the solar neighborhood based on optical-NIR photometry of O stars with accurate spectral types. Methods: We have processed a carefully selected photometric set with the CHORIZOS code to measure the amount [E(4405 - 5495)] and type [R5495] of extinction towards 562 O-type stellar systems. We have tested three different families of extinction laws and analyzed our results with the help of additional archival data. Results: The Maíz Apellániz et al. (2014, A&A, 564, A63) family of extinction laws provides a better description of Galactic dust than either the Cardelli et al. (1989, ApJ, 345, 245) or Fitzpatrick (1999, PASP, 111, 63) families, so it should be preferentially used when analyzing samples similar to the one in this paper. In many cases O stars and late-type stars experience similar amounts of extinction at similar distances, but some O stars are located close to the molecular clouds left over from their births and have larger extinctions than the average for nearby late-type populations. In qualitative terms, O stars experience a more diverse extinction than late-type stars, as some are affected by the small-grain-size, low-R5495 effect of molecular clouds and others by the large-grain-size, high-R5495 effect of H II regions. Late-type stars experience a narrower range of grain sizes or R5495, as their extinction is predominantly caused by the average, diffuse ISM.
We propose that the reason for the existence of large-grain-size, high-R5495 regions in the ISM in the form of H II regions and hot-gas bubbles is the selective destruction of small dust grains by EUV photons and possibly by thermal sputtering by atoms or ions. Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/613/A9
Sabol, Thomas A.; Topping, David J.
2013-01-01
Accurate measurements of suspended-sediment concentration require suspended-sediment samplers to operate isokinetically, within an intake-efficiency range of 1.0 ± 0.10, where intake efficiency is defined as the ratio of the velocity of the water through the sampler intake to the local ambient stream velocity. Local ambient stream velocity is defined as the velocity of the water in the river at the location of the nozzle, unaffected by the presence of the sampler. Results from Federal Interagency Sedimentation Project (FISP) laboratory experiments published in the early 1940s show that when the intake efficiency is less than 1.0, suspended-sediment samplers tend to oversample sediment relative to water, leading to potentially large positive biases in suspended-sediment concentration that are positively correlated with grain size. Conversely, these experiments show that, when the intake efficiency is greater than 1.0, suspended‑sediment samplers tend to undersample sediment relative to water, leading to smaller negative biases in suspended-sediment concentration that become slightly more negative as grain size increases. The majority of FISP sampler development and testing since the early 1990s has been conducted under highly uniform flow conditions via flume and slack-water tow tests, with relatively little work conducted under the greater levels of turbulence that exist in actual rivers. Additionally, all of this recent work has been focused on the hydraulic characteristics and intake efficiencies of these samplers, with no field investigations conducted on the accuracy of the suspended-sediment data collected with these samplers. 
When depth-integrating suspended-sediment samplers are deployed under the more nonuniform and turbulent conditions that exist in rivers, multiple factors may contribute to departures from isokinetic sampling, thus introducing errors into the suspended-sediment data collected by these samplers that may not be predictable on the basis of flume and tow tests alone. This study has three interrelated goals. First, the intake efficiencies of the older US D-77 bag-type and newer, FISP-approved US D-96-type depth-integrating suspended-sediment samplers are evaluated at multiple cross-sections under a range of actual-river conditions. The intake efficiencies measured in these actual-river tests are then compared to those previously measured in flume and tow tests. Second, other physical effects, mainly water temperature and the duration of sampling at a vertical, are examined to determine whether these effects can help explain observed differences in intake efficiency both between the two types of samplers and between the laboratory and field tests. Third, the signs and magnitudes of the likely errors in suspended-sand concentration in measurements made with both types of samplers are predicted based on the intake efficiencies of these two types of depth-integrating samplers. Using the relative difference in isokinetic sampling observed between the US D-77 bag-type and D-96-type samplers during river tests, measured differences in suspended-sediment concentration in a variety of size classes were evaluated between paired equal-discharge-increment (EDI) and equal-width-increment (EWI) measurements made with these two types of samplers to determine whether these differences in concentration are consistent with the differences in concentrations expected on the basis of the 1940s FISP laboratory experiments.
In addition, sequential single-vertical depth-integrated samples were collected (concurrent with velocity measurements) with the US D-96-type bag sampler and two different rigid-container samplers to evaluate whether the predicted errors in suspended-sand concentrations measured with the US D-96-type sampler are consistent with those expected on the basis of the 1940s FISP laboratory experiments. Results from our study indicate that the intake efficiency of the US D-96-type sampler is superior to that of the US D-77 bag-type sampler under actual-river conditions, with overall performance of the US D-96-type sampler being closer to, yet still typically below, the FISP-acceptable range of isokinetic operation. These results are in contrast to the results from FISP-conducted flume tests that showed that both the US D-77 bag-type and US D-96-type samplers sampled isokinetically in the laboratory. Results from our study indicate that the single largest problem with the behavior of both the US D-77 bag-type and the US D-96-type samplers under actual-river conditions is that both samplers are prone to large time-dependent decreases in intake efficiency as sampling duration increases. In the case of the US D-96-type sampler, this problem may be at least partially overcome by shortening the duration of sampling (or, instead, perhaps by a simple design improvement); in the case of the US D-77 bag-type sampler, although shortening the sampling duration improves the intake efficiency, it does not bring it into agreement with the FISP-accepted range of isokinetic operation. The predicted errors in suspended-sand concentration in EDI or EWI measurements made with the US D-96-type sampler are much smaller than those associated with EDI or EWI measurements made with the US D-77 bag-type sampler, especially when the results are corrected for the effects of water temperature and sampling duration.
The bias in the concentration in each size class measured using the US D-77 bag-type relative to the concentration measured using the US D-96-type sampler behaves in a manner consistent with that expected on the basis of the observed differences in intake efficiency between the two samplers in conjunction with the results from the 1940s FISP laboratory experiments. In addition, the bias in the concentration in each size class measured using the US D-96‑type sampler relative to the concentration measured using the truly isokinetic rigid-container samplers is in excellent agreement with that predicted on the basis of the 1940s FISP laboratory experiments. Because suspended-sediment samplers can respond differently between laboratory and field conditions, actual-river tests such as those in this study should be conducted when models of suspended-sediment samplers are changed from one type to another during the course of long-term monitoring programs. Otherwise, potential large differences in the suspended-sediment data collected by different types of samplers would lead to large step changes in sediment loads that may be misinterpreted as real, when, in fact, they are associated with the change in suspended‑sediment sampling equipment.
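The intake-efficiency definition and the FISP acceptance criterion used throughout this study reduce to a few lines of arithmetic. A sketch using the definitions from the text (efficiency = intake velocity / local ambient velocity, acceptable within 1.0 ± 0.10); the function names are illustrative:

```python
# Intake efficiency and the FISP isokinetic acceptance check, as defined in
# the text. An efficiency below 1.0 implies oversampling of sediment relative
# to water (positive concentration bias); above 1.0 implies a smaller
# negative bias. Function names are illustrative, not from the report.

def intake_efficiency(intake_velocity, ambient_velocity):
    """Ratio of velocity through the sampler nozzle to local ambient velocity."""
    return intake_velocity / ambient_velocity

def is_isokinetic(efficiency, tolerance=0.10):
    """FISP-acceptable range of isokinetic operation: 1.0 +/- 0.10."""
    return abs(efficiency - 1.0) <= tolerance

eff = intake_efficiency(0.85, 1.00)  # sampler running slow relative to the flow
print(eff, is_isokinetic(eff))       # 0.85 False -> expect positive sand-concentration bias
```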
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rather, Sami ullah, E-mail: rathersami@gmail.com
2014-12-15
Graphical abstract: X-ray diffraction (XRD) pattern of magnesium nanoparticles synthesized by a solution reduction method with and without TOPO. - Highlights: • Simple and convenient method of preparing Mg nanoparticles. • Characterized by XRD, SEM, FESEM and TEM. • Trioctylphosphine oxide offers greater control over the size of the particles. • Hydrogen uptake of samples at different temperatures and a pressure of 4.5 MPa. - Abstract: A facile, simple, surfactant-mediated solution reduction method was used to synthesize monodisperse magnesium nanoparticles. A small amount of magnesium oxide nanoparticles was also formed due to the presence of TOPO and the easy oxidation of magnesium, even though all precautions were taken to avoid oxidation of the sample. Precise size control of the particles was achieved by carefully varying the concentration ratio of two different types of surfactants, trioctylphosphine oxide (TOPO) and hexadecylamine. Recrystallized magnesium nanoparticle samples with and without TOPO were analyzed by X-ray diffraction, scanning electron microscopy, field emission scanning electron microscopy, and transmission electron microscopy. The peak diameters of the particles were estimated from size distribution analysis of the morphological data. The particles synthesized in the presence and absence of TOPO were found to have diameters of 46.5 and 34.8 nm, respectively. This observed dependence of particle size on the presence of TOPO offers a convenient method to control particle size by simply using appropriate surfactant concentrations. The exceptional enhancement in hydrogen uptake and kinetics in the synthesized magnesium nanoparticles compared to a commercial magnesium sample was due to the smaller particle size and improved morphology. Overall hydrogen uptake was not affected by the small variation in particle size with and without TOPO.
Ginn, G O
1990-01-01
Changes in strategies of hospitals responding to the turbulent health care environment of the 1980s are examined both in the aggregate and from the perspective of the individual hospital. The Miles and Snow typology is used to determine strategy type. Both investor-owned and not-for-profit hospitals were well represented in the broad mix of hospital types sampled. In addition, freestanding hospitals and members of multihospital systems were present in the sample. Last, hospitals of all sizes were included. Strategic change was evaluated by classifying hospitals by strategy type in each of two consecutive five-year time periods (1976 through 1980 and 1981 through 1985). Changes in reimbursement policies, the emergence of new technologies, changing consumer expectations, and new sources of competition made the environment for hospitals progressively more turbulent in the latter period and provided an opportune setting to evaluate strategic change. Results showed that a significant number of hospitals did change strategy as the environment changed, and in the direction anticipated. Logistic regression was used to determine whether prior strategy, type of ownership, system membership, or size would predict which hospitals would change strategy as the environment changed: only prior strategy was found to be a predictor of strategy change. PMID:2211128
Fat content in individual muscle fibers of lean and obese subjects.
Malenfant, P; Joanisse, D R; Thériault, R; Goodpaster, B H; Kelley, D E; Simoneau, J A
2001-09-01
To examine skeletal muscle intracellular triglyceride concentration in different fiber types in relation to obesity, skeletal muscle fiber type distribution and intracellular lipid content were measured in vastus lateralis samples obtained by needle biopsy from lean and obese individuals. Participants were seven lean controls (body mass index (BMI) 23.0 ± 3.3 kg/m²; mean ± s.d.) and 14 obese individuals (BMI 33.7 ± 2.7 kg/m²); both groups included comparable proportions of men and women. Samples were histochemically stained for the identification of muscle fiber types (myosin ATPase) and intracellular lipid aggregates (oil red O dye). The number and size of fat aggregates, as well as their concentration within type I, IIA and IIB muscle fibers, were measured. The cellular distribution of the lipid aggregates was also examined. The size of fat aggregates was not affected by obesity, but the number of lipid droplets within muscle fibers was twice as abundant in obese compared to lean individuals. This was seen in type I (298 ± 135 vs 129 ± 75; obese vs lean, P < 0.05), IIA (132 ± 67 vs 79 ± 29; P < 0.05), and IIB (103 ± 63 vs 51 ± 13; P < 0.05) muscle fibers. A more central distribution of lipid droplets was observed in muscle fibers of obese compared to lean subjects (27.2 ± 5.7 vs 19.7 ± 6.4%; P < 0.05). The higher number of lipid aggregates and the disposition to a greater central distribution in all fiber types in obesity indicate important changes in lipid metabolism and/or storage that are fiber type-independent.
Chagas, Aline Garcia da Rosa; Spinelli, Eliani; Fiaux, Sorele Batista; Barreto, Adriana da Silva; Rodrigues, Silvana Vianna
2017-08-01
Different types of hair were submitted to different milling procedures and the resulting powders were analyzed by scanning electron microscopy (SEM) and laser diffraction (LD). SEM results were qualitative, whereas LD results were quantitative and accurately characterized the hair powders through their particle size distribution (PSD). Different types of hair submitted to the optimized milling conditions yielded quite similar PSDs. A good correlation was obtained between PSD results and ketamine concentration in a hair sample analyzed by LC-MS/MS. Hair samples were frozen in liquid nitrogen for 5 min and pulverized at 25 Hz for 10 min, resulting in 61% of particles <104 μm and 39% from 104 to 1000 μm. With this procedure, a 359% increase in measured ketamine concentration was obtained for an authentic sample extracted after pulverization compared with the same sample cut into 1 mm fragments. When milling time was extended to 25 min, >90% of particles were <60 μm and an additional increase of 52.4% in ketamine content was obtained. PSD is a key feature in the analysis of pulverized hair, as it can affect method recovery and reproducibility. In addition, PSD is an important issue in sample retesting and quality control procedures. Copyright © 2017 Elsevier B.V. All rights reserved.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is desired and fluctuation of size around the pre-chosen nominal level is allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. © 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
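The empirical-size evaluation described above can be sketched with a small Monte Carlo: each arm samples subjects until a fixed number of cases r is observed (so the number of trials is negative binomial), and a Wald-type statistic tests RR = 1. The variance form (1 − p)/r on the log scale is a common choice under inverse sampling but is an assumption here, not necessarily the exact statistic studied in the article:

```python
# Monte Carlo estimate of the empirical size (type I error rate) of a
# Wald-type test for RR = 1 under inverse sampling. All parameter values
# below are illustrative, not taken from the article.
import math, random

def trials_until_r_cases(p, r, rng):
    """Inverse sampling: count Bernoulli(p) trials until r cases are observed."""
    n, cases = 0, 0
    while cases < r:
        n += 1
        if rng.random() < p:
            cases += 1
    return n

def wald_rejects_rr_equals_1(r, n1, n0):
    """Wald-type test of RR = 1 on the log scale (assumed variance (1-p)/r)."""
    p1, p0 = r / n1, r / n0
    se = math.sqrt((1.0 - p1) / r + (1.0 - p0) / r)
    z = math.log(p1 / p0) / se
    return abs(z) > 1.959963984540054  # two-sided 5% critical value

rng = random.Random(1)
p, r, reps = 0.2, 30, 2000  # H0 true: equal incidence in both arms
rejections = sum(
    wald_rejects_rr_equals_1(r,
                             trials_until_r_cases(p, r, rng),
                             trials_until_r_cases(p, r, rng))
    for _ in range(reps))
print(rejections / reps)  # empirical size; should sit near the nominal 0.05
```

The same loop with unequal incidences in the two arms would estimate power rather than size.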
Nawata, Kengo
2014-06-01
Despite the widespread popular belief in Japan about a relationship between personality and ABO blood type, this association has not been empirically substantiated. This study provides more robust evidence that there is no relationship between blood type and personality, through a secondary analysis of large-scale survey data. Recent data (after 2000) were collected using large-scale random sampling from over 10,000 people in total from both Japan and the US, and effect sizes were calculated. Japanese datasets from 2004 (N = 2,878-2,938) and 2005 (N = 3,618-3,692), as well as one dataset from the US in 2004 (N = 3,037-3,092), were used. In all the datasets, 65 of 68 items yielded non-significant differences between blood groups. Effect sizes (η²) were less than .003, meaning that blood type explained less than 0.3% of the total variance in personality. These results show the non-relevance of blood type for personality.
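The η² effect size quoted above is simply the between-group share of the total sum of squares. A minimal sketch with invented toy data (four "blood type" groups with essentially identical means, mimicking the null result reported):

```python
# Eta-squared effect size: eta^2 = SS_between / SS_total.
# The toy group data below are illustrative, not the survey data.

def eta_squared(groups):
    """groups: list of lists of scores, one list per group."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    ss_total = sum((x - grand_mean) ** 2 for x in all_values)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    return ss_between / ss_total

# Four groups with near-identical means -> eta^2 near zero,
# i.e. group membership explains almost none of the variance.
groups = [[3.0, 3.1, 2.9], [3.0, 3.2, 2.8], [3.1, 3.0, 2.9], [2.9, 3.0, 3.1]]
print(eta_squared(groups))
```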
Alterations of intrinsic tongue muscle properties with aging.
Cullins, Miranda J; Connor, Nadine P
2017-12-01
Age-related decline in the intrinsic lingual musculature could contribute to swallowing disorders, yet the effects of age on these muscles are unknown. We hypothesized that muscle fiber size is reduced and that fiber types shift toward slower myosin heavy chain (MyHC) isoforms with age. Intrinsic lingual muscles were sampled from 8 young adult (9 months) and 8 old (32 months) Fischer 344/Brown Norway rats. Fiber size and MyHC type were determined by fluorescent immunohistochemistry. Age was associated with a reduced number of rapidly contracting muscle fibers and more slowly contracting fibers. Decreased fiber size was found only in the transverse and verticalis muscles. Shifts in muscle composition from faster to slower MyHC fiber types may contribute to age-related changes in swallowing duration. Decreasing muscle fiber size in the protrusive transverse and verticalis muscles may contribute to the reductions in maximum isometric tongue pressure found with age. Differences among regions and muscles may be associated with different functional demands. Muscle Nerve 56: E119-E125, 2017. © 2017 Wiley Periodicals, Inc.
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
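The three pooling methods compared above differ only in which summary statistic collapses the frame axis of the activation matrix. A minimal NumPy sketch with made-up activation values:

```python
# Max-, average-, and standard-deviation-pooling of frame-wise feature
# activations (rows = frames, columns = dictionary atoms). The activation
# values are invented for illustration.
import numpy as np

activations = np.array([[0.0, 2.0],
                        [1.0, 2.0],
                        [2.0, 2.0]])  # 3 frames, 2 features

max_pool = activations.max(axis=0)   # peak activation per feature
avg_pool = activations.mean(axis=0)  # mean activation per feature
std_pool = activations.std(axis=0)   # variability per feature over time

print(max_pool, avg_pool, std_pool)
```

Note that the second feature is constant across frames, so max- and average-pooling report the same value for it while std-pooling reports zero: the std summary captures temporal variability that the other two discard.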
Risky Sexual Behavior and Substance Use among Adolescents: A Meta-analysis
Ritchwood, Tiarney D.; Ford, Haley; DeCoster, Jamie; Sutton, Marnie; Lochman, John E.
2015-01-01
This study presents the results of a meta-analysis of the association between substance use and risky sexual behavior among adolescents. Eighty-seven studies fit the inclusion criteria, containing a total of 104 independent effect sizes that incorporated more than 120,000 participants. The overall effect size for the relationship between substance use and risky sexual behavior was in the small to moderate range (r = .22, CI = .18, .26). Further analyses indicated that the effect sizes did not substantially vary across the type of substance use, but did substantially vary across the type of risky sexual behavior being assessed. Specifically, mean effect sizes were smallest for studies examining unprotected sex (r = .15, CI = .10, .20), followed by studies examining number of sexual partners (r = .25, CI = .21, .30), those examining composite measures of risky sexual behavior (r = .38, CI = .27, .48), and those examining sex with an intravenous drug user (r = .53, CI = .45, .60). Furthermore, our results revealed that the relationship between drug use and risky sexual behavior is moderated by several variables, including sex, ethnicity, sexuality, age, sample type, and level of measurement. Implications and future directions are discussed. PMID:25825550
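Correlation-type effect sizes such as those above are conventionally pooled through Fisher's z transformation. A fixed-effect sketch; the (r, n) pairs are invented for illustration and are not the studies in this meta-analysis:

```python
# Fixed-effect pooling of correlation coefficients via Fisher's z:
# transform each r with atanh, weight by (n - 3), back-transform with tanh.
# The (r, n) pairs are made-up example values.
import math

def pool_correlations(studies):
    """studies: list of (r, n) pairs; returns the pooled correlation."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)  # weighted Fisher z
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

print(pool_correlations([(0.15, 500), (0.25, 300), (0.30, 200)]))
```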
Discovery sequence and the nature of low permeability gas accumulations
Attanasi, E.D.
2005-01-01
There is an ongoing discussion regarding the geologic nature of accumulations that host gas in low-permeability sandstone environments. This note examines the discovery sequence of the accumulations in low-permeability sandstone plays that were classified as continuous-type by the U.S. Geological Survey for the 1995 National Oil and Gas Assessment. It compares the statistical character of historical discovery sequences of accumulations associated with continuous-type sandstone gas plays to those of conventional plays. The seven sandstone plays with sufficient data exhibit declining size with sequence order, on average, and in three of the seven the trend is statistically significant. Simulation experiments show that both a skewed endowment size distribution and a discovery process that mimics sampling proportional to size are necessary to generate a discovery sequence that consistently produces a statistically significant negative size-order relationship. The empirical findings suggest that discovery sequence could be used to constrain assessed gas in untested areas. The plays examined represent 134 of the 265 trillion cubic feet of recoverable gas assessed in undeveloped areas of continuous-type gas plays in low-permeability sandstone environments reported in the 1995 National Assessment. © 2005 International Association for Mathematical Geology.
Kim, Kyoung-Min; Choi, Mun-Hyoung; Lee, Jong-Kwon; Jeong, Jayoung; Kim, Yu-Ri; Kim, Meyoung-Kon; Paek, Seung-Min; Oh, Jae-Min
2014-01-01
In this study, four types of standardized ZnO nanoparticles were prepared for assessment of their potential biological risk. Powder-phased ZnO nanoparticles with different particle sizes (20 nm and 100 nm) were coated with citrate or L-serine to induce a negative or positive surface charge, respectively. The four types of coated ZnO nanoparticles were subjected to physicochemical evaluation according to the guidelines published by the Organisation for Economic Co-operation and Development. All four samples had a well crystallized Wurtzite phase, with particle sizes of ∼30 nm and ∼70 nm after coating with organic molecules. The coating agents were determined to have attached to the ZnO surfaces through either electrostatic interaction or partial coordination bonding. Electrokinetic measurements showed that the surface charges of the ZnO nanoparticles were successfully modified to be negative (about −40 mV) or positive (about +25 mV). Although all four types of ZnO nanoparticles showed some agglomeration when suspended in water according to dynamic light scattering analysis, they had clearly distinguishable particle size and surface charge parameters and well defined physicochemical properties. PMID:25565825
NASA Astrophysics Data System (ADS)
Vogler, Daniel; Walsh, Stuart D. C.; Bayer, Peter; Amann, Florian
2017-11-01
This work studies the roughness characteristics of fracture surfaces from a crystalline rock by analyzing differences in surface roughness between fractures of various types and sizes. We compare the surface properties of natural fractures sampled in situ and artificial (i.e., man-made) fractures created in the same source rock under laboratory conditions. The topography of the various fracture types is compared and characterized using a range of different measures of surface roughness. Both natural and artificial, and tensile and shear fractures are considered, along with the effects of specimen size on both the geometry of the fracture and its surface characterization. The analysis shows that fracture characteristics are substantially different between natural shear and artificial tensile fractures, while natural tensile fractures often span the whole result domain of the two other fracture types. Specimen size effects are also evident, not only as scale sensitivity in the roughness metrics, but also as a by-product of the physical processes used to generate the fractures. Results from fractures generated with Brazilian tests show that fracture roughness at small scales differentiates fractures from different specimen sizes and stresses at failure.
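Surface-roughness measures of the kind referred to above can be illustrated on a simple height profile. A sketch of two common profile metrics, RMS height and the Z2 RMS-slope parameter; the profile values are made up, and the specific metrics used by the authors may differ:

```python
# Two simple profile-roughness measures often used for fracture surfaces:
# RMS height (standard deviation of heights) and the Z2 RMS-slope parameter.
# The profile heights and spacing below are illustrative.
import math

def rms_height(heights):
    """Standard deviation of the height profile about its mean."""
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

def z2(heights, dx):
    """RMS first derivative of the profile sampled at spacing dx."""
    n = len(heights) - 1
    return math.sqrt(sum((heights[i + 1] - heights[i]) ** 2
                         for i in range(n)) / (n * dx * dx))

profile = [0.0, 0.4, 0.1, 0.5, 0.2]  # heights in mm, spacing 1 mm
print(rms_height(profile), z2(profile, 1.0))
```

Because Z2 depends on the sampling interval dx, recomputing it at different spacings exposes exactly the scale sensitivity of roughness metrics mentioned in the abstract.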
Probing the Magnetic Causes of CMEs: Free Magnetic Energy More Important Than Either Size Or Twist
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Gary, G. A.
2006-01-01
To probe the magnetic causes of CMEs, we have examined three types of magnetic measures: size, twist and total nonpotentiality (or total free magnetic energy) of an active region. Total nonpotentiality is roughly the product of size times twist. For predominately bipolar active regions, we have found that total nonpotentiality measures have the strongest correlation with future CME productivity (approx. 75% prediction success rate), while size and twist measures each have a weaker correlation with future CME productivity (approx. 65% prediction success rate) (Falconer, Moore, & Gary, ApJ, 644, 2006). For multipolar active regions, we find that the CME-prediction success rates for total nonpotentiality and size are about the same as for bipolar active regions. We also find that the size measure correlation with CME productivity is nearly all due to the contribution of size to total nonpotentiality. We have a total nonpotentiality measure that can be obtained from a line-of-sight magnetogram of the active region and that is as strongly correlated with CME productivity as are any of our total-nonpotentiality measures from deprojected vector magnetograms. We plan to further expand our sample by using MDI magnetograms of each active region in our sample to determine its total nonpotentiality and size on each day that the active region was within 30 deg. of disk center. The resulting increase in sample size will improve our statistics and allow us to investigate whether the nonpotentiality threshold for CME production is nearly the same or significantly different for multipolar regions than for bipolar regions. In addition, we will investigate the time rates of change of size and total nonpotentiality as additional causes of CME productivity.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
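The step-size-generalized successive-approximations scheme described above can be sketched for the simplest case: the means of a two-component univariate normal mixture with weights and variances held fixed. With step_size = 1.0 the update is the familiar EM-type iteration; the paper's result is local convergence for any step size in (0, 2). All names and parameter values here are illustrative:

```python
# Step-size-generalized successive approximations for the means of a
# two-component univariate normal mixture (weights and variances fixed
# to keep the sketch short). step_size = 1.0 gives the ordinary EM update.
import math, random

def em_step_means(data, means, weights=(0.5, 0.5), sigma=1.0, step_size=1.0):
    new_means = []
    for j in (0, 1):
        resp_sum = val_sum = 0.0
        for x in data:
            # Shared 1/(sigma*sqrt(2*pi)) factor cancels in the ratio below.
            dens = [weights[k] * math.exp(-0.5 * ((x - means[k]) / sigma) ** 2)
                    for k in (0, 1)]
            r = dens[j] / (dens[0] + dens[1])  # posterior responsibility
            resp_sum += r
            val_sum += r * x
        m_j = val_sum / resp_sum               # the ordinary EM update M(theta)
        # Deflected-gradient step: theta + omega * (M(theta) - theta)
        new_means.append(means[j] + step_size * (m_j - means[j]))
    return new_means

rng = random.Random(0)
data = ([rng.gauss(-2.0, 1.0) for _ in range(200)] +
        [rng.gauss(2.0, 1.0) for _ in range(200)])
means = [-1.0, 1.0]
for _ in range(50):
    means = em_step_means(data, means, step_size=1.0)
print(means)  # close to the true component means -2 and 2
```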
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Freeze-cast alumina pore networks: Effects of freezing conditions and dispersion medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, S. M.; Xiao, X.; Faber, K. T.
Alumina ceramics were freeze-cast from water- and camphene-based slurries under varying freezing conditions and examined using X-ray computed tomography (XCT). Pore network characteristics, i.e., porosity, pore size, geometric surface area, and tortuosity, were measured from XCT reconstructions, and the data were used to develop a model to predict feature size from processing conditions. Classical solidification theory was used to examine relationships between pore size, temperature gradients, and freezing front velocity. Freezing front velocity was subsequently predicted from casting conditions via the two-phase Stefan problem. Resulting models for water-based samples agreed with solidification-based theories predicting lamellar spacing of binary eutectic alloys, and models for camphene-based samples concurred with those for dendritic growth. Relationships between freezing conditions and geometric surface area were also modeled by considering the inverse relationship between pore size and surface area. Tortuosity was determined to be dependent primarily on the type of dispersion medium. (C) 2015 Elsevier Ltd. All rights reserved.
Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.
Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A
1987-01-01
A new bioaerosol sampler, consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters ≥8 µm, 8-2.5 µm, and <2.5 µm; sampling on filters) and a liquid-cooled condenser, was designed, fabricated and field-tested for sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap in sampling particles of Betula pollen size (ca. 25 µm in diameter). This was most prominent during pollen peak periods (e.g., on May 19th, 1985, the virtual impactor collected 9482 and the Burkard trap 2540 Betula pollen grains per m³ of air). Betula antigens were also detected in filter stages where no intact pollen grains were found; in the condenser unit, by contrast, antigen concentrations were very low.
Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne
2017-02-28
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
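As a hedged illustration of the zero-inflated Poisson outcome discussed above (not the authors' time-varying effect model, which lets the parameters vary smoothly over time), the sketch below simulates ZIP counts and recovers the structural-zero (abstinence) probability and the Poisson rate (quantity) by the method of moments; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: pi_true is the structural-zero probability,
# lam_true the Poisson rate of the count component.
pi_true, lam_true, n = 0.3, 2.0, 20000
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

# Method-of-moments recovery. For a ZIP outcome:
#   mean     = (1 - pi) * lam
#   variance = mean * (1 + pi * lam)
m, s2 = y.mean(), y.var()
pilam = s2 / m - 1      # estimate of pi * lam
lam_hat = m + pilam     # since mean = lam - pi * lam
pi_hat = pilam / lam_hat
```

The two components answer different questions simultaneously, which is exactly why the abstract notes that tests on the zero component (abstinence) can be less powerful than tests on the Poisson component (quantity): the zero component is informed only by the excess of zeros.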
NASA Astrophysics Data System (ADS)
Herbold, E. B.; Nesterenko, V. F.; Benson, D. J.; Cai, J.; Vecchio, K. S.; Jiang, F.; Addiss, J. W.; Walley, S. M.; Proud, W. G.
2008-11-01
The variation of metallic particle size and sample porosity significantly alters the dynamic mechanical properties of high density granular composite materials processed using a cold isostatically pressed mixture of polytetrafluoroethylene (PTFE), aluminum (Al), and tungsten (W) powders. Quasistatic and dynamic experiments are performed with identical constituent mass fractions with variations in the size of the W particles and pressing conditions. The relatively weak polymer matrix allows the strength and fracture modes of this material to be governed by the granular type behavior of agglomerated metal particles. A higher ultimate compressive strength was observed in relatively high porosity samples with small W particles compared to those with coarse W particles in all experiments. Mesoscale granular force chains of the metallic particles explain this unusual phenomenon as observed in hydrocode simulations of a drop-weight test. Macrocracks forming below the critical failure strain for the matrix and unusual behavior due to a competition between densification and fracture in dynamic tests of porous samples were also observed. Numerical modeling of shock loading of this granular composite material demonstrated that the internal energy, specifically thermal energy, of the soft PTFE matrix can be tailored by the W particle size distribution.
Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska
Richters, Lindsey K.; Pope, Kevin L.
2011-01-01
Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.
Swimsuit issues: promoting positive body image in young women's magazines.
Boyd, Elizabeth Reid; Moncrieff-Boyd, Jessica
2011-08-01
This preliminary study reviews the promotion of healthy body image to young Australian women, following the 2009 introduction of the voluntary Industry Code of Conduct on Body Image. The Code includes using diverse sized models in magazines. A qualitative content analysis of the 2010 annual 'swimsuit issues' was conducted on 10 Australian young women's magazines. Pictorial and/or textual editorial evidence of promoting diverse body shapes and sizes was regarded as indicative of the magazines' upholding aspects of the voluntary Code of Conduct for Body Image. Diverse sized models were incorporated in four of the seven magazines with swimsuit features sampled. Body size differentials were presented as part of the swimsuit features in three of the magazines sampled. Tips for diverse body type enhancement were included in four of the magazines. All magazines met at least one criterion. One magazine displayed evidence of all three criteria. Preliminary examination suggests that more than half of young women's magazines are upholding elements of the voluntary Code of Conduct for Body Image, through representation of diverse-sized women in their swimsuit issues.
Optical properties of graphene nanoflakes: Shape matters.
Mansilla Wettstein, Candela; Bonafé, Franco P; Oviedo, M Belén; Sánchez, Cristián G
2016-06-14
In recent years there has been significant debate on whether the edge type of graphene nanoflakes (GNFs) or graphene quantum dots (GQDs) are relevant for their electronic structure, thermal stability, and optical properties. Using computer simulations, we have proven that there is a fundamental difference in the absorption spectra between samples of the same shape, similar size but different edge type, namely, armchair or zigzag edges. These can be explained by the presence of electronic structures near the Fermi level which are localized on the edges. These features are also evident from the dependence of band gap on the GNF size, which shows three very distinct trends for different shapes and edge geometries.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine the covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
Zafra, C A; Temprano, J; Tejero, I
2011-07-01
The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact on drainage systems and receiving waters, and to improve the design of prevention systems. This paper presents data regarding the sediment collected on road surfaces in the city of Torrelavega (northern Spain) during a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m(-2)), particle size distribution (63-2800 µm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase as particle diameter decreases (exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 µm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg(-1), respectively (average traffic density: 3800 vehicles day(-1)). As the residence time of the sediment increases, the concentration increases, whereas the ratio of the concentration between the different size fractions decreases. The concentration across the road diminishes as the distance between the roadway and the sampling site increases; as this distance increases, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and are associated with particle sizes <125 µm.
Proteome Profiles of Digested Products of Commercial Meat Sources
Li, Li; Liu, Yuan; Zhou, Guanghong; Xu, Xinglian; Li, Chunbao
2017-01-01
This study was designed to characterize the in vitro-digested products of proteins from four commercial meat products: dry-cured ham, cooked ham, emulsion-type sausage, and dry-cured sausage. The samples were homogenized and incubated with pepsin and trypsin. The digestibility and particle sizes of the digested products were measured, and nano-LC–MS/MS was applied to characterize the peptides. The results showed the highest digestibility and the lowest particle size in dry-cured ham (P < 0.05), while the opposite was true for cooked ham (P < 0.05). Nano-LC–MS/MS analysis revealed that dry-cured ham samples had the greatest number of 750–3,500 Da Mw peptides in pepsin-digested products. In the digested products of cooked ham and emulsion-type sausage, many peptides matched the soy protein that was added in the formulations. In addition, protein oxidation was also observed in the different meat products. Our findings give insight into the nutritional values of different meat products. PMID:28396857
Non-symmetrical electric response in CaCu3Ti4O12 and La0.05Ba0.95TiO3-δ-SPS materials
NASA Astrophysics Data System (ADS)
Valdez-Nava, Zarel; Dinculescu, Sorin; Lebey, Thierry
2010-09-01
Two colossal dielectric constant (CDC) materials, CaCu3Ti4O12 (CCTO) produced by conventional sintering with grain sizes between 20 and 30 µm and SPS-sintered La0.05Ba0.95TiO3-δ (BTL-SPS) with grain sizes between 50 and 100 nm, are characterized by simple electrical measurements (Sawyer-Tower and I(V)). Whatever the type of measurement performed, the results depend, on the one hand, on the relative position of the sample in the measuring setup and, on the other hand, on the type of surface treatment applied to the sample. A clear demonstration of the non-isotropic character of the materials under study is achieved. The non-symmetrical electrical response observed in these two different materials seems to be independent of microstructure and composition, and could be related to the overall phenomena at the origin of the colossal values of permittivity.
Dielectric Characteristics and Microwave Absorption of Graphene Composite Materials
Rubrice, Kevin; Castel, Xavier; Himdi, Mohamed; Parneix, Patrick
2016-01-01
Nowadays, many types of materials are developed for microwave absorption applications. Carbon-based nanoparticles belong to these types of materials. Among these, graphene presents some distinctive features for electromagnetic radiation absorption and thus for microwave isolation applications. In this paper, the dielectric characteristics and microwave absorption properties of epoxy resin loaded with graphene particles are presented from 2 GHz to 18 GHz. The influence of parameters such as particle size (3 µm, 6–8 µm, and 15 µm) and weight ratio (from 5% to 25%) is presented, studied, and discussed. The sample loaded with the smallest graphene size (3 µm) and the highest weight ratio (25%) exhibits a high loss tangent (tanδ = 0.36) and a moderate dielectric constant ε′ = 12–14 in the 8–10 GHz frequency range. As expected, this sample also provides the highest absorption level: from 5 dB/cm at 4 GHz to 16 dB/cm at 18 GHz. PMID:28773948
Nevárez-Martínez, Manuel O; Balmori-Ramírez, Alejandro; Miranda-Mier, Everardo; Santos-Molina, J Pablo; Méndez-Tenorio, Francisco J; Cervantes-Valle, Celio
2008-09-01
We analyzed the performance of three traps for marine fish between October 2005 and August 2006 in the Gulf of California, Mexico. Performance was measured as differences in selectivity, fish diversity, size structure and yield. The samples were collected with quadrangular traps 90 cm wide, 120 cm long and 50 cm high. Trap type 1 had a 5 x 5 cm mesh (type 2: 5 x 5 cm including a rear panel of 5 x 10 cm; type 3: 5 x 10 cm). Most abundant in our traps were: Goldspotted sand bass (Paralabrax auroguttatus), Ocean whitefish (Caulolatilus princeps), Spotted sand bass (P. maculatofasciatus) and Bighead tilefish (C. affinis); there was no bycatch. The number of fish per trap per haul decreased when mesh size was increased. We also observed a direct relationship between mesh size and average fish length. By comparing our traps with the authorized fishing gear (hook-and-line) we found that the size structure is larger in traps. Traps with larger mesh size were more selective. Consequently, we recommend adding traps to hook-and-line as authorized fishing gear in the small-scale fisheries of the Sonora coast, Mexico.
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and this has stimulated parallel statistical developments concerned with the design and analysis of such trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated, and inadequacies remain in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
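For the simplest of the endpoints covered (a continuous outcome compared between two parallel arms), the standard approach inflates the individually randomized sample size by the design effect 1 + (m − 1)ρ, where m is the average cluster size and ρ the intra-cluster correlation coefficient. A minimal sketch with illustrative planning values (the numbers below are invented, not taken from the paper):

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(delta, sigma, m, icc, alpha=0.05, power=0.80):
    """Clusters per arm for a two-arm parallel cluster randomized trial
    with a continuous outcome: the two-sample z-formula inflated by the
    design effect 1 + (m - 1) * icc, where m is the average cluster size."""
    z = NormalDist().inv_cdf
    # Individually randomized sample size per arm.
    n_ind = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    deff = 1 + (m - 1) * icc          # design effect
    return ceil(n_ind * deff / m)     # round up to whole clusters

# Example: detect a 0.4 SD difference with 20 patients per cluster, ICC = 0.05.
k = clusters_per_arm(delta=0.4, sigma=1.0, m=20, icc=0.05)
```

With ρ = 0 the formula collapses to the ordinary individually randomized calculation, which makes the cost of clustering explicit: here the ICC of 0.05 roughly doubles the number of clusters needed.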
Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki
2006-04-01
The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and type of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as the database. Workplace air was sampled at ≥5 crosses in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (estimated highest concentration or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed by use of the additiveness formula. From the workplace concentrations at ≥5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 (in large enterprises with >300 employees) to 1.7 times [in small to medium (SM) enterprises with ≤300 employees] greater than RWC. When SWPs were classified into SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observations, discussed in reference to previous publications, suggest that RWC, EHC and the ratio EHC/RWC vary substantially among different types of solvent work as well as with enterprise size, and are typically highest in printing SWPs in SM enterprises.
The Importance and Role of Intracluster Correlations in Planning Cluster Trials
Preisser, John S.; Reboussin, Beth A.; Song, Eun-Young; Wolfson, Mark
2008-01-01
There is increasing recognition of the critical role of intracluster correlations of health behavior outcomes in cluster intervention trials. This study examines the estimation, reporting, and use of intracluster correlations in planning cluster trials. We use an estimating equations approach to estimate the intracluster correlations corresponding to the multiple-time-point nested cross-sectional design. Sample size formulae incorporating 2 types of intracluster correlations are examined for the purpose of planning future trials. The traditional intracluster correlation is the correlation among individuals within the same community at a specific time point. A second type is the correlation among individuals within the same community at different time points. For a “time × condition” analysis of a pretest–posttest nested cross-sectional trial design, we show that statistical power considerations based upon a posttest-only design generally are not an adequate substitute for sample size calculations that incorporate both types of intracluster correlations. Estimation, reporting, and use of intracluster correlations are illustrated for several dichotomous measures related to underage drinking collected as part of a large nonrandomized trial to enforce underage drinking laws in the United States from 1998 to 2004. PMID:17879427
Puls, Robert W.; Eychaner, James H.; Powell, Robert M.
1996-01-01
Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determination of aqueous inorganic geochemistry and assessment of contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used to examine filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations obtained using different filter pore sizes were generally small; only two wells showed differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
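The data-partitioning class of detectors discussed above can be illustrated with a short sketch (invented data, not the authors' simulation code): even when the true response is purely linear, a grid search always returns a "best" change point, which is why false-detection control against an explicit null model matters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated linear (threshold-free) response along an environmental gradient.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = 2.0 * x + rng.normal(0, 0.5, n)

def best_changepoint(x, y):
    """Grid search over candidate change points, scoring each split by the
    summed SSE of a two-segment mean model (a simple partitioning detector)."""
    best_cp, best_sse = None, np.inf
    for i in range(10, len(y) - 10):   # keep both segments non-trivial
        sse = (((y[:i] - y[:i].mean()) ** 2).sum()
               + ((y[i:] - y[i:].mean()) ** 2).sum())
        if sse < best_sse:
            best_cp, best_sse = x[i], sse
    return best_cp, best_sse

cp, sse = best_changepoint(x, y)
# A "best" split is always found even though no threshold exists, so the
# reported split must be compared against a null (e.g. permutation) model.
```

The two-segment fit necessarily improves on the single-mean fit, so the fitted change point alone carries no evidence of a threshold; this is the type I error mechanism the abstract quantifies.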
NASA Astrophysics Data System (ADS)
Buerki, Peter R.; Gaelli, Brigitte C.; Nyffeler, Urs P.
In central Switzerland, five types of emission sources are mainly responsible for airborne trace metals: traffic, industrial plants burning heavy oil, resuspension of soil particles, residential heating and refuse incineration plants. The particulate emissions of each of these source types except refuse incineration were sampled using Berner impactors, and the mass and elemental size distributions of Cd, Cu, Mn, Pb, Zn, As and Na were determined. Cd, Na and Zn are not characteristic of any of these source types. As and Cu, occurring in the fine particle fractions, are characteristic of heavy oil combustion; Mn is characteristic of soil dust and sometimes of heavy and fuel oil combustion, and Pb of traffic aerosols. The mass size distributions of aerosols originating from erosion and abrasion processes show a maximum mass fraction in the coarse particle range, larger than about 1 μm aerodynamic equivalent diameter (A.E.D.). Aerosols originating from combustion processes show a second maximum mass fraction in the fine particle range, below about 0.5 μm A.E.D. Scanning electron microscopy combined with an EDS analyzer was used for the morphological characterization of emission and ambient aerosols.
NASA Technical Reports Server (NTRS)
Kane, R. D.; Petrovic, J. J.; Ebert, L. J.
1975-01-01
Techniques are evaluated for chemical, electrochemical, and thermal etching of thoria-dispersed (TD) nickel alloys. An electrochemical etch is described which yielded good results only for large grain sizes of TD-nickel. Two types of thermal etch are assessed for TD-nickel: an oxidation etch and vacuum annealing of a polished specimen to produce an etch. It is shown that the first etch was somewhat dependent on sample orientation with respect to the processing direction, that the second technique was not sensitive to specimen orientation or grain size, and that neither method appears to alter the innate grain structure when the materials are fully annealed prior to etching. An electrochemical etch is described which was used to observe the microstructures in TD-NiCr, and a thermal-oxidation etch is shown to produce better detail of grain boundaries and to have excellent etching behavior over the entire range of grain sizes of the sample.
Melvin, Elizabeth M; Moore, Brandon R; Gilchrist, Kristin H; Grego, Sonia; Velev, Orlin D
2011-09-01
The recent development of microfluidic "lab on a chip" devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
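The Markov-process setting described above can be illustrated with a minimal sketch (an invented two-state example, not the authors' FACS model): transition probabilities are estimated by maximum likelihood as row-normalised transition counts, and the steady state is the leading left eigenvector of the estimated matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-state cell model (e.g. proliferating vs quiescent).
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])

# Simulate one long chain and count the observed transitions.
T = 50000
state, counts = 0, np.zeros((2, 2))
for _ in range(T):
    nxt = rng.choice(2, p=P_true[state])
    counts[state, nxt] += 1
    state = nxt

# ML estimate of the transition matrix: row-normalised counts.
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Steady state: the left eigenvector of P_hat with eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

With a long, noise-free chain the row-normalised counts converge to the true probabilities; the abstract's point is that when the counts themselves come from noisy measurements of small cell populations, the observed state must be treated as a random variable or these estimates and the implied steady state can be badly biased.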
NASA Astrophysics Data System (ADS)
Li, Hongyu; Mao, Shude; Cappellari, Michele; Ge, Junqiang; Long, R. J.; Li, Ran; Mo, H. J.; Li, Cheng; Zheng, Zheng; Bundy, Kevin; Thomas, Daniel; Brownstein, Joel R.; Roman Lopes, Alexandre; Law, David R.; Drory, Niv
2018-05-01
We perform full spectrum fitting stellar population analysis and Jeans Anisotropic modelling of the stellar kinematics for about 2000 early-type galaxies (ETGs) and spiral galaxies from the MaNGA DR14 sample. Galaxies with different morphologies are found to be located on a remarkably tight mass plane which is close to the prediction of the virial theorem, extending previous results for ETGs. By examining an inclined projection (the 'mass-size' plane), we find that spiral and early-type galaxies occupy different regions on the plane, and their stellar population properties (i.e. age, metallicity, and stellar mass-to-light ratio) vary systematically along roughly the direction of velocity dispersion, which is a proxy for the bulge fraction. Galaxies with higher velocity dispersions typically have older ages, larger stellar mass-to-light ratios and are more metal rich, which indicates that galaxies increase their bulge fractions as their stellar populations age and become chemically enriched. The age and stellar mass-to-light ratio gradients for low-mass galaxies in our sample tend to be positive (centre < outer), while the gradients for the most massive galaxies are negative. The metallicity gradients show a clear peak around velocity dispersion log10 σe ≈ 2.0, which corresponds to the critical mass ~3 × 10^10 M⊙ of the break in the mass-size relation. Spiral galaxies with large mass and size have the steepest gradients, while the most massive ETGs, especially above the critical mass Mcrit ≳ 2 × 10^11 M⊙, where slow-rotator ETGs start dominating, have much flatter gradients. This may be due to differences in their evolution histories, e.g. mergers.
Fazey, Francesca M C; Ryan, Peter G
2016-03-01
Recent estimates suggest that roughly 100 times more plastic litter enters the sea than is found floating at the sea surface, despite the buoyancy and durability of many plastic polymers. Biofouling by marine biota is one possible mechanism responsible for this discrepancy. Microplastics (<5 mm in diameter) are more scarce than larger size classes, which makes sense because fouling is a function of surface area whereas buoyancy is a function of volume; the smaller an object, the greater its relative surface area. We tested whether plastic items with high surface area to volume ratios sank more rapidly by submerging 15 different sizes of polyethylene samples in False Bay, South Africa, for 12 weeks to determine the time required for samples to sink. All samples became sufficiently fouled to sink within the study period, but small samples lost buoyancy much faster than larger ones. There was a direct relationship between sample volume (buoyancy) and the time to attain a 50% probability of sinking, which ranged from 17 to 66 days of exposure. Our results provide the first estimates of the longevity of different sizes of plastic debris at the ocean surface. Further research is required to determine how fouling rates differ on free floating debris in different regions and in different types of marine environments. Such estimates could be used to improve model predictions of the distribution and abundance of floating plastic debris globally. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Relationship between Organizational Culture Types and Innovation in Aerospace Companies
NASA Astrophysics Data System (ADS)
Nelson, Adaora N.
Innovation in the aerospace industry has proven to be an effective strategy for competitiveness and sustainability, but the organizational culture of the firm must be conducive to innovation. The problem was that although innovation is needed for aerospace companies to be competitive and sustainable, certain organizational culture issues might hinder leaders from successfully innovating (Emery, 2010; Ramanigopal, 2012). The purpose of this study was to assess the relationship between hierarchical, clan, adhocracy and market organizational culture types and innovation in aerospace companies within the U.S. while controlling for company size and length of time in business. The non-experimental quantitative study included a random sample of 136 aerospace leaders in the U.S. There was a significant relationship between market organizational culture and innovation, F(1,132) = 4.559, p = .035. No significant relationships were found between hierarchical organizational culture and innovation or between clan culture and innovation. The relationship between adhocracy culture and innovation was not significant, possibly due to an inadequate sample size. Company size was shown to be a justifiable covariate in the study, due to a significant relationship with innovation (F(1, 130) = 4.66, p < .1, r = .19). Length of time in business had no relationship with innovation. The findings imply that market organizational cultures are more likely to result in innovative outcomes in the aerospace industry. Organizational leaders are encouraged to adopt a market culture and smaller organizational structures. Recommendations for further research include investigating the relationship between adhocracy culture and innovation using an adequate sample size, and identifying other variables that predict innovation. This study should be repeated at periodic intervals and across other industrial sectors and countries.
Yeshaya, J; Shalgi, R; Shohat, M; Avivi, L
1999-01-01
X-chromosome inactivation and the size of the CGG repeat number are assumed to play a role in the clinical, physical, and behavioral phenotype of female carriers of a mutated FMR1 allele. In view of the tight relationship between replication timing and the expression of a given DNA sequence, we have examined the replication timing of FMR1 alleles on active and inactive X-chromosomes in cell samples (lymphocytes or amniocytes) of 25 females: 17 heterozygous for a mutated FMR1 allele with a trinucleotide repeat number varying from 58 to a few hundred, and eight homozygous for a wild-type allele. We have applied two-color fluorescence in situ hybridization (FISH) with FMR1 and X-chromosome alpha-satellite probes to interphase cells of the various genotypes: the alpha-satellite probe was used to distinguish between early replicating (active) and late replicating (inactive) X-chromosomes, and the FMR1 probe revealed the replication pattern of this locus. All samples, except one with a large trinucleotide expansion, showed an early replicating FMR1 allele on the active X-chromosome and a late replicating allele on the inactive X-chromosome. In samples of mutation carriers, both the early and the late alleles showed delayed replication compared with normal alleles, regardless of repeat size. We conclude therefore that: (1) the FMR1 locus is subjected to X-inactivation; (2) mutated FMR1 alleles, regardless of repeat size, replicate later than wild-type alleles on both the active and inactive X-chromosomes; and (3) the delaying effect of the trinucleotide expansion, even with a low repeat size, is superimposed on the delay in replication associated with X-inactivation.
Bacterial contamination of boar semen affects the litter size.
Maroto Martín, Luis O; Muñoz, Eduardo Cruz; De Cupere, Françoise; Van Driessche, Edilbert; Echemendia-Blanco, Dannele; Rodríguez, José M Machado; Beeckmans, Sonia
2010-07-01
One hundred and fifteen semen samples were collected from 115 different boars from two farms in Cuba. The boars belonged to five different breeds. Evaluation of the semen sample characteristics (volume, pH, colour, smell, motility of sperm cells) revealed that they met international standards. The samples were also tested for the presence of agglutinated sperm cells and for bacterial contamination. Seventy-five percent of the ejaculates were contaminated with at least one type of bacteria, and E. coli was by far the major contaminant, being present in 79% of the contaminated semen samples (n=68). Other contaminating bacteria belonged to the genera Proteus (n=31), Serratia (n=31), Enterobacter (n=24), Klebsiella (n=12), Staphylococcus (n=10), Streptococcus (n=8) and Pseudomonas (n=7). Anaerobic bacteria were detected in only one sample. Pearson's analysis of the data revealed a positive correlation between the presence of E. coli and sperm agglutination, and a negative correlation between sperm agglutination and litter size. One-way ANOVA and post hoc Tukey analysis of 378 litters showed that litter size is significantly reduced when the semen used is contaminated with sperm-agglutinating E. coli above a threshold of 3.5 × 10^3 CFU/ml. Copyright 2010 Elsevier B.V. All rights reserved.
Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M
2010-12-01
In interview studies, sample size is often justified by interviewing participants until 'data saturation' is reached. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (the initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (the stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and a stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for the other belief categories or for studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about undergoing genetic testing. Studywise data saturation was achieved at interview 17. We propose that these principles be specified when reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
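The two principles, an initial analysis sample followed by a fixed number of consecutive interviews yielding no new ideas, amount to a simple stopping rule. A minimal sketch, assuming beliefs are coded as sets of labels per interview (the function name and representation are illustrative, not from the paper):

```python
def saturation_point(interviews, initial_sample=10, stopping_criterion=3):
    """Return the 1-based interview index at which data saturation is
    declared, or None if it is never reached.

    interviews: list of sets, each holding the belief codes identified in
    one interview. Saturation: after the initial analysis sample,
    `stopping_criterion` consecutive interviews contribute no new codes.
    """
    seen = set()  # all belief codes identified so far
    run = 0       # consecutive post-initial interviews with no new codes
    for i, codes in enumerate(interviews, start=1):
        new = codes - seen
        seen |= codes
        if i <= initial_sample:
            continue  # still inside the initial analysis sample
        run = run + 1 if not new else 0
        if run >= stopping_criterion:
            return i
    return None

# Ten interviews each adding a new belief, then three adding nothing:
# saturation is declared at interview 13.
demo = [{f"belief_{i}"} for i in range(10)] + [set(), set(), set()]
stop_at = saturation_point(demo)
```

A single new idea resets the run, so late-emerging beliefs postpone the saturation decision, matching the intent of the stopping criterion.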
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Matthias; Ovchinnikova, Olga S; Van Berkel, Gary J
RATIONALE: Laser ablation provides for the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA)-ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions, about 10% of laser-ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot-controlled pipette. The sampling spot size with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 μm × 160 μm) was approximately 50 times smaller than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The set-up was successfully applied to the analysis of ink on glass and paper as well as the endogenous components of Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent-resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface.
CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA-ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent-resistant materials.
Practical characteristics of adaptive design in phase 2 and 3 clinical trials.
Sato, A; Shimura, M; Gosho, M
2018-04-01
Adaptive design methods are expected to be ethical, to reflect real medical practice, to increase the likelihood of research and development success, and to reduce the allocation of patients to ineffective treatment groups through the early termination of clinical trials. However, it remains unclear in which types of clinical trials adaptive designs are actually used. We examined the practical characteristics of adaptive designs used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of the adaptive designs employed and the disease areas of the trials. Our survey identified 245 adaptive design clinical trials. The number of trials published per year increased from 2012 to 2013 and changed little afterwards. The most frequently used adaptive design was the group sequential design (n = 222, 90.6%), especially in neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. Adaptive randomization (n = 8, 3.3%) and adaptive seamless designs (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of the nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two of the eight trials (25%).
Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where the study phase was not specified, whereas the other types of adaptive design were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among these 202 trials, 132 (54% of the 245 trials) enrolled fewer randomized patients than initially planned. This result supports the motivation for using adaptive designs: shorter study durations and fewer subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions, most often in neoplasm or cardiovascular trials. Adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. 
Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
2011-11-01
[Fragmentary abstract; only snippets were recovered:] were evaluated. For these experiments, an aliquot of the common bacillus spore B. coagulans was drop-dried onto the SERS substrate active surface...the Klarite surface. Spectra for bacillus spore B. coagulans on different substrate types. 3.5 Energetic Sample Evaluation Hazard detection...substrate types (a–f). Notice the dramatic difference in size between the spore and the active areas on the Klarite surface.
A basic introduction to statistics for the orthopaedic surgeon.
Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef
2012-02-01
Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.
Growth And Characterization Studies Of Advanced Infrared Heterostructures
2015-06-30
controlled within 50 arc-seconds for all the samples. The three samples were then processed into deep-etched mesa-type photodiodes, by using standard...contact ultraviolet lithography and wet-chemical etching. The circular mesa size ranged from 25 to 400 µm in diameter. A 200-nm-thick SiNx film...coating was applied on top of the mesa. Devices were mounted on ceramic leadless chip carriers, and then mounted in the cryostat to characterize their
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit-of-detection (LOD) of 5 × 10^3 pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
A comparative review of methods for comparing means using partially paired data.
Guo, Beibei; Yuan, Ying
2017-06-01
In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
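Two of the simplest approaches reviewed here, the paired t-test restricted to complete pairs and a (Welch) two-sample t-test on all available values, can be sketched with the standard library. This is an illustrative sketch, not the authors' code; missing values are encoded as None, and the toy data are invented:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic and degrees of freedom, using only complete
    pairs (both values present); discards the unpaired observations."""
    d = [a - b for a, b in zip(x, y) if a is not None and b is not None]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n)), n - 1

def two_sample_t(x, y):
    """Welch two-sample t statistic, using every non-missing value;
    ignores the pairing, so it wastes the correlation information."""
    xs = [a for a in x if a is not None]
    ys = [b for b in y if b is not None]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

# Partially paired toy data: the last pair is incomplete.
x = [1.0, 2.1, 3.2, 4.0, None]
y = [0.0, 1.0, 2.0, 3.1, 5.0]
t_paired, df = paired_t(x, y)
t_welch = two_sample_t(x, y)
```

On strongly correlated pairs the paired statistic is far larger than the two-sample one, which is exactly the trade-off the pooled and likelihood-based methods in the review try to balance.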
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance.
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
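A diversity-adjusted required information size can be sketched for a continuous outcome by taking the conventional two-group fixed-sample formula and inflating it by 1/(1 − D²). This is a simplification of what TSA software computes (which also uses trial-specific weights and sequential boundaries); the function name and default error rates are assumptions:

```python
from statistics import NormalDist

def required_information_size(delta, sigma, alpha=0.05, beta=0.20,
                              diversity=0.0):
    """Diversity-adjusted required information size (total participants)
    for a meta-analysis of a continuous outcome.

    delta: minimal clinically important mean difference
    sigma: assumed common standard deviation
    diversity: the D^2 heterogeneity measure, in [0, 1)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_beta = z.inv_cdf(1 - beta)        # power = 1 - beta
    # Conventional fixed-sample total size for two equal groups.
    n_fixed = 4 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
    # Heterogeneity inflation: more diversity demands more information.
    return n_fixed / (1 - diversity)

# Detecting a half-SD difference with 80% power at alpha = 0.05:
ris_homogeneous = required_information_size(delta=0.5, sigma=1.0)
ris_diverse = required_information_size(delta=0.5, sigma=1.0, diversity=0.5)
```

With D² = 0.5 the required information size doubles, illustrating why a meta-analysis far short of this threshold should be read as an interim analysis.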
The change of family size and structure in China.
1992-04-01
With socioeconomic development and the change in people's values, family size and structure in China have changed significantly. According to the 10% sample data from the 4th Census, the average family has 3.97 persons, 0.44 fewer than at the 3rd Census; among all types of families, 1-generation families account for 13.5%, 3-generation families for 18.5%, and 2-generation families for 68%. Instead of large families consisting of several generations and many members, small families have now become the principal family type in China. According to the analysis of the sample data from the 4th Census, family size is mainly determined by the fertility level in a particular region, and it also depends on economic development. Family size is therefore usually smaller in more developed regions, such as Beijing, Tianjin, Zhejiang, Liaoning, and Shanghai, where family size is only 3.08 persons; and it is generally larger in less developed regions such as Qinghai, Guangxi, Gansu, Xinjiang, and Tibet, where family size is as large as 5.13 persons. Specialists regard the increase in the number of families as one of the major consequences of economic development, changing lifestyles, and improved living standards. Young people now are more inclined to live separately from their parents. However, the increase in the number of families will undoubtedly place more pressure on housing and require more furniture and other durable consumer goods from the market. Therefore, the government and related social sectors should make corresponding plans and policies to cope with the growing number of families and the shrinking of family size, so as to promote family planning and socioeconomic development and to create better social circumstances for small families.
Response Variability in Commercial MOSFET SEE Qualification
George, J. S.; Clymer, D. A.; Turflinger, T. L.; ...
2016-12-01
Single-event effects (SEE) evaluation of five different part types of next-generation commercial trench MOSFETs indicates large part-to-part variation in determining a safe operating area (SOA) for drain-source voltage (V_DS) following a test campaign that exposed >50 samples per part type to heavy ions. These results suggest a determination of an SOA using small sample sizes may fail to capture the full extent of the part-to-part variability. An example method is discussed for establishing a safe operating area using a one-sided statistical tolerance limit based on the number of test samples. Finally, burn-in is shown to be a critical factor in reducing part-to-part variation in part response. Implications for radiation qualification requirements are also explored.
Development of Botanical Composition in Maribaya Pasture, Brebes, Central Java
NASA Astrophysics Data System (ADS)
Umami, N.; Ngadiyono, N.; Panjono; Agus, F. N.; Shirothul, H. M.; Budisatria, I. G. S.; Hendrawati, Y.; Subroto, I.
2018-02-01
The research aimed to observe the development of botanical composition in Maribaya pastures. The sampling method was cluster random sampling. The observed variables were the types of forage and the botanical composition in the pasture. Botanical composition was measured using the Line Intercept method, and production was estimated for each square metre from its dry matter measurement. Botanical sampling was performed using a 1 × 1 m quadrat. Observations were performed before the pasture was established (in 2015) and after (in 2017). Based on the research results, there was a significant difference between the forage types present in the pasture in 2015 and in 2017, attributable to adjustment of the pasture for Jabres cattle feed.
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
The effect of size, orientation and alloying on the deformation of AZ31 nanopillars
NASA Astrophysics Data System (ADS)
Aitken, Zachary H.; Fan, Haidong; El-Awady, Jaafar A.; Greer, Julia R.
2015-03-01
We conducted uniaxial compression of single crystalline Mg alloy, AZ31 (Al 3 wt% and Zn 1 wt%) nanopillars with diameters between 300 and 5000 nm with two distinct crystallographic orientations: (1) along the [0001] c-axis and (2) at an acute angle away from the c-axis, nominally oriented for basal slip. We observe single slip deformation for sub-micron samples nominally oriented for basal slip with the deformation commencing via a single set of parallel shear offsets. Samples compressed along the c-axis display an increase in yield strength compared to basal samples as well as significant hardening with the deformation being mostly homogeneous. We find that the "smaller is stronger" size effect in single crystals dominates any improvement in strength that may have arisen from solid solution strengthening. We employ 3D-discrete dislocation dynamics (DDD) to simulate compression along the [0001] and [11-22] directions to elucidate the mechanisms of slip and evolution of dislocation microstructure. These simulations show qualitatively similar stress-strain signatures to the experimentally obtained stress-strain data. Simulations of compression parallel to the [11-22] direction reveal the activation and motion of only -type dislocations and virtually no dislocation junction formation. Computations of compression along [0001] show the activation and motion of both
Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai
2017-01-01
Abstract Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3‐by‐1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration–recommended 3‐by‐1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3‐by‐1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90‐μg test dose and a 720‐μg reference dose (42% cost reduction). Combining a 180‐μg test dose and a 720‐μg reference dose produced an estimated 36% cost reduction. PMID:29281130
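The Monte Carlo sample-size logic described above can be sketched generically: simulate many trials at a candidate size, estimate power as the fraction reaching significance, and grow the size until the target power is met. The toy below is a hedged illustration with assumed parameters, a plain two-arm comparison of means rather than the authors' methacholine dose-response model.

```python
import random, math

def simulated_power(n_per_arm, delta, sigma=1.0, reps=1000, seed=7):
    """Monte Carlo power of a two-arm z-test for a true mean difference `delta`
    (sigma assumed known, two-sided alpha = 0.05)."""
    rng = random.Random(seed)
    z_crit = 1.959964
    se = sigma * math.sqrt(2.0 / n_per_arm)
    hits = 0
    for _ in range(reps):
        m1 = sum(rng.gauss(delta, sigma) for _ in range(n_per_arm)) / n_per_arm
        m0 = sum(rng.gauss(0.0, sigma) for _ in range(n_per_arm)) / n_per_arm
        if abs(m1 - m0) / se > z_crit:
            hits += 1
    return hits / reps

def required_n(delta=0.5, target=0.80):
    """Smallest per-arm n (on a coarse grid) whose simulated power hits target."""
    for n in range(40, 201, 5):
        if simulated_power(n, delta) >= target:
            return n
    return None

n_req = required_n()  # near the analytic ~63 per arm for d = 0.5
```

In a real application the simulation step would encode the dose-response structure and variance components of the actual design, but the outer search loop is the same.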
Hong, Mineui; Bang, Heejin; Van Vrancken, Michael; Kim, Seungtae; Lee, Jeeyun; Park, Se Hoon; Park, Joon Oh; Park, Young Suk; Lim, Ho Yeong; Kang, Won Ki; Sun, Jong-Mu; Lee, Se Hoon; Ahn, Myung-Ju; Park, Keunchil; Kim, Duk Hwan; Lee, Seunggwan; Park, Woongyang; Kim, Kyoung-Mee
2017-01-01
To generate accurate next-generation sequencing (NGS) data, the amount and quality of DNA extracted is critical. We analyzed 1564 tissue samples from patients with metastatic or recurrent solid tumors submitted for NGS according to their sample size, acquisition method, organ, and fixation, to propose appropriate tissue requirements. Of the 1564 tissue samples, 481 (30.8%) consisted of fresh-frozen (FF) tissue, and 1083 (69.2%) consisted of formalin-fixed paraffin-embedded (FFPE) tissue. We obtained successful NGS results in 95.9% of cases. Of the 481 FF biopsies, 262 tissue samples were from lung, and the mean fragment size was 2.4 mm. Compared to lung, GI tract tumor fragments showed a significantly lower DNA extraction failure rate (2.1% versus 6.1%, p = 0.04). For FFPE biopsy samples, the size of biopsy tissue was similar regardless of tumor type, with a mean of 0.8 × 0.3 cm, and the mean DNA yield per unstained slide was 114 ng. We obtained the highest amount of DNA from the colorectum (2353 ng) and the lowest amount from the hepatobiliary tract (760.3 ng), likely due to a relatively smaller biopsy size, extensive hemorrhage and necrosis, and lower tumor volume. On one unstained slide from FFPE operation specimens, the mean size of the specimen was 2.0 × 1.0 cm, and the mean DNA yield per unstained slide was 1800 ng. In conclusion, we present our experience on tissue requirements for an appropriate NGS workflow: >1 mm² for FF biopsy, >5 unstained slides for FFPE biopsy, and >1 unstained slide for FFPE operation specimens, for successful test results in 95.9% of cases. PMID:28477007
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
NASA Astrophysics Data System (ADS)
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because they are non-invasive and applicable in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Simultaneous processing of multiple data sets could therefore improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. The numerical example also shows that the surface relaxivity of the sample can be extracted from the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the additional data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
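The Gauss-Newton iteration at the core of such an inversion is easy to sketch on a toy problem. The example below is a hedged, minimal illustration; the single-exponential model, starting values, and step-halving damping are assumptions for the demo, not the authors' formulation. It fits y = a·exp(-t/tau), the simplest analogue of an NMR T2 decay, by repeatedly solving the 2×2 normal equations.

```python
import math

def gauss_newton_exp(t, y, a=1.0, tau=1.0, iters=30):
    """Fit y ~ a*exp(-t/tau) by damped Gauss-Newton: solve the 2x2 normal
    equations for the parameter update, halving the step if the fit worsens."""
    def sse(a_, tau_):
        if tau_ <= 0:
            return float("inf")  # keep the relaxation time physical
        return sum((yi - a_ * math.exp(-ti / tau_)) ** 2 for ti, yi in zip(t, y))

    for _ in range(iters):
        J11 = J12 = J22 = g1 = g2 = 0.0
        for ti, yi in zip(t, y):
            e = math.exp(-ti / tau)
            da, dtau = e, a * e * ti / tau ** 2   # partial derivatives of the model
            r = yi - a * e                        # residual
            J11 += da * da; J12 += da * dtau; J22 += dtau * dtau
            g1 += da * r; g2 += dtau * r
        det = J11 * J22 - J12 * J12
        step_a = (J22 * g1 - J12 * g2) / det      # (J^T J)^-1 J^T r, written out
        step_t = (J11 * g2 - J12 * g1) / det
        base, lam = sse(a, tau), 1.0
        while sse(a + lam * step_a, tau + lam * step_t) > base and lam > 1e-6:
            lam *= 0.5                            # damping keeps the iteration stable
        a, tau = a + lam * step_a, tau + lam * step_t
    return a, tau

t = [0.1 * k for k in range(30)]
y = [2.5 * math.exp(-ti / 0.8) for ti in t]       # noiseless synthetic decay curve
a_hat, tau_hat = gauss_newton_exp(t, y)           # recovers a = 2.5, tau = 0.8
```

A joint NMR-SIP inversion stacks residuals from both data sets into one vector and couples the parameters through the petrophysical relation, but the update step has this same shape.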
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for single nucleotide polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations, but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
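The mechanism behind stratification-driven type-I inflation is straightforward to demonstrate: when subpopulations differ in both allele frequency and trait mean, a SNP with no true effect becomes spuriously associated with the trait through ancestry. The sketch below is a hypothetical two-population simulation with assumed frequencies and trait shift, not the authors' design; it contrasts the rejection rate with and without allele-frequency divergence.

```python
import random, math

def type1_rate(freq_a=0.1, freq_b=0.5, pop_shift=1.0, n=400, n_sims=800, seed=3):
    """Fraction of null SNPs (no true genetic effect) declared significant when
    two subpopulations differ in trait mean and, possibly, allele frequency."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        g, yv = [], []
        for i in range(n):
            pop = i % 2                                   # half from each subpopulation
            f = freq_a if pop == 0 else freq_b
            geno = (rng.random() < f) + (rng.random() < f)  # 0/1/2 allele copies
            g.append(float(geno))
            yv.append(pop * pop_shift + rng.gauss(0.0, 1.0))  # trait shifted by ancestry only
        # z-test on the genotype-phenotype correlation
        mg, my = sum(g) / n, sum(yv) / n
        sg = math.sqrt(sum((x - mg) ** 2 for x in g) / n)
        sy = math.sqrt(sum((x - my) ** 2 for x in yv) / n)
        r = sum((a - mg) * (b - my) for a, b in zip(g, yv)) / (n * sg * sy)
        if abs(r * math.sqrt(n)) > 1.96:
            hits += 1
    return hits / n_sims

inflated = type1_rate()                              # divergent allele frequencies
controlled = type1_rate(freq_b=0.1, pop_shift=1.0)   # equal frequencies: no confounding
```

With divergent frequencies the rejection rate far exceeds the nominal 5%, which is why ancestry covariates such as MDS or principal components are included in the tests described above.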
Experience of elder abuse among older Korean immigrants.
Chang, Miya
2016-01-01
Studies on the scope and nature of Asian American elder abuse conducted with older immigrants are extremely limited. The overall purpose of this study was to examine the extent and type of elder abuse among older Korean immigrants, and to investigate critical predictors of elder abuse in this population. The sample consisted of 200 older Korean immigrants aged 60 to 90 years who resided in Los Angeles County in 2008. One of the key findings indicated that 58.3% of respondents experienced one or more types of elder abuse. Logistic regression indicated that the victims' health status and educational level were statistically significant predictors of the likelihood of experiencing abuse. The present study, although limited in sample size, measures, sampling methods, and population representation, has contributed to this important area of knowledge. It is recommended that future studies conduct research on elder abuse with more representative national samples that can measure the extent of abuse and neglect more accurately.
Oshima, Minako; Deitiker, Philip; Hastings-Ison, Tandy; Aoki, K Roger; Graham, H Kerr; Atassi, M Zouhair
2017-05-15
We have conducted a 26-month-long comparative study involving young patients (2-6 years old) with a clinical diagnosis of spastic equinus secondary to cerebral palsy who were treated with BoNT/A (BOTOX®, Allergan) tri-annually or annually. Serum samples were obtained to determine the presence or absence of blocking antibodies (Abs) by a mouse protection assay (MPA) and levels of anti-BoNT/A Abs by radioimmunoassay (RIA). HLA DQ alleles were typed using blood samples to determine the possible association of certain HLA type(s) with the disease or with Ab status. Blocking Abs were detected in only two out of 18 serum samples of the tri-annual group, but none were found in 20 samples of the annual group. The MPA-positive serum samples gave significantly higher anti-BoNT/A Ab-binding levels in RIA than the MPA-negative samples. On the other hand, when the two MPA-positive sample data were excluded, serum samples from the tri-annual and annual groups showed similar anti-BoNT/A Ab levels. Linkage of the disorder with particular HLA DQA1 and DQB1 allele types was not observed, owing to the small sample size. However, by combining results with other studies on BoNT/A-treated Caucasian patients with cervical dystonia (CD), we found that, among Caucasian patients treated with BoNT/A, DQA1*01:02 and DQB1*06:04 were higher in Ab-positive than in Ab-negative patients. The genetic linkage was on the threshold of corrected significance. Copyright © 2017. Published by Elsevier B.V.
Bioaerosols study in central Taiwan during summer season.
Wang, Chun-Chin; Fang, Guor-Cheng; Lee, LienYao
2007-04-01
Suspended particles, of which bioaerosols are one type, are one of the main causes of poor air quality in Taiwan. Bioaerosols include allergens such as fungi, bacteria, actinomycetes, arthropods and protozoa, as well as microbial products such as mycotoxins, endotoxins and glucans. When allergens and microbial products are suspended in the air, local air quality is severely affected. In addition, when the particle size is small enough to pass through the respiratory tract into the human body, the health of the local population is also threatened. Therefore, this study attempted to determine the concentration and types of bacteria during the summer period at four sampling sites in Taichung city, central Taiwan. The results indicated that the total average bacterial concentrations, using R2A medium incubated for 48 h, were 7.3 × 10^2 and 1.2 × 10^3 cfu/m³ at the Chung-Ming elementary sampling site during the daytime and night-time periods of the summer season, and 2.2 × 10^3 and 2.5 × 10^3 cfu/m³ at the Taichung refuse incineration plant sampling site. At the Rice Field sampling site, the daytime and night-time total average bacterial concentrations were 3.4 × 10^3 and 3.5 × 10^3 cfu/m³, and at the Central Taiwan Science Park sampling site they were 1.6 × 10^3 and 1.9 × 10^3 cfu/m³. Moreover, the average bacterial concentration increased with incubation time in a growth medium for particle sizes of 0.65-1.1, 1.1-2.1, 2.1-3.3, 3.3-4.7 and 4.7-7.0 µm.
The total average bacterial concentration showed no significant difference between the day and night sampling periods at any sampling site when bacterial concentration is expressed in terms of order of magnitude. The highest average bacterial concentration was found in the particle size range of 0.53-0.71 mm (average bioaerosol size was in the range of 2.1-4.7 µm) at each sampling site. In addition, more than 20 kinds of bacteria were found at each sampling site, and the bacterial shapes were rod, coccus and filamentous.
The effectiveness of increased apical enlargement in reducing intracanal bacteria.
Card, Steven J; Sigurdsson, Asgeir; Orstavik, Dag; Trope, Martin
2002-11-01
It has been suggested that the apical portion of a root canal is not adequately disinfected by typical instrumentation regimens. The purpose of this study was to determine whether instrumentation to sizes larger than typically used would more effectively remove culturable bacteria from the canal. Forty patients with clinical and radiographic evidence of apical periodontitis were recruited from the endodontic clinic. Mandibular cuspids (n = 2), bicuspids (n = 11), and molars (mesial roots) (n = 27) were selected for the study. Bacterial sampling was performed upon access and after each of two consecutive instrumentations. The first instrumentation utilized 1% NaOCl and 0.04 taper ProFile rotary files. The cuspid and bicuspid canals were instrumented to a #8 size and the molar canals to a #7 size. The second instrumentation utilized LightSpeed files and 1% NaOCl irrigation for further enlargement of the apical third. Typically, molars were instrumented to size 60 and cuspid/bicuspid canals to size 80. Our findings show that 100% of the cuspid/bicuspid canals and 81.5% of the molar canals were rendered bacteria-free after the first instrumentation sizes. The molar results improved to 89% after the second instrumentation. Of the (59.3%) molar mesial canals without a clinically detectable communication, 93% were rendered bacteria-free with the first instrumentation. Using a Wilcoxon rank sum test, statistically significant differences (p < 0.0001) were found between the initial sample and the samples after the first and second instrumentations. The differences between the samples that followed the two instrumentation regimens were not significant (p = 0.0617). It is concluded that simple root canal systems (without multiple canal communications) may be rendered bacteria-free when preparation of this type is utilized.
Preservice Teachers' Sense of Efficacy: Video vs. Face-to-Face Observations
ERIC Educational Resources Information Center
Chisenhall, Debra Ellen
2016-01-01
This study examined preservice elementary education students' sense of efficacy regarding student engagement, instructional strategies, and classroom management based on the type of observations they completed. A total sample size of 64 elementary education students enrolled in four sections of an introductory elementary education course and…
As part of the Desert Southwest Coarse Particulate Matter Study which characterized the composition of fine and coarse particulate matter in Pinal County, AZ, several source samples were collected from several different soil types to assist in source apportionment analysis of the...
Profiling Local Optima in K-Means Clustering: Developing a Diagnostic Technique
ERIC Educational Resources Information Center
Steinley, Douglas
2006-01-01
Using the cluster generation procedure proposed by D. Steinley and R. Henson (2005), the author investigated the performance of K-means clustering under the following scenarios: (a) different probabilities of cluster overlap; (b) different types of cluster overlap; (c) varying sample sizes, clusters, and dimensions; (d) different multivariate…
Catholic High Schools and Their Finances, 1980.
ERIC Educational Resources Information Center
Bredeweg, Frank H.
The information contained in this report was drawn from data provided by a national sample of 200 Catholic high schools. The schools were selected to reflect types (private, Catholic, diocesan, and parish schools), enrollment sizes, and geographic location. The report addresses these areas. First, information is provided to point out the financial…
Talari, Roya; Varshosaz, Jaleh; Mostafavi, Seyed Abolfazl; Nokhodchi, Ali
2009-01-01
Micronization using a milling process to enhance dissolution rate is extremely inefficient due to high energy input and disruptions in the crystal lattice, which can cause physical or chemical instability. Therefore, the aim of the present study was to use an in situ micronization process through the pH-change method to produce micron-sized gliclazide particles for fast dissolution and hence better bioavailability. Gliclazide was recrystallized in the presence of 12 different stabilizers, and the effects of each stabilizer on micromeritic behavior, morphology of microcrystals, dissolution rate and solid state of the recrystallized drug particles were investigated. The results showed that recrystallized samples had a faster dissolution rate than untreated gliclazide particles, and the fastest dissolution rate was observed for the samples recrystallized in the presence of PEG 1500. Some of the recrystallized drug samples dissolved 100% within the first 5 min, showing at least 10 times the dissolution rate of untreated gliclazide powders. Micromeritic studies showed that the in situ micronization technique via the pH-change method is able to produce smaller particle sizes with a high surface area. The results also showed that the type of stabilizer had a significant impact on the morphology of the recrystallized drug particles. Untreated gliclazide is rod- or rectangular-shaped, whereas the crystals produced in the presence of stabilizers, depending on the type of stabilizer, were very fine particles with irregular, cubic, rectangular, granular and spherical/modular shapes. The results showed that crystallization of gliclazide in the presence of stabilizers reduced the crystallinity of the samples, as confirmed by XRPD and DSC results. In situ micronization of gliclazide through the pH-change method can successfully be used to produce micron-sized drug particles to enhance dissolution rate.
Zhang, Ming; Wang, Ai-Juan; Li, Jun-Ming; Song, Na
2017-10-01
Stearic acid (Sa) was used to modify the surface properties of hydroxyapatite (HAp) in different solvents (water, ethanol or dichloromethane (CH2Cl2)). The effects of the different solvents on the properties of the HAp particles (activation ratio, grafting ratio, chemical properties), emulsion properties (emulsion stability, emulsion type, droplet morphology) and the cured materials (morphology, average pore size) were studied. FT-IR and XPS results confirmed that an interaction occurred between stearic acid and HAp particles. Stable O/W and W/O type Pickering emulsions were prepared using unmodified and Sa-modified HAp nanoparticles, respectively, indicating that a catastrophic inversion of the Pickering emulsion happened, possibly because of the enhanced hydrophobicity of the HAp particles after surface modification. Porous materials with different structures and pore sizes were obtained using the Pickering emulsion as a template via an in situ solvent evaporation method. The results indicated that the microstructures of the cured samples differ from each other when HAp is surface-modified in different solvents. HAp particles fabricated using ethanol as the solvent have higher activation and grafting ratios, and a Pickering emulsion with higher stability and cured porous materials with uniform morphology were obtained compared with samples prepared using water and CH2Cl2 as solvents. In conclusion, surface modification of HAp in different solvents played a very important role in the stability of the Pickering emulsion as well as in the microstructure of the cured samples. It is better to use ethanol as the solvent for Sa-modified HAp particles, which increases the stability of the Pickering emulsion and gives cured samples with uniform pore size. Copyright © 2017 Elsevier B.V. All rights reserved.
Koreňová, Janka; Rešková, Zuzana; Véghová, Adriana; Kuchta, Tomáš
2015-01-01
Contamination by Staphylococcus aureus of the production environment of three small or medium-sized food-processing factories in Slovakia was investigated on the basis of sub-species molecular identification by multiple-locus variable-number tandem-repeat analysis (MLVA). On the basis of MLVA profiling, bacterial isolates were assigned to 31 groups. Data from repeated samplings over a period of 3 years made it possible to draw spatial and temporal maps of the contamination routes for the individual factories, as well as to identify potential persistent strains. The information obtained by MLVA typing allowed the sources and routes of contamination to be identified and, subsequently, will allow optimization of the technical and sanitation measures needed to ensure hygiene.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
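Self-averaging, the ordinary behaviour the authors contrast with, is easy to illustrate numerically: in a standard coalescence process, sample-to-sample fluctuations of the largest-cluster fraction shrink as the system grows. The sketch below is a generic random-coalescence toy model with assumed sizes and run counts, not the anomalous growth rules studied in the paper; it measures that shrinkage with a union-find structure.

```python
import random

def largest_cluster_fraction(n, m, rng):
    """Relative size of the largest cluster after m random coalescence events
    (random pair merges) among n initial monomers, via union-find."""
    parent = list(range(n))
    size = [1] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a                  # merge smaller cluster into larger
            size[a] += size[b]
    return max(size[find(i)] for i in range(n)) / n

def sample_std(n, runs=40, seed=11):
    """Sample-to-sample standard deviation of the order parameter at fixed density."""
    rng = random.Random(seed)
    vals = [largest_cluster_fraction(n, n, rng) for _ in range(runs)]
    mu = sum(vals) / runs
    return (sum((v - mu) ** 2 for v in vals) / runs) ** 0.5

std_small, std_large = sample_std(200), sample_std(1600)  # fluctuations shrink with n
```

In a genuinely non-self-averaging process, by contrast, this standard deviation would not vanish as n grows, which is the anomaly the paper reports.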
Gallegos, Autumn M.; Crean, Hugh F.; Pigeon, Wilfred R.; Heffner, Kathi L.
2018-01-01
Posttraumatic stress disorder (PTSD) is a chronic and debilitating disorder that affects the lives of 7-8% of adults in the U.S. Although several interventions demonstrate clinical effectiveness for treating PTSD, many patients continue to have residual symptoms and ask for a variety of treatment options. Complementary health approaches, such as meditation and yoga, hold promise for treating symptoms of PTSD. This meta-analysis evaluates the effect size (ES) of yoga and meditation on PTSD outcomes in adult patients. We also examined whether the intervention type, PTSD outcome measure, study population, sample size, or control condition moderated the effects of complementary approaches on PTSD outcomes. The studies included were 19 randomized control trials with data on 1,173 participants. A random effects model yielded a statistically significant ES in the small to medium range (ES = −.39, p < .001, 95% CI [−.57, −.22]). There were no appreciable differences between intervention types, study population, outcome measures, or control condition. There was, however, a marginally significant higher ES for sample size ≤ 30 (ES = −.78, k = 5). These findings suggest that meditation and yoga are promising complementary approaches in the treatment of PTSD among adults and warrant further study. PMID:29100863
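A random-effects pooled effect of the kind reported here is typically computed with the DerSimonian-Laird estimator. The sketch below uses made-up study effects and variances, not the 19 trials analysed in the paper: a fixed-effect pass gives the heterogeneity statistic Q, which yields the between-study variance tau², which in turn re-weights the studies.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size via the DerSimonian-Laird estimate of
    the between-study variance tau^2, with a 95% confidence interval."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # heterogeneity Q
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                         # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# hypothetical study-level standardized mean differences and their variances
effects = [-0.9, -0.1, -0.7, -0.05, -0.5]
variances = [0.02, 0.03, 0.02, 0.04, 0.03]
pooled, ci, tau2 = dersimonian_laird(effects, variances)
```

The moderator comparisons in the abstract (by intervention type, sample size, and so on) amount to running such a pooling within subgroups and comparing the pooled estimates.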
Predicting permeability with NMR imaging in the Edwards Limestone/Stuart City Trend
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewitt, H.; Globe, M.; Sorenson, R.
1996-09-01
Determining pore size and pore geometry relationships in carbonate rocks and relating both to permeability is difficult using traditional logging methods. This problem is further complicated by the presence of abundant microporosity (pore size less than 62 microns) in the Edwards Limestone. The use of nuclear magnetic resonance (NMR) imaging allows an alternative approach to evaluating the pore types present by examining the response of hydrogen nuclei contained within the free-fluid pore space. By testing the hypothesis that larger pore types exhibit an NMR signal decay much slower than smaller pore types, an estimate of the pore type present (i.e., vuggy, interparticle, or micropores) can be inferred. Calibration of the NMR decay curve to known samples with measured petrophysical properties allows for improved predictability of pore types and permeability. The next stage of the analysis involves the application of the calibration technique to the borehole environment using an NMR logging tool to more accurately predict production performance.
Effect of ambient humidity on the rate at which blood spots dry and the size of the spot produced.
Denniff, Philip; Woodford, Lynsey; Spooner, Neil
2013-08-01
For shipping and storage, dried blood spot (DBS) samples must be sufficiently dry to protect the integrity of the sample. When the blood is spotted, the humidity has the potential to affect the size of the spot created and the speed at which it dries. The areas of DBS produced on three types of substrate were not affected by the humidity under which they were generated. DBS samples reached a steady moisture content 150 min after spotting, or 90 min at humidities below 60% relative humidity. All packaging materials examined provided some degree of protection from extreme external conditions; however, none of the packaging examined provided a total moisture barrier against extreme environmental conditions. Humidity was shown not to affect the spot area, and DBS samples were ready for shipping and storage 2 h after spotting. The packaging solutions examined all provided good protection from external high-humidity conditions.
Development of shrinkage resistant microfibre-reinforced cement-based composites
NASA Astrophysics Data System (ADS)
Hamedanimojarrad, P.; Adam, G.; Ray, A. S.; Thomas, P. S.; Vessalas, K.
2012-06-01
Different types of shrinkage may cause serious durability problems in restrained concrete elements owing to crack formation and propagation. Several classes of fibre are used by the concrete industry to reduce crack size and crack number. In previous studies, most of these fibre types were found to be effective in reducing the number and size of cracks, but not in reducing shrinkage strain. This study deals with the influence of a newly introduced type of polyethylene fibre on drying shrinkage reduction. The novel fibre is a polyethylene microfibre with a new geometry, which has been shown to reduce total shrinkage in mortars. This hydrophobic polyethylene microfibre also reduces moisture loss from mortar samples. Experimental results on short- and long-term drying shrinkage, as well as on several other properties, are reported. The hydrophobic polyethylene microfibre showed promising improvement in shrinkage reduction even at very low concentrations (0.1% of cement weight).
High throughput MLVA-16 typing for Brucella based on the microfluidics technology
2011-01-01
Background Brucellosis, a zoonosis caused by the genus Brucella, has been eradicated in Northern Europe, Australia, the USA and Canada, but remains endemic in most areas of the world. The strain and biovar typing of Brucella field samples isolated in outbreaks is useful for tracing back the source of infection and may be crucial for discriminating naturally occurring outbreaks from bioterrorist events, Brucella being a potential biological warfare agent. In recent years MLVA-16 has been described for Brucella spp. genotyping. The MLVA band profiles may be resolved by different techniques, i.e. manual agarose gels, capillary electrophoresis sequencing systems or microfluidic Lab-on-Chip electrophoresis. In this paper we describe a high-throughput system of MLVA-16 typing for Brucella spp. using microfluidics technology. Results The Caliper LabChip 90 equipment was evaluated for MLVA-16 typing of sixty-three Brucella samples. Furthermore, in order to validate the system, DNA samples previously resolved by a sequencing system and Agilent technology were genotyped de novo. The comparison of the MLVA typing data obtained by the Caliper equipment with those previously obtained by the other analysis methods showed good correlation. However, the outputs were not fully accurate, as the Caliper DNA fragment sizes showed discrepancies compared with the real data, and a conversion table from observed to expected data was created. Conclusion In this paper we describe MLVA-16 typing using a rapid, sophisticated microfluidics technology for detection of amplification product sizes. The comparison of the MLVA typing data produced by the Caliper LabChip 90 system with the data obtained by different techniques showed a general concordance of the results.
Furthermore, this platform represents a significant improvement in terms of handling, data acquisition, computational efficiency and rapidity, allowing strain genotyping to be performed in one sixth of the time required by other microfluidic systems such as the Agilent 2100 Bioanalyzer. Finally, this platform can be considered a valid alternative to standard genotyping techniques, and is particularly useful when dealing with a large number of samples in a short time. These data confirm that this technology represents a significant advancement in high-throughput, accurate Brucella genotyping. PMID:21435217
Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?
Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve
2016-03-01
Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored whether compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset and examining their performance in reproducing fish consumption advisories and temporal trends. The methods resulted in varying amounts of sample reduction (average 34-72%), but all except one reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared with analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing the in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole-fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could result in substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
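The 5 cm size-bin compositing strategy described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the fish records and the function name `composite_by_length_bin` are hypothetical.

```python
# Sketch of compositing fish within 5 cm length bins: one pooled mercury
# analysis per occupied bin instead of one analysis per fish.

def composite_by_length_bin(fish, bin_cm=5.0):
    """Group (length_cm, hg_ppm) records into length bins and average mercury
    within each bin, mimicking physical compositing of similar-sized fish."""
    bins = {}
    for length, hg in fish:
        key = int(length // bin_cm)  # index of the 5 cm bin this fish falls in
        bins.setdefault(key, []).append(hg)
    # One composite (mean concentration) per occupied bin
    return {key: sum(v) / len(v) for key, v in bins.items()}

# Hypothetical fish records: (length in cm, mercury in ppm)
fish = [(22.0, 0.10), (23.5, 0.12), (31.0, 0.20), (33.0, 0.22), (41.0, 0.35)]
composites = composite_by_length_bin(fish)
print(len(fish), "->", len(composites))  # prints: 5 -> 3
```

Five individual analyses collapse to three composite analyses here; the savings grow with the number of fish per bin, which is the cost mechanism the abstract quantifies.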
NASA Astrophysics Data System (ADS)
Bolling, Denzell Tamarcus
A significant amount of research has been devoted to the characterization of new engineering materials. Searching for new alloys that may improve weight, ultimate strength, or fatigue life are just a few of the reasons why researchers study different materials. In support of that mission, this study focuses on the effects of specimen geometry and size on the dynamic failure of AA2219 aluminum alloy subjected to impact loading. Using the Split Hopkinson Pressure Bar (SHPB) system, samples of different geometries, including cubic, rectangular, cylindrical, and frustum samples, are loaded at strain rates ranging from 1000 s⁻¹ to 6000 s⁻¹. The deformation properties of the different geometries, including the potential for the formation of adiabatic shear bands, are compared. Overall, the cubic geometry achieves the highest critical strain and maximum stress values at low strain rates, and the rectangular geometry has the highest critical strain and maximum stress at high strain rates. The frustum geometry consistently achieves the lowest maximum stress value compared with the other geometries under equal strain rates. All sample types clearly indicated susceptibility to strain localization at different locations within the sample geometry. Micrograph analysis indicated that adiabatic shear band geometry was influenced by sample geometry, and that specimens with a circular cross section are more susceptible to shear band formation than specimens with a rectangular cross section.
Røren Nordén, Kristine; Dagfinrud, Hanne; Løvstad, Amund; Raastad, Truls
Introduction. The purpose of this study was to investigate body composition, muscle function, and muscle morphology in patients with spondyloarthritis (SpA). Methods. Ten male SpA patients (mean ± SD age 39 ± 4.1 years) were compared with ten healthy controls matched for sex, age, body mass index, and self-reported level of physical exercise. Body composition was measured by dual-energy X-ray absorptiometry. Musculus quadriceps femoris (QF) strength was assessed by maximal isometric contractions prior to a test of muscular endurance. Magnetic resonance imaging of the QF was used to measure muscle size and calculate specific muscle strength. Percutaneous needle biopsy samples were taken from m. vastus lateralis. Results. SpA patients presented with significantly lower appendicular lean body mass (LBM) (p = 0.02), but there was no difference in bone mineral density, fat mass, or total LBM. Absolute QF strength was significantly lower in SpA patients (p = 0.03), with a parallel trend for specific strength (p = 0.08). Biopsy samples from the SpA patients revealed significantly smaller cross-sectional area (CSA) of type II muscle fibers (p = 0.04), but no difference in CSA of type I fibers. Conclusions. Results indicate that the presence of SpA disease is associated with reduced appendicular LBM, muscle strength, and type II fiber CSA.
van der Gaag, Kristiaan J; de Leeuw, Rick H; Laros, Jeroen F J; den Dunnen, Johan T; de Knijff, Peter
2018-07-01
For two decades, short tandem repeats (STRs) have been the preferred markers for human identification, routinely analysed by fragment length analysis. Here we present a novel set of short hypervariable autosomal microhaplotypes (MHs) that have four or more SNPs in a span of less than 70 nucleotides (nt). These MHs display a discriminating power approaching that of STRs and provide a powerful alternative for the analysis of forensic samples that are problematic when the STR fragment size range exceeds the integrity range of severely degraded DNA, or when multiple donors contribute to an evidentiary stain and STR stutter artefacts complicate profile interpretation. MH typing was developed using the power of massively parallel sequencing (MPS), enabling new powerful, fast and efficient SNP-based approaches. MH candidates were obtained from queries of data from the 1000 Genomes and Genome of the Netherlands (GoNL) projects. Wet-lab analysis of 276 globally dispersed samples and 97 samples from nine large CEPH families assisted locus selection and corroboration of informative value. We infer that MHs represent an alternative marker type with good discriminating power per locus (allowing the use of a limited number of loci), small amplicon sizes and an absence of stutter artefacts, which can be especially helpful when unbalanced mixed samples are submitted for human identification. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Silica dust exposure: Effect of filter size to compliance determination
NASA Astrophysics Data System (ADS)
Amran, Suhaily; Latif, Mohd Talib; Khan, Md Firoz; Leman, Abdul Mutalib; Goh, Eric; Jaafar, Shoffian Amin
2016-11-01
Monitoring of respirable dust was performed using an integrated sampling system consisting of a sampling pump attached to filter media and a separating device such as a cyclone or special cassette. Depending on the method selected, the filter is either a 25 mm or a 37 mm polyvinyl chloride (PVC) filter. The aim of this study was to compare the performance of the two filter types during personal respirable dust sampling for silica dust under field conditions. The comparison focused on the final compliance judgment based on both datasets. Eight-hour parallel sampling of personal respirable dust exposure was performed on 30 crusher operators at six quarries. Each crusher operator wore a parallel set of integrated sampling trains containing either a 25 mm or a 37 mm PVC filter. Each set consisted of a standard-flow SKC sampler attached to an SKC GS3 cyclone and a two-piece cassette loaded with a 5.0 µm PVC filter. Samples were analyzed by the gravimetric technique. Personal respirable dust exposures measured with the two types of filter showed a significant positive correlation (p < 0.05) with a moderate relationship (r² = 0.6431). Personal exposure based on the 25 mm PVC filter indicated 0.1% non-compliance in the overall data, while the 37 mm PVC filter indicated similar findings at 0.4%. Both datasets showed similar arithmetic means (AM) and geometric means (GM). Overall, we conclude that personal respirable dust exposure based on either the 25 mm or the 37 mm PVC filter will give the same compliance determination. Both filters are reliable for use in respirable dust monitoring for silica-related exposure.
Sawyer, Rachel Mary; Fenosoa, Zo Samuel Ella; Andrianarimisa, Aristide; Donati, Giuseppe
2017-01-01
Madagascar is one of the world's biodiversity hotspots. The island's past and current rates of deforestation and habitat disturbance threaten its plethora of endemic biodiversity. On Madagascar, tavy (slash-and-burn agriculture), land conversion for rice cultivation, illegal hardwood logging and bushmeat hunting are the major contributors to habitat disturbance. Understanding species-specific responses to habitat disturbance across different habitat types is crucial when designing conservation strategies. We surveyed three nocturnal lemur species in four forest types of varying habitat disturbance on the Masoala Peninsula, northeastern Madagascar. We present here updated abundance and density estimates for the Endangered Avahi mooreorum and Lepilemur scottorum, and for Microcebus sp. Distance sampling surveys were conducted on 11 transects, covering a total of 33 km after repeated transect walks. We collected data on tree height, bole height, diameter at breast height, canopy cover and tree density using point-quarter sampling to characterise the four forest types (primary lowland, primary littoral, selectively logged and agricultural mosaic). Median encounter rates by forest type ranged from 1 to 1.5 individuals (ind.)/km (Microcebus sp.), 0-1 ind./km (A. mooreorum) and 0-1 ind./km (L. scottorum). Species density estimates were calculated at 232.31 ind./km² (Microcebus sp.) and 121.21 ind./km² (A. mooreorum), while no density estimate is provided for L. scottorum due to a small sample size. Microcebus sp. was the most tolerant of habitat disturbance, exhibiting no significant effect of forest type on abundance. Its small body size, omnivorous diet and generalised locomotion appear to allow it to tolerate a variety of habitat disturbances. Both A. mooreorum and L. scottorum showed significant effects of forest type on their respective abundances. This study suggests that the specialist locomotion and diet of A. mooreorum and L. scottorum make them susceptible to the effects of increasing habitat disturbance.
Small-sized microplastics and pigmented particles in bottled mineral water.
Oßmann, Barbara E; Sarau, George; Holtmannspötter, Heinrich; Pischetsrieder, Monika; Christiansen, Silke H; Dicke, Wilhelm
2018-09-15
Up to now, only a few studies of microparticle contamination in bottled mineral water have been published, with a smallest analysed particle size of 5 μm. However, for toxicological reasons, microparticles smaller than 1.5 μm in particular are under critical discussion. Therefore, in the present study, 32 samples of bottled mineral water were investigated for contamination by microplastics, pigment and additive particles. Through the use of aluminium-coated polycarbonate membrane filters and micro-Raman spectroscopy, a lowest analysed particle size of 1 μm was achieved. Microplastics were found in water from all bottle types: in single-use and reusable bottles made of poly(ethylene terephthalate) (PET) as well as in glass bottles. The amount of microplastics in mineral water varied from 2649 ± 2857 per litre in single-use PET bottles up to 6292 ± 10521 per litre in glass bottles. While in plastic bottles the predominant polymer type was PET, in glass bottles various polymers such as polyethylene or styrene-butadiene copolymer were found. Hence, besides the packaging itself, other contamination sources have to be considered. Pigment particles were detected in high amounts in reusable, paper-labelled bottles (195047 ± 330810 pigment particles per litre in glass and 23594 ± 25518 pigment particles per litre in reusable paper-labelled PET bottles). The pigment types found in the water samples were the same as those used for label printing, indicating the bottle cleaning process as a possible contamination route. Furthermore, on average 708 ± 1024 particles per litre of the additive tris(2,4-di-tert-butylphenyl)phosphite were found in reusable PET bottles. This additive might be leached from the bottle material itself. Over 90% of the detected microplastics and pigment particles were smaller than 5 μm and thus not covered by previous studies. In summary, this is the first study to report microplastics, pigment and additive particles in bottled mineral water samples with a smallest analysed particle size of 1 μm. Copyright © 2018 Elsevier Ltd. All rights reserved.
Class III dento-skeletal anomalies: rotational growth and treatment timing.
Mosca, G; Grippaudo, C; Marchionni, P; Deli, R
2006-03-01
The interception of a Class III malocclusion requires a long-term growth prediction in order to estimate the subject's evolution from the prepubertal phase to adulthood. The aim of this retrospective longitudinal study was to highlight the differences in facial morphology in relation to the direction of mandibular growth in a sample of subjects with Class III skeletal anomalies, divided on the basis of their Petrovic's auxological categories and rotational types. The study involved 20 patients (11 females and 9 males) who started therapy before reaching their pubertal peak and were followed up for a mean of 4.3 years (range: 3.9-5.5 years). Despite the small sample size, the definition of the rotational type of growth was the main diagnostic element for setting the correct individualised therapy. We therefore believe that observation of a larger sample would reinforce the diagnostic-therapeutic validity of Petrovic's auxological categories, allow an evaluation of all rotational types, and improve the statistical significance of the results obtained.
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified nets for 11 of the 12 species evaluated. 
Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk of left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and with an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild to moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed by cardiac echocardiography. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum-information group sequential design with sample size re-estimation is implemented, allowing sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditional on the status of enrollment. The decision to increase the sample size depends on the observed treatment effect. A repeated-measures analysis model using available data at Weeks 24 and 48, with a backup model of an ANCOVA analyzing change from baseline to the final non-missing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, since stopping for success is planned at the interim efficacy analysis.
If enrollment is slower than anticipated, the smaller sample size available at the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, whether for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.
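The abstract does not give the re-estimation formula; a common textbook approach, sketched here under that assumption, recomputes the standard two-sample sample size for comparing means once the nuisance parameter (the outcome SD) has been estimated from interim data. The SD and effect-size numbers below are purely illustrative, not PRIMO's.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided Type I error
    z_b = NormalDist().inv_cdf(power)          # power = 1 - beta
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical numbers: design-stage SD guess vs. larger SD seen at interim
planned = n_per_group(sigma=15.0, delta=10.0)
interim = n_per_group(sigma=20.0, delta=10.0)
print(planned, interim)  # prints: 36 63 (sample size revised upward)
```

The re-estimation step is just this recomputation with the interim SD substituted for the planning SD; the actual trial wraps it in a group sequential framework with alpha spending.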
Chen, Hua-xing; Tang, Hong-ming; Duan, Ming; Liu, Yi-gang; Liu, Min; Zhao, Feng
2015-01-01
In this study, the effects of gravitational settling time, temperature, centrifugation speed and time, flocculant type and dosage, bubble size and gas amount were investigated. The results show that simply increasing settling time and temperature is of no use for oil-water separation of the three wastewater samples. As far as oil-water separation efficiency is concerned, increasing centrifugal speed and centrifugal time is highly effective for the L sample, has a certain effect on the J sample, but is not effective for the S sample. The flocculants are highly effective for the S and L samples, and the oil-water separation efficiency increases with increasing concentration of inorganic cationic flocculants. Critical reagent concentrations exist for the organic cationic and nonionic flocculants, above or below which the treatment efficiency decreases. Flotation is an effective approach for oil-water separation of polymer-containing wastewater from the three oilfields. The oil-water separation efficiency can be enhanced by increasing flotation agent concentration, flotation time and gas amount, and by decreasing bubble size.
Age-related differences in muscle fatigue vary by contraction type: a meta-analysis.
Avin, Keith G; Law, Laura A Frey
2011-08-01
During senescence, despite the loss of strength (force-generating capability) associated with sarcopenia, muscle endurance may improve for isometric contractions. The purpose of this study was to perform a systematic meta-analysis of young versus older adults, considering likely moderators (i.e., contraction type, joint, sex, activity level, and task intensity). A two-stage systematic review identified potential studies from PubMed, CINAHL, PEDro, EBSCOhost: ERIC, EBSCOhost: SPORTDiscus, and The Cochrane Library. Studies were considered if they reported fatigue tasks (voluntary activation) performed at a relative intensity in both healthy young (18-45 years of age) and old (≥55 years of age) adults. Sample size, mean and variance outcome data (i.e., fatigue index or endurance time), joint, contraction type, task intensity (percentage of maximum), sex, and activity levels were extracted. Effect sizes were (1) computed for all data points; (2) subgrouped by contraction type, sex, joint or muscle group, intensity, or activity level; and (3) further subgrouped between contraction type and the remaining moderators. Out of 3,457 potential studies, 46 publications (with 78 distinct effect-size data points) met all inclusion criteria. A lack of available data limited subgroup analyses (i.e., sex, intensity, joint), as did a disproportionate spread of data (most intensities ≥50% of maximum voluntary contraction). Overall, older adults were able to sustain relative-intensity tasks significantly longer, or with less force decay, than younger adults (effect size = 0.49). However, this age-related difference was present only for sustained and intermittent isometric contractions; the advantage was lost for dynamic tasks. When controlling for contraction type, the additional moderators played minor roles.
Identifying muscle endurance capabilities in the older adult may provide an avenue to improve functional capabilities, despite a clearly established decrement in peak torque.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass at 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time- and labor-intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m.
We conducted regression analyses with the LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip-harvest data for two different clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
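The precision target above (mean estimated to within ±10% with 95% confidence) maps to a standard normal-approximation sample size, n = (z · CV / r)², where CV is the coefficient of variation and r the relative error. This is one common way to do the calculation, not necessarily the authors' exact method; the CV values below are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_precision(cv, rel_error=0.10, conf=0.95):
    """Samples needed so the sample mean falls within ±rel_error (as a
    fraction of the true mean) with the given confidence, given the
    coefficient of variation (sd/mean) of individual measurements."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # 1.96 for 95% confidence
    return ceil((z * cv / rel_error) ** 2)

# Hypothetical biomass CVs: more variable vegetation needs more clip plots
for cv in (0.3, 0.6, 0.9):
    print(cv, n_for_relative_precision(cv))  # prints: 0.3 35 / 0.6 139 / 0.9 312
```

Note the quadratic penalty: doubling the CV quadruples the required number of clip harvest plots, which is why characterizing variance before operations matters.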
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
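The simulated-sampling idea can be illustrated with a toy version: place random square plots of different sizes on a synthetic map and compare the spread of the resulting extent estimates at equal mapped area. Everything here (the landscape, wetland fraction, plot counts) is invented for the sketch; the study's actual simulation used real California mapping data and was far richer.

```python
import random

def simulate(side_km, n_plots, landscape, trials=200, seed=1):
    """Per trial, place n_plots random square plots (side_km x side_km,
    1 grid cell = 1 km) and estimate the wetland fraction; return the true
    fraction and the SD of the estimates across trials."""
    rng = random.Random(seed)
    size = len(landscape)
    true_frac = sum(map(sum, landscape)) / size ** 2
    estimates = []
    for _ in range(trials):
        hits = cells = 0
        for _ in range(n_plots):
            r = rng.randrange(size - side_km + 1)
            c = rng.randrange(size - side_km + 1)
            hits += sum(landscape[r + i][c + j]
                        for i in range(side_km) for j in range(side_km))
            cells += side_km ** 2
        estimates.append(hits / cells)
    mean = sum(estimates) / trials
    sd = (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5
    return true_frac, sd

# Synthetic 50 x 50 km landscape, roughly 10% wetland cells (hypothetical)
rng = random.Random(0)
landscape = [[1 if rng.random() < 0.1 else 0 for _ in range(50)] for _ in range(50)]
# Equal mapped area: sixteen 1 km2 plots vs four 4 km2 plots (2 km side)
print("1 km2 plots:", simulate(1, 16, landscape))
print("4 km2 plots:", simulate(2, 4, landscape))
```

Comparing the two SDs at fixed mapped area is the essence of the efficiency comparison; the rare-subtype capture question would need a landscape with clustered, rare features added.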
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
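A minimal sketch of the bootstrap ranking-convergence idea: resample the model runs, recompute a sensitivity measure, and check how stable the parameter ranking is. Absolute input-output correlation stands in here for a proper sensitivity index, and the toy model and all names are assumptions, not the paper's methods.

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def ranking(xs, y):
    """Order input factors by |correlation| with the output
    (a crude stand-in for a sensitivity index)."""
    scores = [abs(corr(x, y)) for x in xs]
    return sorted(range(len(xs)), key=lambda i: -scores[i])

def ranking_convergence(xs, y, n_boot=200, seed=42):
    """Fraction of bootstrap resamples whose ranking matches the
    full-sample ranking; values near 1 suggest the ranking has converged."""
    rng = random.Random(seed)
    full, n, hits = ranking(xs, y), len(y), 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [[x[i] for i in idx] for x in xs]
        by = [y[i] for i in idx]
        hits += ranking(bx, by) == full
    return hits / n_boot

# Toy model: y depends strongly on x0, more weakly on x1, not at all on x2
rng = random.Random(0)
x0 = [rng.random() for _ in range(300)]
x1 = [rng.random() for _ in range(300)]
x2 = [rng.random() for _ in range(300)]
y = [3 * a + b + rng.gauss(0, 0.1) for a, b in zip(x0, x1)]
print(ranking([x0, x1, x2], y))           # expect [0, 1, 2]
print(ranking_convergence([x0, x1, x2], y))
```

Running the same check at increasing sample sizes, and stopping once the convergence fraction stabilizes, mirrors the paper's point that ranking can converge well before the index values do.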
Compensating vacancy defects in Sn- and Mg-doped In2O3
NASA Astrophysics Data System (ADS)
Korhonen, E.; Tuomisto, F.; Bierwagen, O.; Speck, J. S.; Galazka, Z.
2014-12-01
MBE-grown Sn- and Mg-doped epitaxial In2O3 thin-film samples with varying doping concentrations have been measured using positron Doppler spectroscopy and compared to a bulk crystal reference. Samples were subjected to oxygen or vacuum annealing and the effect on vacancy type defects was studied. Results indicate that after oxygen annealing the samples are dominated by cation vacancies, the concentration of which changes with the amount of doping. In highly Sn-doped In2O3 , however, these vacancies are not the main compensating acceptor. Vacuum annealing increases the size of vacancies in all samples, possibly by clustering them with oxygen vacancies.
Perceived racism and mental health among Black American adults: a meta-analytic review.
Pieterse, Alex L; Todd, Nathan R; Neville, Helen A; Carter, Robert T
2012-01-01
The literature indicates that perceived racism tends to be associated with adverse psychological and physiological outcomes; however, findings in this area are not yet conclusive. In this meta-analysis, we systematically reviewed 66 studies (total sample size of 18,140 across studies), published between January 1996 and April 2011, on the associations between racism and mental health among Black Americans. Using a random-effects model, we found a positive association between perceived racism and psychological distress (r = .20). We found a moderation effect for psychological outcomes, with anxiety, depression, and other psychiatric symptoms having a significantly stronger association than quality of life indicators. We did not detect moderation effects for type of racism scale, measurement precision, sample type, or type of publication. Implications for research and practice are discussed. (c) 2012 APA, all rights reserved.
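The random-effects pooling of correlations used in meta-analyses of this kind can be sketched as below (DerSimonian-Laird estimation on Fisher-z transformed correlations). The per-study r and n values are invented for illustration; they are not the 66 reviewed studies.

```python
import numpy as np

r = np.array([0.15, 0.25, 0.18, 0.30, 0.22])   # per-study correlations (hypothetical)
n = np.array([120, 250, 90, 400, 180])          # per-study sample sizes (hypothetical)

z = np.arctanh(r)                 # Fisher z transform of each correlation
v = 1.0 / (n - 3)                 # within-study sampling variance of z
w = 1.0 / v                       # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)            # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

w_star = 1.0 / (v + tau2)         # random-effects weights
z_re = np.sum(w_star * z) / np.sum(w_star)
r_pooled = np.tanh(z_re)          # back-transform to a pooled correlation
print(round(r_pooled, 3))
```

With tau2 > 0 the random-effects weights are more nearly equal than the fixed-effect weights, which is why a random-effects pooled r is less dominated by the largest studies.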
Gan, Wei; Walters, Robin G; Holmes, Michael V; Bragg, Fiona; Millwood, Iona Y; Banasik, Karina; Chen, Yiping; Du, Huaidong; Iona, Andri; Mahajan, Anubha; Yang, Ling; Bian, Zheng; Guo, Yu; Clarke, Robert J; Li, Liming; McCarthy, Mark I; Chen, Zhengming
2016-07-01
Genome-wide association studies (GWAS) have discovered many risk variants for type 2 diabetes. However, estimates of the contributions of risk variants to type 2 diabetes predisposition are often based on highly selected case-control samples, and reliable estimates of population-level effect sizes are missing, especially in non-European populations. The individual and cumulative effects of 59 established type 2 diabetes risk loci were measured in a population-based China Kadoorie Biobank (CKB) study of 93,000 Chinese adults, including >7,100 diabetes cases. Association signals were directionally consistent between CKB and the original discovery GWAS: of 56 variants passing quality control, 48 showed the same direction of effect (binomial test, p = 2.3 × 10^-8). We observed a consistent overall trend towards lower risk variant effect sizes in CKB than in case-control samples of GWAS meta-analyses (mean 19-22% decrease in log odds, p ≤ 0.0048), likely to reflect correction of both 'winner's curse' and spectrum bias effects. The association with risk of diabetes of a genetic risk score, based on lead variants at 25 loci considered to act through beta cell function, demonstrated significant interactions with several measures of adiposity (BMI, waist circumference [WC], WHR and percentage body fat [PBF]; all p for interaction < 1 × 10^-4), with a greater effect being observed in leaner adults. Our study provides further evidence of shared genetic architecture for type 2 diabetes between Europeans and East Asians. It also indicates that even very large GWAS meta-analyses may be vulnerable to substantial inflation of effect size estimates, compared with those observed in large-scale population-based cohort studies. Details of how to access China Kadoorie Biobank data and details of the data release schedule are available from www.ckbiobank.org/site/Data+Access.
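A genetic risk score of the kind described, a weighted sum of risk-allele counts over lead variants, reduces to a single matrix product. The genotypes and per-allele log odds ratios below are invented for illustration; they are not the CKB estimates.

```python
import numpy as np

# rows: individuals; columns: risk-allele counts (0/1/2) at each lead variant
genotypes = np.array([[0, 1, 2, 1],
                      [2, 2, 0, 1],
                      [1, 0, 1, 0]])

# per-allele log odds ratios from a discovery GWAS (hypothetical values)
log_or = np.array([0.10, 0.08, 0.15, 0.05])

# weighted genetic risk score: sum over variants of (allele count x log OR)
grs = genotypes @ log_or
print(grs)
```

An unweighted score is the special case where every weight is 1, i.e. a plain count of risk alleles; weighting by discovery-sample log odds ratios is exactly where winner's-curse inflation of the weights can enter.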
Rakhshan, Hamid
2016-01-01
Background and purpose: Dental aplasia (or hypodontia) is a frequent and challenging anomaly and thus of interest to many dental fields. Although the number of missing teeth (NMT) in each person is a major clinical determinant of treatment need, there is no meta-analysis on this subject. Therefore, we aimed to investigate the relevant literature, including epidemiological studies and research on dental/orthodontic patients. Methods: Among 50 reports, the effects of ethnicities, regions, sample sizes/types, subjects’ minimum ages, journals’ scientific credit, publication year, and gender composition of samples on the number of missing permanent teeth (except the third molars) per person were statistically analysed (α = 0.05, 0.025, 0.01). Limitations: The inclusion of small studies and second-hand information might reduce the reliability. Nevertheless, these strategies increased the meta-sample size and favoured generalisability. Moreover, data weighting was carried out to account for the effect of study sizes/precisions. Results: The NMT per affected person was 1.675 [95% confidence interval (CI) = 1.621–1.728], 1.987 (95% CI = 1.949–2.024), and 1.893 (95% CI = 1.864–1.923) in randomly selected subjects, dental/orthodontic patients, and both groups combined, respectively. The effects of ethnicities (P > 0.9), continents (P > 0.3), and time (adjusting for the population type, P = 0.7) were not significant. Dental/orthodontic patients exhibited a significantly greater NMT compared to randomly selected subjects (P < 0.012). Larger samples (P = 0.000) and enrolling younger individuals (P = 0.000) might inflate the observed NMT per person. Conclusions: Time, ethnic backgrounds, and continents seem unlikely to be influencing factors. Subjects younger than 13 years should be excluded. Larger samples should be investigated by more observers. PMID:25840586
Monk, Timothy H; Buysse, Daniel J; Billy, Bart D; Fletcher, Mary E; Kennedy, Kathy S; Schlarb, Janet E; Beach, Scott R
2011-02-01
Using telephone interview data from retired seniors to explore how inter-individual differences in circadian type (morningness) and bed-timing regularity might be related to subjective sleep quality and quantity. MANCOVA with binary measures of morningness, stability of bedtimes, and stability of rise-times as independent variables; sleep measures as dependent variables; age, former shift work, and gender as covariates. Telephone interviews using a pseudo-random age-targeted sampling process. 654 retired seniors (65 y+, 363M, 291F). none. (1) circadian type (from Composite Scale of Morningness [CSM]), and stability of (2) bedtime and (3) rise-time from the Sleep Timing Questionnaire (STQ). Pittsburgh Sleep Quality Index (PSQI) score, time in bed, time spent asleep, and sleep efficiency, from Sleep Timing Questionnaire (STQ). Morning-type orientation, stability in bedtimes, and stability in rise-times were all associated with better sleep quality (P < 0.001, for all; effect sizes: 0.43, 0.33, and 0.27). Morningness was associated with shorter time in bed (P < 0.0001, effect size 0.45) and time spent asleep (P < 0.005, effect size 0.26). For bedtime and rise-time stability the direction of effect was similar but mostly weaker. In retired seniors, a morning-type orientation and regularity in bedtimes and rise-times appear to be correlated with improved subjective sleep quality and with less time spent in bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, R.A.
1995-05-01
In-stream habitats were quantified and qualified for nine stream channel-types. The channel-types were identified using interpretations from stereo pairs of color and infrared aerial photographs. A total of 70 sites were sampled for streams located on the northwest portion of the Kenai Peninsula, in south-central Alaska. Channel-types were a significant predictor (P < 0.05) of the area (sq m) for 9 of 13 habitat types. Channel-types that had similar habitat composition differed in the size and depth of those habitats. Spawning habitat also appeared to be correlated to channel-type; however, the within channel-type variability caused the differences to test non-significant (P < 0.05).
NASA Astrophysics Data System (ADS)
Amerioun, M. H.; Ghazi, M. E.; Izadifard, M.
2018-03-01
In this work, first the CuInS2 (CIS2) layers are deposited on aluminum and polyethylene terephthalate (PET) as flexible substrates, and on glass and soda lime glass (SLG) as rigid substrates, by the sol-gel method. Then the samples are analyzed by x-ray diffractometry (XRD) and atomic force microscopy (AFM) to investigate their crystal structures and surface roughness. I-V curve measurements and a Seebeck effect setup are used to measure the electrical properties of the samples. The XRD data obtained for the CIS2 layers show that all the prepared samples have a single phase with a preferred orientation that is substrate-dependent. The samples grown on the rigid substrates had larger crystallite sizes. The results obtained for the optical measurements indicate the dependence of the band gap energy on the substrate type. The measured Seebeck coefficient showed that the carriers were of p-type in all the samples. According to the AFM images, the surface roughness also varied in the CIS2 layers with different substrates. In this regard, the type of substrate could be an important parameter for the final performance of the fabricated CIS2 cells.
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the cost associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.
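The "equivalent protection" comparison underlying such alternative plans can be illustrated by computing each plan's probability of acceptance as a function of the true per-item positive fraction. The two plans shown (n = 100, c = 2 versus n = 55, c = 1) are hypothetical examples, not the ISO 11137 tables.

```python
from math import comb

def accept_prob(n, c, p):
    # binomial probability of observing at most c positives among n items
    # when each item is positive independently with probability p
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# compare the operating characteristics of two candidate plans
for p in (0.01, 0.05, 0.10):
    print(p, round(accept_prob(100, 2, p), 3), round(accept_prob(55, 1, p), 3))
```

Two plans provide equivalent protection when their acceptance curves are close over the range of positive fractions that matters; a smaller-sample plan with a tighter acceptance number can trace nearly the same curve at a lower testing cost.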
Expectations and Support for Scholarly Activity in Schools of Business.
ERIC Educational Resources Information Center
Bohrer, Paul; Dolphin, Robert, Jr.
1985-01-01
Addresses issues relating to scholarship productivity and examines these issues with consideration given to the size and the accreditation status of the business schools sampled. First, how important is scholarly activity within an institution for a variety of personnel decisions? Second, what is the relative importance of various types of…
An Employer Needs Assessment for Vocational Education: Palomar Community College District.
ERIC Educational Resources Information Center
Muraski, Ed J.; Barker, Cherie
A study was conducted to determine the employment needs within the Palomar Community College District. Surveys were mailed to a stratified random sample of 600 North San Diego County employers, requesting respondents to provide information on type and size of business, to rank the occupational programs offered by Palomar according to employment…
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
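The core of such a Monte Carlo study, estimating an empirical Type I error rate under the null, can be sketched as below. A simple two-sample z-test on simulated normal data stands in for the DIF procedures the paper studies; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, n_rep, n = 0.05, 4000, 50
z_crit = 1.959964                      # two-sided 5% critical value

rejections = 0
for _ in range(n_rep):
    a = rng.normal(size=n)             # both groups drawn from the same null
    b = rng.normal(size=n)             # distribution, so H0 is true by design
    z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)
    rejections += abs(z) > z_crit

rate = rejections / n_rep
print(rate)                            # should hover near the nominal .05
```

A procedure whose empirical rate drifts well above the nominal alpha under some simulation condition (sample size, group-size ratio, etc.) is exactly what such studies are designed to flag.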
Reference data set of volcanic ash physicochemical and optical properties
NASA Astrophysics Data System (ADS)
Vogel, A.; Diplas, S.; Durant, A. J.; Azar, A. S.; Sunding, M. F.; Rose, W. I.; Sytchkova, A.; Bonadonna, C.; Krüger, K.; Stohl, A.
2017-09-01
Uncertainty in the physicochemical and optical properties of volcanic ash particles creates errors in the detection and modeling of volcanic ash clouds and in quantification of their potential impacts. In this study, we provide a data set that describes the physicochemical and optical properties of a representative selection of volcanic ash samples from nine different volcanic eruptions covering a wide range of silica contents (50-80 wt % SiO2). We measured and calculated parameters describing the physical (size distribution, complex shape, and dense-rock equivalent mass density), chemical (bulk and surface composition), and optical (complex refractive index from ultraviolet to near-infrared wavelengths) properties of the volcanic ash and classified the samples according to their SiO2 and total alkali contents into the common igneous rock types basalt to rhyolite. We found that the mass density ranges between
Ahn, WonSool; Lee, Joon-Man
2015-11-01
The effects of MWCNT on the cell sizes, cell uniformities, thermal conductivities, bulk densities, foaming kinetics, and compressive mechanical properties of rigid PUFs were investigated. To obtain a more uniformly dispersed state of MWCNT, a grease-type masterbatch of MWCNT/surfactant was prepared on a three-roll mill. The average cell size of the PUF samples decreased from 185.1 μm for the neat PUF to 162.9 μm for the sample with 0.01 phr of MWCNT. Cell uniformity was also enhanced, with the standard cell-size deviation falling from 61.7 to 35.2. While the thermal conductivity of the neat PUF was 0.0222 W/m·K, that of the sample with 0.01 phr of MWCNT was 0.0204 W/m·K, an 8.2% reduction. The bulk density of the PUF samples was nearly constant at 30.0 ± 1.0 g/cm3 regardless of MWCNT. Temperature profiles during the foaming process provided an indirect indication of the nucleation effect of MWCNT in the PUF foaming system, showing a faster and higher temperature rise with time. The compressive yield stress was nearly the same, at 0.030 × 10^5 Pa, regardless of MWCNT.
NASA Astrophysics Data System (ADS)
Cyprych, Daria; Piazolo, Sandra; Wilson, Christopher J. L.; Luzin, Vladimir; Prior, David J.
2016-09-01
We utilize in situ neutron diffraction to continuously track the average grain size and crystal preferred orientation (CPO) development in ice, during uniaxial compression of two-phase and pure ice samples. Two-phase samples are composed of ice matrix and 20 vol.% of second phases of two types: (1) rheologically soft, platy graphite, and (2) rigid, rhomb-shaped calcite. The samples were tested at 10 °C below the ice melting point, ambient pressures, and two strain rates (1 × 10^-5 and 2.5 × 10^-6 s^-1), to 10 and 20% strain. The final CPO in the ice matrix, where second phases are present, is significantly weaker, and ice grain size is smaller than in an ice-only sample. The microstructural and rheological data point to dislocation creep as the dominant deformation regime. The evolution and final strength of the CPO in ice depend on the efficiency of the recrystallization processes, namely grain boundary migration and nucleation. These processes are markedly influenced by the strength, shape, and grain size of the second phase. In addition, CPO development in ice is further accentuated by strain partitioning into the soft second phase, and the transfer of stress onto the rigid second phase.
Kellar, Nicholas M.; Catelani, Krista N.; Robbins, Michelle N.; Trego, Marisa L.; Allen, Camryn D.; Danil, Kerri; Chivers, Susan J.
2015-01-01
When paired with dart biopsying, quantifying cortisol in blubber tissue may provide an index of relative stress levels (i.e., activation of the hypothalamus-pituitary-adrenal axis) in free-ranging cetacean populations while minimizing the effects of the act of sampling. To validate this approach, cortisol was extracted from blubber samples collected from beach-stranded and bycaught short-beaked common dolphins using a modified blubber steroid isolation technique and measured via commercially available enzyme immunoassays. The measurements exhibited appropriate quality characteristics when analyzed via a bootstrapped stepwise parallelism analysis (observed/expected = 1.03, 95% CI: 99.6–1.08) and showed no evidence of matrix interference with increasing sample size across typical biopsy tissue masses (75–150 mg; r2 = 0.012, p = 0.78, slope = 0.022 ng cortisol deviation/µl tissue extract added). The relationships between blubber cortisol and eight potential cofactors, namely 1) fatality type (e.g., stranded or bycaught), 2) specimen condition (state of decomposition), 3) total body length, 4) sex, 5) sexual maturity state, 6) pregnancy status, 7) lactation state, and 8) adrenal mass, were assessed using a Bayesian generalized linear model averaging technique. Fatality type was the only factor correlated with blubber cortisol, and the magnitude of the effect size was substantial: beach-stranded individuals had on average 6.1-fold higher cortisol levels than those of bycaught individuals. Because of the difference in conditions surrounding these two fatality types, we interpret this relationship as evidence that blubber cortisol is indicative of stress response. We found no evidence of seasonal variation or a relationship between cortisol and the remaining cofactors. PMID:25643144
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). 
For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
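In the same spirit as the authors' open-source R model (rewritten here in Python, with invented parameters), one can simulate how sample size controls the scatter of a sample-mean δ18O when a fraction of specimens carries a diagenetic offset; the fraction, offset, and noise values below are illustrative assumptions, not the model's defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mean_d18o(n, frac_altered=0.1, offset=0.3, noise=0.2, n_rep=5000):
    # repeatedly draw n specimens; a random ~frac_altered of them carry a
    # diagenetic d18O offset (per mil); return mean and spread of sample means
    means = []
    for _ in range(n_rep):
        altered = rng.random(n) < frac_altered        # ~1 in 10 specimens
        vals = rng.normal(0.0, noise, n) + altered * offset
        means.append(vals.mean())
    means = np.asarray(means)
    return means.mean(), means.std()

for n in (5, 30, 100):
    m, s = sample_mean_d18o(n)
    print(n, round(m, 3), round(s, 3))
```

The mean bias (here frac_altered × offset = 0.03‰ in expectation) does not shrink with sample size, but the specimen-to-specimen scatter of the sample mean does, which is the error component that reporting should quantify.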
Salmonella Typhimurium DT193 and DT99 are present in great and blue tits in Flanders, Belgium
Verbrugghe, E.; Dekeukeleire, D.; De Beelde, R.; Rouffaer, L. O.; Haesendonck, R.; Strubbe, D.; Mattheus, W.; Bertrand, S.; Pasmans, F.; Bonte, D.; Verheyen, K.; Lens, L.; Martel, A.
2017-01-01
Endemic infections with the common avian pathogen Salmonella enterica subspecies enterica serovar Typhimurium (Salmonella Typhimurium) may incur a significant cost on the host population. In this study, we determined the potential of endemic Salmonella infections to reduce the reproductive success of blue (Cyanistes caeruleus) and great (Parus major) tits by correlating eggshell infection with reproductive parameters. The fifth egg of each clutch was collected from nest boxes in 19 deciduous forest fragments. Out of the 101 sampled eggs, 7 Salmonella Typhimurium isolates were recovered. The low bacterial prevalence was reflected by a similarly low serological prevalence in the fledglings. In this study with a relatively small sample size, presence of Salmonella did not affect reproductive parameters (egg volume, clutch size, number of nestlings and number of fledglings), nor the health status of the fledglings. However, in order to clarify the impact on health and reproduction a larger number of samples have to be analyzed. Phage typing showed that the isolates belonged to the definitive phage types (DT) 193 and 99, and multi-locus variable number tandem repeat analysis (MLVA) demonstrated a high similarity among the tit isolates, but distinction to human isolates. These findings suggest the presence of passerine-adapted Salmonella strains in free-ranging tit populations with host pathogen co-existence. PMID:29112955
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawellek, Nicole; Krivov, Alexander V.; Marshall, Jonathan P.
The radii of debris disks and the sizes of their dust grains are important tracers of the planetesimal formation mechanisms and physical processes operating in these systems. Here we use a representative sample of 34 debris disks resolved in various Herschel Space Observatory (Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA) programs to constrain the disk radii and the size distribution of their dust. While we modeled disks with both warm and cold components, and identified warm inner disks around about two-thirds of the stars, we focus our analysis only on the cold outer disks, i.e., Kuiper-belt analogs. We derive the disk radii from the resolved images and find a large dispersion for host stars of any spectral class, but no significant trend with the stellar luminosity. This argues against ice lines as a dominant player in setting the debris disk sizes, since the ice line location varies with the luminosity of the central star. Fixing the disk radii to those inferred from the resolved images, we model the spectral energy distribution to determine the dust temperature and the grain size distribution for each target. While the dust temperature systematically increases toward earlier spectral types, the ratio of the dust temperature to the blackbody temperature at the disk radius decreases with the stellar luminosity. This is explained by a clear trend of typical sizes increasing toward more luminous stars. The typical grain sizes are compared to the radiation pressure blowout limit s_blow that is proportional to the stellar luminosity-to-mass ratio and thus also increases toward earlier spectral classes. The grain sizes in the disks of G- to A-stars are inferred to be several times s_blow at all stellar luminosities, in agreement with collisional models of debris disks.
The sizes, measured in units of s_blow, appear to decrease with the luminosity, which may be suggestive of the disk's stirring level increasing toward earlier-type stars. The dust opacity index β ranges between zero and two, and the size distribution index q varies between three and five for all the disks in the sample.
Karlicic, Vukoica; Vukovic, Jelena; Stanojevic, Ivan; Sotirovic, Jelena; Peric, Aleksandar; Jovic, Milena; Cvijanovic, Vlado; Djukic, Mirjana; Banovic, Tatjana; Vojvodic, Danilo
2016-01-01
Advanced lung carcinoma is characterized by fast disease progression. Interleukin (IL)-10 and transforming growth factor (TGF)-β1 are immunosuppressive mediators, and their role in lung carcinoma pathogenesis and in the antitumor response has not yet been elucidated. The purpose of this study was to correlate IL-10 and TGF-β1 levels in the serum and lung tumor microcirculation with clinical stage, disease extent, histological features and TNM stage. The study included 41 lung cancer patients in clinical stages III and IV. Histological type was determined immunohistochemically, while tumor size, localization and dissemination were determined radiologically by multislice computerized tomography (MSCT). IL-10 and TGF-β1 levels were quantified with a commercial flow cytometric test in serum and lung tumor microcirculation samples. Non-small cell lung cancer (NSCLC) patients had significantly elevated TGF-β1, while small cell lung cancer (SCLC) patients had significantly increased IL-10 in the tumor microcirculation. IL-10 was significantly elevated in patients with the largest tumors, as well as in patients in clinical stage III and without metastases, both in the serum and the tumor microcirculation. TGF-β1 was significantly increased in the serum and tumor microcirculation of patients with larger tumors. We found a significant correlation between these two immunosuppressive cytokines, IL-10 and TGF-β1, in the tumor microcirculation but not in patient serum samples. IL-10 and TGF-β1 in the systemic and tumor microcirculation are significantly associated with the particular histological type of lung cancer, tumor size and degree of disease extent.
Task-based exposure assessment of nanoparticles in the workplace
NASA Astrophysics Data System (ADS)
Ham, Seunghon; Yoon, Chungsik; Lee, Euiseung; Lee, Kiyoung; Park, Donguk; Chung, Eunkyo; Kim, Pilje; Lee, Byoungcheun
2012-09-01
Although task-based sampling is, theoretically, a plausible approach to the assessment of nanoparticle exposure, few studies using this type of sampling have been published. This study characterized and compared task-based nanoparticle exposure profiles for engineered nanoparticle manufacturing workplaces (ENMW) and workplaces that generated welding fumes containing incidental nanoparticles. Two ENMW and two welding workplaces were selected for exposure assessments. Real-time devices were utilized to characterize the concentration profiles and size distributions of airborne nanoparticles. Filter-based sampling was performed to measure time-weighted average (TWA) concentrations, and off-line analysis was performed using an electron microscope. Workplace tasks were recorded by researchers to determine the concentration profiles associated with particular tasks/events. This study demonstrated that exposure profiles differ greatly in terms of concentrations and size distributions according to the task performed. The size distributions recorded during tasks differed both from those recorded during periods with no activity and from the background. The airborne concentration profiles of the nanoparticles varied according to not only the type of workplace but also the concentration metric. The surface area concentrations, the number concentrations measured by condensation particle counter, particulate matter 1.0, and TWA mass concentrations all showed a similar pattern, whereas the number concentrations measured by scanning mobility particle sizer indicated that the welding fume concentrations at one of the welding workplaces were unexpectedly higher than those at the workplaces manufacturing engineered nanoparticles. This study suggests that a task-based exposure assessment can provide useful information regarding the exposure profiles of nanoparticles and can therefore be used as an exposure assessment tool.
Price promotions for food and beverage products in a nationwide sample of food stores.
Powell, Lisa M; Kumanyika, Shiriki K; Isgor, Zeynep; Rimkus, Leah; Zenk, Shannon N; Chaloupka, Frank J
2016-05-01
Food and beverage price promotions may be potential targets for public health initiatives but have not been well documented. We assessed prevalence and patterns of price promotions for food and beverage products in a nationwide sample of food stores by store type, product package size, and product healthfulness. We also assessed associations of price promotions with community characteristics and product prices. In-store data collected in 2010-2012 from 8959 food stores in 468 communities spanning 46 U.S. states were used. Differences in the prevalence of price promotions were tested across stores types, product varieties, and product package sizes. Multivariable regression analyses examined associations of presence of price promotions with community racial/ethnic and socioeconomic characteristics and with product prices. The prevalence of price promotions across all 44 products sampled was, on average, 13.4% in supermarkets (ranging from 9.1% for fresh fruits and vegetables to 18.2% for sugar-sweetened beverages), 4.5% in grocery stores (ranging from 2.5% for milk to 6.6% for breads and cereals), and 2.6% in limited service stores (ranging from 1.2% for fresh fruits and vegetables to 4.1% for breads and cereals). No differences were observed by community characteristics. Less-healthy versus more-healthy product varieties and larger versus smaller product package sizes generally had a higher prevalence of price promotion, particularly in supermarkets. On average, in supermarkets, price promotions were associated with 15.2% lower prices. The observed patterns of price promotions warrant more attention in public health food environment research and intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
Cuss, C W; Guéguen, C
2013-09-01
Dissolved organic matter (DOM) was leached from eight distinct samples of leaves taken from six distinct trees (red maple, bur oak at three times of the year, two sugar maple and two white spruce trees from disparate soil types). Multiple samples were taken over 72–96 h of leaching. The size and optical properties of leachates were assessed using asymmetrical flow field-flow fractionation (AF4) coupled to diode-array ultraviolet/visible absorbance and excitation-emission matrix fluorescence detectors (EEM). The fluorescence of unfractionated samples was also analyzed. EEMs were analyzed using parallel factor analysis (PARAFAC) and principal component analysis (PCA) of proportional component loadings. Both the unfractionated and AF4-fractionated leachates had distinct size and optical properties. The 95% confidence ranges for molecular weight distributions were determined as: 210–440 Da for spruce, 540–920 Da for sugar maple, 630–800 Da for spring oak leaves, 930–950 Da for senescent oak, 1490–1670 Da for senescent red maple, and 3430–4270 Da for oak leaves that were collected from the ground after spring thaw. In most cases the fluorescence properties of leachates were different for individuals from different soil types and across seasons; however, PCA of PARAFAC loadings revealed that the observed distinctiveness was chiefly species-based. Strong correlations were found between the molecular weight distribution of both unfractionated and fractionated leachates and their principal component loadings (R2 = 0.85 and 0.95, respectively). It is concluded that the results support a species-based origin for differences in optical properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
[Analysis of the patient safety culture in hospitals of the Spanish National Health System].
Saturno, Pedro J; Da Silva Gama, Zenewton A; de Oliveira-Sousa, Silvana L; Fonseca, Yadira A; de Souza-Oliveira, Adriana C; Castillo, Carmen; López, M José; Ramón, Teresa; Carrillo, Andrés; Iranzo, M Dolores; Soria, Victor; Parra, Pedro; Gomis, Rafael; Gascón, Juan José; Martinez, José; Arellano, Carmen; Ferreira, Marta Sobral
2008-12-01
A safety culture is essential to minimize errors and adverse events. Its measurement is needed to design activities in order to improve it. This paper describes the methods and main results of a study on safety climate in a nation-wide representative sample of public hospitals of the Spanish NHS. The Hospital Survey on Patient Safety Culture questionnaire was distributed to a random sample of health professionals in a representative sample of 24 hospitals, proportionally stratified by hospital size. Results are analyzed to provide a description of safety climate, its strengths and weaknesses. Differences by hospital size, type of health professional and service are analyzed using ANOVA. A total of 2503 responses are analyzed (response rate: 40%; 93% from professionals with direct patient contact). A total of 50% gave patient safety a score from 6 to 8 (on a 10-point scale); 95% reported < 2 events last year. Dimensions "Teamwork within hospital units" (71.8 [1.8]) and "Supervisor/Manager expectations and actions promoting safety" (61.8 [1.7]) have the highest percentage of positive answers. "Staffing", "Teamwork across hospital units", "Overall perceptions of safety" and "Hospital management support for patient safety" could be identified as weaknesses. Significant differences by hospital size, type of professional and service suggest a generally more positive attitude in small hospitals and Pharmacy services, and a more negative one in physicians. Strengths and weaknesses of the safety climate in the hospitals of the Spanish NHS have been identified and they are used to design appropriate strategies for improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra; Day, Megan; Doris, Elizabeth
The report analyzes and presents information learned from a sample of 20 cities across the United States, from New York City to Park City, Utah, including a diverse sample of population size, utility type, region, annual greenhouse gas reduction targets, vehicle use, and median household income. The report compares climate, sustainability, and energy plans to better understand where cities are taking energy-related actions and how they are measuring impacts. Some common energy-related goals focus on reducing city-wide carbon emissions, improving energy efficiency across sectors, increasing renewable energy, and increasing biking and walking.
NASA Astrophysics Data System (ADS)
May, J. C.; Rey, L.; Lee, Chi-Jen
2002-03-01
Molecular sizing and potency results are presented for irradiated samples of one lot of Haemophilus b conjugate vaccine, pneumococcal polysaccharide type 6B and typhoid Vi polysaccharide vaccine. The samples were irradiated (25 kGy) by gamma rays, electron beams and X-rays. Results of IgG and IgM antibody response tests in mice (ELISA) are given for the Hib conjugate vaccine irradiated at 0°C or frozen in liquid nitrogen.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
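The efficiency gain from interim stopping that this abstract describes can be illustrated with a small Monte Carlo sketch. This is not the set of designs evaluated in the paper; the function names are ours, and the single-look Pocock-style boundary of 2.178 (the classical two-look value for a two-sided 5% level) is an illustrative assumption:

```python
import random
import statistics

def z_statistic(treated, control, sigma):
    """Two-sample z statistic assuming known common sigma and equal arm sizes."""
    n = len(treated)
    diff = statistics.fmean(treated) - statistics.fmean(control)
    return diff / (sigma * (2.0 / n) ** 0.5)

def simulate_two_stage(delta, sigma=1.0, n_per_stage=50, z_bound=2.178,
                       n_sims=2000, seed=42):
    """Two-stage design with one interim look and the same Pocock-style
    critical value at both looks.  Returns the empirical rejection rate
    (power under delta, type I error under delta = 0) and the expected
    sample size per arm."""
    rng = random.Random(seed)
    rejections = 0
    total_n = 0
    for _ in range(n_sims):
        t = [rng.gauss(delta, sigma) for _ in range(n_per_stage)]
        c = [rng.gauss(0.0, sigma) for _ in range(n_per_stage)]
        if abs(z_statistic(t, c, sigma)) > z_bound:   # stop early for efficacy
            rejections += 1
            total_n += n_per_stage
            continue
        # Otherwise accumulate stage 2 data and test on the pooled sample
        t += [rng.gauss(delta, sigma) for _ in range(n_per_stage)]
        c += [rng.gauss(0.0, sigma) for _ in range(n_per_stage)]
        if abs(z_statistic(t, c, sigma)) > z_bound:   # final analysis
            rejections += 1
        total_n += 2 * n_per_stage
    return rejections / n_sims, total_n / n_sims
```

Under a true effect the expected sample size per arm falls well below the fixed-design maximum, which is the source of the sample size advantage the paper quantifies.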
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
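The two strategies contrasted here can be mimicked with an ordinary goodness-of-fit chi-square in place of the measurement-model fit statistics used in the study. Everything below is an illustrative assumption (the category probabilities, seed, and helper names), chosen only to echo the 21,000 and 5,000 sample sizes:

```python
import random
from collections import Counter

def gof_chi_square(sample, model_probs):
    """Pearson goodness-of-fit chi-square against hypothesized category
    probabilities; for a fixed misfit the statistic grows linearly with n."""
    n = len(sample)
    counts = Counter(sample)
    return sum((counts.get(cat, 0) - n * p) ** 2 / (n * p)
               for cat, p in model_probs.items())

rng = random.Random(1)
true_probs = {"A": 0.28, "B": 0.32, "C": 0.40}    # data-generating distribution
model_probs = {"A": 0.30, "B": 0.30, "C": 0.40}   # slightly misspecified model

full = rng.choices(list(true_probs), weights=list(true_probs.values()), k=21_000)
chi2_full = gof_chi_square(full, model_probs)

# Strategy 1: rescale the full-sample statistic to a nominal n of 5,000
chi2_adjusted = chi2_full * 5_000 / 21_000

# Strategy 2: recompute the statistic on an actual random subsample of 5,000
chi2_subsample = gof_chi_square(rng.sample(full, 5_000), model_probs)
```

Because the statistic is proportional to n for a given degree of misfit, the rescaled value and the subsample value are comparable at this reduction, while the subsample value carries the extra sampling noise that the adjustment strategy avoids.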
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khodaei, Azin, E-mail: Azin.Khodaei@gmail.com; Hasannasab, Malihe; Amousoltani, Narges
2016-02-15
Highlights: • Ni ultrafine/nanoparticles were produced using the single-step ELGC method. • Ar and He–20%Ar gas mixtures were used as the condensing gas under 1 atm. • Effects of gas type and flow rate on particle size distribution were investigated. • The nanoparticles showed both high saturation magnetization and low coercivity. - Abstract: In this work, Ni ultrafine/nanoparticles were directly produced using the one-step, relatively large-scale electromagnetic levitational gas condensation method. In this process, Ni vapors ascending from the levitated droplet were condensed by Ar and He–20%Ar gas mixtures under atmospheric pressure. Effects of type and flow rate of the condensing gas on the size, size distribution and crystallinity of Ni particles were investigated. The particles were characterized by scanning electron microscopy, X-ray diffraction and vibrating sample magnetometer (VSM). The process parameters for the synthesis of the crystalline Ni ultrafine/nanoparticles were determined.
Melvin, Elizabeth M.; Moore, Brandon R.; Gilchrist, Kristin H.; Grego, Sonia; Velev, Orlin D.
2011-01-01
The recent development of microfluidic “lab on a chip” devices requiring sample sizes <100 μL has given rise to the need to concentrate dilute samples and trap analytes, especially for surface-based detection techniques. We demonstrate a particle collection device capable of concentrating micron-sized particles in a predetermined area by combining AC electroosmosis (ACEO) and dielectrophoresis (DEP). The planar asymmetric electrode pattern uses ACEO pumping to induce equal, quadrilateral flow directed towards a stagnant region in the center of the device. A number of system parameters affecting particle collection efficiency were investigated including electrode and gap width, chamber height, applied potential and frequency, and number of repeating electrode pairs and electrode geometry. The robustness of the on-chip collection design was evaluated against varying electrolyte concentrations, particle types, and particle sizes. These devices are amenable to integration with a variety of detection techniques such as optical evanescent waveguide sensing. PMID:22662040
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, B.B.; Ripp, J.; Sims, R.C.
The Electric Power Research Institute (EPRI) is studying the environmental impact of preservatives associated with in-service utility poles. As part of this endeavor, two EPRI contractors, META Environmental, Inc. (META) and Atlantic Environmental Services, Inc. (Atlantic), have collected soil samples from around wood utility poles nationwide, for various chemical and physical analyses. This report covers the results for 107 pole sites in the US. These pole sites included a range of preservative types, soil types, wood types, pole sizes, and in-service ages. The poles in this study were preserved with one of two types of preservative: pentachlorophenol (PCP) or creosote. Approximately 40 to 50 soil samples were collected from each wood pole site in this study. The soil samples collected from the pole sites were analyzed for chlorinated phenols and total petroleum hydrocarbons (TPH) if the pole was preserved with PCP, or for polycyclic aromatic hydrocarbons (PAHs) if the pole was preserved with creosote. The soil samples were also analyzed for physical/chemical parameters, such as pH, total organic carbon (TOC), and cation exchange capacity (CEC). Additional samples were used in studies to determine biological degradation rates, and soil-water distribution and retardation coefficients of PCP in site soils. Methods of analysis followed standard EPA and ASTM methods, with some modifications in the chemical analyses to enable the efficient processing of many samples with sufficiently low detection limits for this study. All chemical, physical, and site-specific data were stored in a relational computer database.
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
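The attributes (binomial) plans this abstract compares against can be sketched by direct search: find the smallest sample size n and acceptance number c that hold the producer's risk at an acceptable quality level p1 and the consumer's risk at a limiting quality p2. A minimal sketch under standard risk definitions, not the specific plans tested in the paper (the function names are ours):

```python
from math import comb

def accept_prob(n, c, p):
    """Probability of accepting the lot: at most c defectives in a random
    sample of n, with per-item defect probability p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def binomial_plan(p1, p2, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest-n attributes plan (n, c) whose producer's risk at acceptable
    quality level p1 is at most alpha (Type I) and whose consumer's risk at
    limiting quality p2 is at most beta (Type II)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if (accept_prob(n, c, p1) >= 1 - alpha
                    and accept_prob(n, c, p2) <= beta):
                return n, c
    raise ValueError("no plan found with n <= n_max")
```

For example, `binomial_plan(0.01, 0.05)` returns a plan whose operating-characteristic curve passes above 0.95 at 1% defective and below 0.10 at 5% defective; variables plans typically achieve the same protection with fewer items, which is the comparison the paper tests.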
Characterizing Particle Size Distributions of Crystalline Silica in Gold Mine Dust
Chubb, Lauren G.; Cauda, Emanuele G.
2017-01-01
Dust containing crystalline silica is common in mining environments in the U.S. and around the world. The exposure to respirable crystalline silica remains an important occupational issue and it can lead to the development of silicosis and other respiratory diseases. Little has been done with regard to the characterization of the crystalline silica content of specific particle sizes of mine-generated dust. Such characterization could improve monitoring techniques and control technologies for crystalline silica, decreasing worker exposure to silica and preventing future incidence of silicosis. Three gold mine dust samples were aerosolized in a laboratory chamber. Particle size-specific samples were collected for gravimetric analysis and for quantification of silica using the Microorifice Uniform Deposit Impactor (MOUDI). Dust size distributions were characterized via aerodynamic and scanning mobility particle sizers (APS, SMPS) and gravimetrically via the MOUDI. Silica size distributions were constructed using gravimetric data from the MOUDI and proportional silica content corresponding to each size range of particles collected by the MOUDI, as determined via X-ray diffraction and infrared spectroscopic quantification of silica. Results indicate that silica does not comprise a uniform proportion of total dust across all particle sizes and that the size distributions of a given dust and its silica component are similar but not equivalent. Additional research characterizing the silica content of dusts from a variety of mine types and other occupational environments is necessary in order to ascertain trends that could be beneficial in developing better monitoring and control strategies. PMID:28217139
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
The enigmatic molar from Gondolin, South Africa: implications for Paranthropus paleobiology.
Grine, Frederick E; Jacobs, Rachel L; Reed, Kaye E; Plavcan, J Michael
2012-10-01
The specific attribution of the large hominin M(2) (GDA-2) from Gondolin has significant implications for the paleobiology of Paranthropus. If it is a specimen of Paranthropus robustus it impacts that species' size range, and if it belongs to Paranthropus boisei it has important biogeographic implications. We evaluate crown size, cusp proportions and the likelihood of encountering a large-bodied mammal species in both East and South Africa in the Early Pleistocene. The tooth falls well outside the P. robustus sample range, and comfortably within that for penecontemporaneous P. boisei. Analyses of sample range, distribution and variability suggest that it is possible, albeit unlikely, to find an M(2) of this size in the current P. robustus sample. However, taphonomic agents - carnivore (particularly leopard) feeding behaviors - have likely skewed the size distribution of the Swartkrans and Drimolen P. robustus assemblages. In particular, assemblages of large-bodied mammals accumulated by leopards typically display high proportions of juveniles and smaller adults. The skew in the P. robustus sample is consistent with this type of assemblage. Morphological evidence in the form of cusp proportions is congruent with GDA-2 representing P. robustus rather than P. boisei. The comparatively small number of large-bodied mammal species common to both South and East Africa in the Early Pleistocene suggests a low probability of encountering an herbivorous australopith in both. Our results are most consistent with the interpretation of the Gondolin molar as a very large specimen of P. robustus. This, in turn, suggests that large, presumptive male, specimens are rare, and that the levels of size variation (sexual dimorphism) previously ascribed to this species are likely to be gross underestimates. Copyright © 2012 Elsevier Ltd. All rights reserved.
Queen, Robin M; Franck, Christopher T; Schmitt, Daniel; Adams, Samuel B
2017-10-01
Total ankle arthroplasty (TAA) is an alternative to arthrodesis, but no randomized trial has examined whether a fixed bearing or mobile bearing implant provides improved gait mechanics. We wished to determine if fixed- or mobile-bearing TAA results in a larger improvement in pain scores and gait mechanics from before surgery to 1 year after surgery, and to quantify differences in outcomes using statistical analysis and report the standardized effect sizes for such comparisons. Patients with end-stage ankle arthritis who were scheduled for TAA between November 2011 and June 2013 (n = 40; 16 men, 24 women; average age, 63 years; age range, 35-81 years) were prospectively recruited for this study from a single foot and ankle orthopaedic clinic. During this period, 185 patients underwent TAA, with 144 being eligible to participate in this study. Patients were eligible to participate if they met all study inclusion criteria: no previous diagnosis of rheumatoid arthritis, no contralateral TAA, no bilateral ankle arthritis, no previous revision TAA or ankle fusion revision, able to walk without the use of an assistive device, weight less than 250 pounds (114 kg), sagittal or coronal plane deformity less than 15°, no avascular necrosis of the distal tibia, no current neuropathy, age older than 35 years, no history of a talar neck fracture, and no avascular talus. Of the 144 eligible patients, 40 consented to participate in our randomized trial. These 40 patients were randomly assigned to either the fixed (n = 20) or mobile bearing implant group (n = 20). 
Walking speed, bilateral peak dorsiflexion angle, peak plantar flexion angle, sagittal plane ankle ROM, peak ankle inversion angle, peak plantar flexion moment, peak plantar flexion power during stance, peak weight acceptance, and propulsive vertical ground reaction force were analyzed during seven self-selected speed level walking trials for 33 participants using an eight-camera motion analysis system and four force plates. Seven patients were not included in the analysis owing to cancelled surgery (one from each group) and five were lost to follow-up (four with fixed bearing and one with mobile bearing implants). A series of effect-size calculations and two-sample t-tests comparing postoperative and preoperative increases in outcome variables between implant types were used to determine the differences in the magnitude of improvement between the two patient cohorts from before surgery to 1 year after surgery. The sample size in this study enabled us to detect a standardized shift of 1.01 SDs between group means with 80% power and a type I error rate of 5% for all outcome variables in the study. This randomized trial did not reveal any differences in outcomes between the two implant types under study at the sample size collected. In addition to these results, effect size analysis suggests that changes in outcome differ between implant types by less than 1 SD. Detection of the largest change score or observed effect (propulsive vertical ground reaction force [Fixed: 0.1 ± 0.1; 0.0-1.0; Mobile: 0.0 ± 0.1; 0.0-0.0; p = 0.051]) in this study would require a future trial to enroll 66 patients. However, the smallest change score or observed effect (walking speed [Fixed: 0.2 ± 0.3; 0.1-0.4; Mobile: 0.2 ± 0.3; 0.0-0.3; p = 0.742]) requires a sample size of 2336 to detect a significant difference with 80% power at the observed effect sizes. 
To our knowledge, this is the first randomized study to report the observed effect size comparing improvements in outcome measures between fixed and mobile bearing implant types. This study was statistically powered to detect large effects and descriptively analyze observed effect sizes. Based on our results there were no statistically or clinically meaningful differences between the fixed and mobile bearing implants when examining gait mechanics and pain 1 year after TAA. Level II, therapeutic study.
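The sample size projections above follow the standard two-sample logic laid out in the head note: sample size is driven by α, power, and the standardized effect size. A hedged sketch of the usual normal-approximation formula (the function name is ours; an exact t-based calculation would add a few patients per group):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size (difference / SD)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)
```

For instance, `n_per_group(0.5)` gives 63 per group at 80% power and α = 0.05; halving the effect size roughly quadruples the requirement, consistent with the general principle that detecting smaller differences demands much larger samples.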
Movements of northern flying squirrels in different-aged forest stands of western Oregon
Martin, K.J.; Anthony, R.G.
1999-01-01
In western Oregon, northern flying squirrels (Glaucomys sabrinus) are the primary prey species for northern spotted owls (Strix occidentalis caurina), an old-growth associated species. To assess differences between old-growth and second-growth habitat, we livetrapped and radiotagged 39 northern flying squirrels to estimate their home range sizes and describe movements in 2 old-growth and 2 second-growth conifer forest stands in the Cascade Mountains of central Oregon. Sampling periods were summer and fall of 1991-92. Home range sizes averaged 4.9 ha and did not differ (P > 0.30) between the 2 stand types. Male northern flying squirrels had larger (P ≤ 0.03) mean home ranges (5.9 ± 0.8 ha; mean ± SE; n = 20) than females (3.9 ± 0.4 ha; n = 19). Northern flying squirrel movement distances between successive, noncorrelated telemetry locations averaged 71 m (n = 1,090). No correlation was found between distances moved and stand type or sex. Northern flying squirrels' home range sizes, movements, and densities were similar between the 2 stand types. We suggest abundance and movements of northern flying squirrels are not influencing the preferential selection of old-growth forests by northern spotted owls.
Structural elucidation and magnetic behavior evaluation of Cu-Cr doped BaCo-X hexagonal ferrites
NASA Astrophysics Data System (ADS)
Azhar Khan, Muhammad; Hussain, Farhat; Rashid, Muhammad; Mahmood, Asif; Ramay, Shahid M.; Majeed, Abdul
2018-04-01
Ba2-xCuxCo2CryFe28-yO46 (x = 0.0, 0.1, 0.2, 0.3, 0.4; y = 0.0, 0.2, 0.4, 0.6, 0.8) X-type hexagonal ferrites were synthesized via the micro-emulsion route. The prepared samples were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), dielectric measurements and vibrating sample magnetometry (VSM). The structural parameters, i.e. lattice constants (a, c), cell volume (V), X-ray density, bulk density and crystallite size of all the prepared samples were obtained from XRD analysis. The lattice parameters 'a' and 'c' increase from 5.875 Å to 5.934 Å and 83.367 Å to 83.990 Å, respectively. The crystallite size of the investigated samples lies in the range of 28-32 nm. The magnetic properties of all samples were determined by vibrating sample magnetometer (VSM) analysis. An increase in coercivity (Hc) was observed with increasing doping contents. It was observed that the coercivity (Hc) of all prepared samples is inversely related to the crystallite size, which reflects that all materials are super-paramagnetic. The dielectric parameters, i.e. dielectric constant, dielectric loss, tangent loss etc., were obtained in the frequency range of 1 MHz-3 GHz and followed the Maxwell-Wagner model. Significant variation in the dielectric parameters is observed with increasing frequency. The maximum Q value is obtained at ∼2 GHz, which makes these materials suitable for high-frequency multilayer chip inductors.
Classifier performance prediction for computer-aided diagnosis using a limited dataset.
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-04-01
In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. The Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). 
Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under this type of conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
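The 0.632 bootstrap weighting discussed in this abstract can be sketched as follows. This is an illustrative sketch only: it estimates classification error with a toy one-dimensional nearest-mean rule rather than AUC with Fisher's LDA as in the study, and all names are ours:

```python
import random
import statistics

def nearest_mean_classifier(train):
    """Fit a one-dimensional nearest-class-mean rule on (x, label) pairs;
    returns a predict(x) callable."""
    means = {label: statistics.fmean([x for x, y in train if y == label])
             for label in (0, 1)}
    return lambda x: min(means, key=lambda lab: abs(x - means[lab]))

def error_rate(predict, data):
    return statistics.fmean(1.0 if predict(x) != y else 0.0 for x, y in data)

def bootstrap_632(data, n_boot=100, seed=7):
    """0.632 bootstrap estimate of classification error:
        err_632 = 0.368 * err_train + 0.632 * err_out,
    where err_out averages the error on cases left out of each bootstrap
    resample (drawn with replacement, same size as the original sample)."""
    rng = random.Random(seed)
    err_train = error_rate(nearest_mean_classifier(data), data)
    out_errors = []
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        held_out = [d for d in data if d not in boot]
        if held_out:  # a resample can, rarely, contain every case
            out_errors.append(error_rate(nearest_mean_classifier(boot), held_out))
    return 0.368 * err_train + 0.632 * statistics.fmean(out_errors)
```

The weighting blends the optimistic resubstitution error with the pessimistic out-of-bootstrap error; the 0.632+ variant adjusts the weights further when the classifier overfits strongly, which is why it showed the lowest bias in the study's simulations.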
Stankiewicz, B.A.; Kruge, M.A.; Crelling, J.C.; Salmon, G.L.
1994-01-01
Samples of organic matter from nine well-known geological units (Green River Fm., Tasmanian Tasmanite, Lower Toarcian Sh. of the Paris Basin, Duwi Fm., New Albany Sh., Monterey Fm., Herrin No. 6 coal, Eocene coal, and Miocene lignite from Kalimantan) were processed by density gradient centrifugation (DGC) to isolate the constituent macerals. Optimal separation, as well as the liberation of microcrystalline pyrite from the organic matter, was obtained by particle size minimization prior to DGC by treatment with liquid N2 and micronization in a fluid energy mill. The resulting small particle size limits the use of optical microscopy, thus microfluorimetry and analytical pyrolysis were also employed to assess the quality and purity of the fractions. Each of the samples exhibits one dominant DGC peak (corresponding to alginite in the Green River Fm., amorphinite in the Lower Toarcian Sh., vitrinite in the Herrin No. 6, etc.) which shifts from 1.05 g mL⁻¹ for the Type I kerogens to between 1.18 and 1.23 g mL⁻¹ for Type II and II-S. The characteristic densities for Type III organic matter are greater still, being 1.27 g mL⁻¹ for the hydrogen-rich Eocene coal, 1.29 g mL⁻¹ for the Carboniferous coal and 1.43 g mL⁻¹ for the oxygen-rich Miocene lignite. Among Type II kerogens, the DGC profile represents a compositional continuum from undegraded alginite through (bacterially) degraded amorphinite; therefore chemical and optical properties change gradually with increasing density. The separation of useful quantities of macerals that occur in only minor amounts is difficult. Such separations require large amounts of starting material and multiple processing steps. Complete maceral separation for some samples using present methods seems remote. 
Samples containing macerals with significant density differences due to heteroatom diversity (e.g., preferential sulfur or oxygen concentration in one maceral), on the other hand, may be successfully separated (e.g., coals and Monterey kerogen). © 1994 American Chemical Society.
NASA Astrophysics Data System (ADS)
Hegazy, Ahmad K.; Kabiel, Hanan F.
2007-05-01
Anastatica hierochuntica L. (Brassicaceae) is a desert monocarpic annual species characterized by a topochory/ombrohydrochory type of seed dispersal. The hygrochastic nature of the dry skeletons (dead individuals) permits controlling seed dispersal by rain events. The amount of dispersed seeds is proportional to the intensity of rainfall. When light showers occur, seeds are released and remain in the site. Seeds dispersed in the vicinity of the mother or source plant (primary type of seed dispersal) resulted in clumped pattern and complicated interrelationships among size-classes of the population. Following heavy rainfall, most seeds are released and transported into small patches and shallow depressions which collect runoff water. The dead A. hierochuntica skeletons demonstrate site-dependent size-class structure, spatial pattern and spatial interrelationships in different microhabitats. Four microhabitat types have been sampled: runnels, patches and simple and compound depressions in two sites (gravel and sand). Ripley's K-function was used to analyze the spatial pattern in populations of A. hierochuntica skeletons in the study microhabitats. Clumped patterns were observed in nearly all of the study microhabitats. Populations of A. hierochuntica in the sand site were more productive than in the gravel site and usually had more individuals in the larger size-classes. In the compound-depression microhabitat, the degree of clumping decreased from the core zone to the intermediate zone then shifted into overdispersed pattern in the outer zone. At the within size-class level, the clumped pattern dominated in small size classes but shifted into random and overdispersed patterns in the larger size classes. Aggregation between small and large size-classes was not well-defined but large individuals were found closer to the smaller individuals than to those of their own class. 
In relation to the phytomass and the size-class structure, the outer zone of the simple depression and the outer and intermediate zones of the compound depression microhabitats were the most productive sites.
Identifying deformation mechanisms in the NEEM ice core using EBSD measurements
NASA Astrophysics Data System (ADS)
Kuiper, Ernst-Jan; Weikusat, Ilka; Drury, Martyn R.; Pennock, Gill M.; de Winter, Matthijs D. A.
2015-04-01
Deformation of ice in continental-sized ice sheets determines the flow behavior of ice towards the sea. Basal dislocation glide is assumed to be the dominant deformation mechanism in the creep deformation of natural ice, but non-basal glide is active as well. Knowledge of which types of deformation mechanisms are active in polar ice is critical in predicting the response of ice sheets in future warmer climates and their contribution to sea level rise, because the activity of deformation mechanisms depends critically on deformation conditions (such as temperature) as well as on material properties (such as grain size). One of the methods to study the deformation mechanisms in natural materials is Electron Backscattered Diffraction (EBSD). We obtained ca. 50 EBSD maps at five different depths from a Greenlandic ice core (NEEM). The step size varied between 8 and 25 microns depending on the size of the deformation features. The size of the maps varied from 2000 to 10000 grid points. Indexing rates were up to 95%, achieved in part by saving and reanalyzing the EBSP patterns. With this method we can characterize subgrain boundaries and determine the lattice rotation configuration of each individual subgrain. Combining these observations with the arrangement/geometry of the subgrain boundaries, the types of dislocations that form these boundaries can be determined. Three main types of subgrain boundaries have been recognized in the Antarctic (EDML) ice core¹,². Here, we present the first results obtained from EBSD measurements performed on NEEM ice core samples from the last glacial period, focusing on the relevance of dislocation activity on the possible slip systems. Preliminary results show that all three subgrain types recognized in the EDML core occur in the NEEM samples. In addition to the classical boundaries made up of basal dislocations, subgrain boundaries made of non-basal dislocations are also common.
¹Weikusat, I.; de Winter, D. A. M.; Pennock, G. M.; Hayles, M.; Schneijdenberg, C. T. W. M.; Drury, M. R. Cryogenic EBSD on ice: preserving a stable surface in a low pressure SEM. J. Microsc., 2010, doi: 10.1111/j.1365-2818.2010.03471.x
²Weikusat, I.; Miyamoto, A.; Faria, S. H.; Kipfstuhl, S.; Azuma, N.; Hondoh, T. Subgrain boundaries in Antarctic ice quantified by X-ray Laue diffraction. J. Glaciol., 2011, 57, 85-94
Suhaili, Zarizal; Lean, Soo-Sum; Mohamad, Noor Muzamil; Rachman, Abdul R Abdul; Desa, Mohd Nasir Mohd; Yeo, Chew Chieng
2016-09-01
Most of the efforts in elucidating the molecular relatedness and epidemiology of Staphylococcus aureus in Malaysia have been largely focused on methicillin-resistant S. aureus (MRSA). Here, therefore, we report the draft genome sequence of a methicillin-susceptible S. aureus (MSSA) strain of sequence type 1 (ST1), spa type t127, carrying the Panton-Valentine leukocidin (pvl) pathogenic determinant, isolated from a pus sample and designated strain KT/314250. The draft genome is 2.86 Mbp in size with a G + C content of 32.7% and comprises 2673 coding sequences. The draft genome sequence has been deposited in DDBJ/EMBL/GenBank under the accession number AOCP00000000.
Manies, Kristen L.; Harden, Jennifer W.; Silva, Steven R.; Briggs, Paul H.; Schmid, Brian M.
2004-01-01
The U.S. Geological Survey project Fate of Carbon in Alaskan Landscapes (FOCAL) is studying the effect of fire and soil drainage on soil carbon storage in the boreal forest. The project selected several study sites within central Alaska of varying age (time since fire) and soil drainage type. This report describes the location of these sampling sites, as well as the procedures used to describe, sample, and analyze the soils. It also contains data tables with this information, including, but not limited to, field descriptions, bulk density, particle size distribution, moisture content, carbon (C) concentration, nitrogen (N) concentration, isotopic data for C, and major, minor and trace elemental concentrations.
Nikfarjam, Ali; Shokoohi, Mostafa; Shahesmaeili, Armita; Haghdoost, Ali Akbar; Baneshi, Mohammad Reza; Haji-Maghsoudi, Saiedeh; Rastegari, Azam; Nasehi, Abbas Ali; Memaryan, Nadereh; Tarjoman, Termeh
2016-05-01
For a better understanding of the current situation of drug use in Iran, we used the network scale-up approach to estimate the prevalence of illicit drug use across the entire country. We administered a self-completed, street-based questionnaire to 7535 passersby from the general public over 18 years of age, recruited by street-based random-walk quota sampling (based on gender, age and socio-economic status) from 31 provinces of Iran. The sample size in each province was approximately 400, ranging from 200 to 1000. In each province 75% of the sample was recruited from the capital and the remaining 25% from one of the large cities of that province through stratified sampling. The questionnaire comprised questions on demographic information, a measure of the total network size of each participant, and the network size in each of seven drug use groups: opium, shire (a combination of opium residue and pure opium), crystal methamphetamine, heroin/crack (which in the Iranian context is a cocaine-free drug that mostly contains heroin, codeine, morphine and caffeine, with or without other drugs), hashish, methamphetamine/LSD/ecstasy, and injecting drugs. The estimated size of each group was adjusted for transmission and barrier ratios. The most common illicit drug used was opium, with a prevalence of 1500 per 100,000 population, followed by shire (660), crystal methamphetamine (590), hashish (470), heroin/crack (350), methamphetamine/LSD/ecstasy (300) and injecting drugs (280). All types of substances were more common among men than women. The use of opium, shire and injecting drugs was more common in individuals over 30, whereas the use of stimulants and hashish was highest among individuals between 18 and 30 years of age. It seems that younger individuals and women are more inclined to use new synthetic drugs such as crystal methamphetamine.
Extending preventive programs, especially for youth, as well as scaling up harm reduction services, should be the main priorities in the prevention and control of substance use in Iran. Because of poor service coverage and high stigma among women, more targeted programs for this affected population are needed. Copyright © 2016 Elsevier B.V. All rights reserved.
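The scale-up arithmetic behind the prevalence figures above is simple to reproduce. A minimal sketch of the basic network scale-up estimator; the function name, example numbers, and the simple form of the transmission/barrier adjustment are illustrative, not taken from the paper:

```python
def nsum_prevalence(ties_to_group, network_sizes, transmission=1.0, barrier=1.0):
    """Basic network scale-up estimate of a hidden population's prevalence.

    ties_to_group[i]: people respondent i knows in the hidden group.
    network_sizes[i]: respondent i's total personal network size.
    """
    raw = sum(ties_to_group) / sum(network_sizes)
    # divide by transmission and barrier ratios to correct for respondents
    # unaware of alters' group membership or socially distant from the group
    return raw / (transmission * barrier)

# illustrative numbers only: 450 reported drug-using contacts spread over
# personal networks totalling 30,000 alters -> 1500 per 100,000 population
rate_per_100k = nsum_prevalence([200, 150, 100], [12000, 10000, 8000]) * 100000
```

Adjustment ratios below 1 inflate the raw estimate, which is why the published group sizes are described as adjusted rather than raw proportions.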
NASA Astrophysics Data System (ADS)
Ozgen, Senem; Becagli, Silvia; Bernardoni, Vera; Caserini, Stefano; Caruso, Donatella; Corbella, Lorenza; Dell'Acqua, Manuela; Fermo, Paola; Gonzalez, Raquel; Lonati, Giovanni; Signorini, Stefano; Tardivo, Ruggero; Tosi, Elisa; Valli, Gianluigi; Vecchi, Roberta; Marinovich, Marina
2017-02-01
Two common types of wood (beech and fir) were burned in commercial pellet (11.1 kW) and wood (8.2 kW) stoves following a combustion cycle simulating the behavior of a real-world user. Ultrafine particulate matter (UFP, dp < 100 nm) was sampled with three parallel multistage impactors and analyzed for metals, main water-soluble ions, anhydrosugars, total carbon, and PAH content. Number concentration and size distribution were also measured with a fourth multistage impactor. UFP mass emission factors averaged 424 mg/kg fuel for all the tested stove and wood type (fir, beech) combinations except beech log burning in the wood stove (838 mg/kg fuel). Compositional differences were observed between pellet and wood UFP samples: high TC levels characterize wood log combustion, while potassium salts are dominant in every pellet sample. Crucial aspects determining the UFP composition in the wood stove experiments are critical situations in terms of available oxygen (a lack or an excess of combustion air) and high temperatures, whereas for the automatically controlled pellet stove, local situations (e.g., hindered air-fuel mixing due to heaps of pellets on the burner pot) determine the emission levels and composition. Wood samples contain more potentially carcinogenic PAHs than pellet samples. Some diagnostic ratios related to PAH isomers and anhydrosugars, compiled from the experimental UFP data in the present study and compared to literature values proposed for emission source discrimination for atmospheric aerosol, extend to UFP an evaluation usually limited to larger particle size fractions.
NASA Astrophysics Data System (ADS)
Madupalli, Honey; Pavan, Barbara; Tecklenburg, Mary M. J.
2017-11-01
The mineral component of bone and other biological calcifications is primarily a carbonate substituted calcium apatite. Integration of carbonate into two sites, substitution for phosphate (B-type carbonate) and substitution for hydroxide (A-type carbonate), influences the crystal properties which relate to the functional properties of bone. In the present work, a series of AB-type carbonated apatites (AB-CAp) having varying A-type and B-type carbonate weight fractions were prepared and analyzed by Fourier transform infrared spectroscopy (FTIR), powder X-ray diffraction (XRD), and carbonate analysis. A detailed characterization of A-site and B-site carbonate assignment in the FTIR ν3 region is proposed. The mass fractions of carbonate in A-site and B-site of AB-CAp correlate differently with crystal axis length and crystallite domain size. In this series of samples reduction in crystal domain size correlates only with A-type carbonate which indicates that carbonate in the A-site is more disruptive to the apatite structure than carbonate in the B-site. High temperature methods were required to produce significant A-type carbonation of apatite, indicating a higher energy barrier for the formation of A-type carbonate than for B-type carbonate. This is consistent with the dominance of B-type carbonate substitution in low temperature synthetic and biological apatites.
The effect of Nb additions on the thermal stability of melt-spun Nd2Fe14B
NASA Astrophysics Data System (ADS)
Lewis, L. H.; Gallagher, K.; Panchanathan, V.
1999-04-01
Elevated-temperature superconducting quantum interference device (SQUID) magnetometry was performed on two samples of melt-spun and optimally annealed Nd2Fe14B; one sample contained 2.3 wt % Nb and one was Nb-free. Continuous full hysteresis loops were measured with a SQUID magnetometer at T = 630 K, above the Curie temperature of the 2-14-1 phase, as a function of field (-1 T ≤ H ≤ 1 T) and time on powdered samples sealed in quartz tubes at a vacuum of 10⁻⁶ Torr. The measured hysteresis signals were deconstructed into a high-field linear paramagnetic portion and a low-field ferromagnetic signal of unclear origin. While the saturation magnetization of the ferromagnetic signal from both samples grows with time, the signal from the Nb-containing sample is always smaller. The coercivity data are consistent with a constant impurity particle size in the Nb-containing sample and an increasing impurity particle size in the Nb-free sample. The paramagnetic susceptibility signal from the Nd2Fe14B-type phase in the Nb-free sample increases with time, while that from the Nb-containing sample remains constant. It is suggested that the presence of Nb actively suppresses the thermally induced formation of the poorly crystallized Fe-rich regions that apparently exist in samples of both compositions.
Estimated abundance of wild burros surveyed on Bureau of Land Management Lands in 2014
Griffin, Paul C.
2015-01-01
The Bureau of Land Management (BLM) requires accurate estimates of the numbers of wild horses (Equus ferus caballus) and burros (Equus asinus) living on the lands it manages. For over ten years, BLM in Arizona has used the simultaneous double-observer method of recording wild burros during aerial surveys and has reported population estimates for those surveys derived from two formulations of a Lincoln-Petersen type of analysis (Graham and Bell, 1989). In this report, I provide those same two types of burro population analysis for 2014 aerial survey data from six herd management areas (HMAs) in Arizona, California, Nevada, and Utah. I also provide burro population estimates based on a different form of simultaneous double-observer analysis, now in widespread use for wild horse surveys, that takes into account the potential effects on detection probability of sighting covariates including group size, distance, vegetative cover, and other factors (Huggins, 1989, 1991). The true number of burros present in the six areas surveyed was not known, so population estimates made with these three types of analyses cannot be directly tested for accuracy in this report. I discuss theoretical reasons why the Huggins (1989, 1991) type of analysis should provide less biased estimates of population size than the Lincoln-Petersen analyses and why estimates from all forms of double-observer analysis are likely to be lower than the true number of animals present in the surveyed areas. I note reasons why I suggest using burro observations made at all available distances in analyses, not only those within 200 meters of the flight path. For all analytical methods, small sample sizes of observed groups can be problematic, but that sample size can be increased over time for Huggins (1989, 1991) analyses by pooling observations.
I note ways by which burro population estimates could be tested for accuracy when there are radio-collared animals in the population or when there are simultaneous double-observer surveys before and after a burro gather and removal.
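The two-observer Lincoln-Petersen idea referred to above can be sketched in a few lines. This is only the classical two-sample form (Graham and Bell, 1989, give two formulations, and the Huggins approach is more involved); the function name and numbers are hypothetical:

```python
def lincoln_petersen_groups(seen_front, seen_rear, seen_both):
    """Two-observer Lincoln-Petersen estimate of the number of groups present.

    seen_front / seen_rear: groups recorded by each observer (shared
    sightings included); seen_both: groups recorded by both observers.
    """
    if seen_both == 0:
        raise ValueError("no shared sightings: estimator is undefined")
    # N_hat = S1 * S2 / B: the overlap B estimates each observer's
    # detection probability, scaling the counts up to a population estimate
    return seen_front * seen_rear / seen_both

# hypothetical survey: 80 and 70 groups seen, 56 by both -> estimate of 100
n_hat = lincoln_petersen_groups(80, 70, 56)
```

Groups missed by both observers contribute nothing to any of the three counts, which is why the report notes that all double-observer estimates tend to fall below the true number present.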
Phuong, Nam Ngoc; Zalouk-Vergnoux, Aurore; Poirier, Laurence; Kamari, Abderrahmane; Châtel, Amélie; Mouneyrac, Catherine; Lagarde, Fabienne
2016-04-01
The ubiquitous presence and persistence of microplastics (MPs) in aquatic environments are of particular concern since they represent an increasing threat to marine organisms and ecosystems. Great differences in concentrations and/or quantities in field samples have been observed depending on geographical location around the world. The main types reported have been polyethylene, polypropylene, and polystyrene. The presence of MPs in marine wildlife has been shown in many studies focusing on ingestion and accumulation in different tissues, whereas studies of the biological effects of MPs in the field are scarce. The nature and abundance/concentrations of MPs have not been systematically determined in field samples because the identification of MPs from environmental samples requires mastery and execution of several steps and techniques. For this reason, and because of differences in sampling techniques and sample preparation, it remains difficult to compare the published studies. Most laboratory experiments have been performed with MP concentrations of a higher order of magnitude than those found in the field. Consequently, the ingestion and associated effects observed in exposed organisms have corresponded to great contaminant stress, which does not mimic the natural environment. Experimental media are contaminated with only one type of polymer of a precise size and homogeneous shape, whereas the MPs present in the field are known to be a mix of many types, sizes and shapes of plastic. Moreover, MPs in marine environments can be colonized by organisms and constitute a sorption support for many organic compounds present in the environment, conditions that are not easily reproduced in the laboratory. Determination of the mechanical and chemical effects of MPs on organisms is still a challenging area of research.
Among the potential chemical effects it is necessary to differentiate those related to polymer properties from those due to the sorption/desorption of organic compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kokoris, M; Nabavi, M; Lancaster, C; Clemmens, J; Maloney, P; Capadanno, J; Gerdes, J; Battrell, C F
2005-09-01
One current challenge facing point-of-care cancer detection is that existing methods make it difficult, time-consuming and too costly to (1) collect relevant cell types directly from a patient sample, such as blood, and (2) rapidly assay those cell types to determine the presence or absence of a particular type of cancer. We present a proof-of-principle method for an integrated, sample-to-result, point-of-care detection device that employs microfluidics technology, accepted assays, and a silica membrane for total RNA purification on a disposable, credit-card-sized laboratory-on-card ("lab card") device in which results are obtained in minutes. Both the yield and quality of on-card purified total RNA, as determined by both LightCycler and standard reverse transcriptase amplification of G6PDH and BCR-ABL transcripts, were found to be better than or equal to those of accepted standard purification methods.
Gülşahin, Nurçin
2016-01-01
Nematocyst types of Cassiopea andromeda were investigated. Medusae samples were taken from Güllük Bay, Muğla, Turkey. Nematocyst samples from the oral arms of C. andromeda were observed under a light microscope and photographed. Birhopaloid and a-isorhiza nematocyst types were found in C. andromeda. Moreover, nematocyst size increased with increasing bell diameter of the individuals. The venom of the species was also isolated and injected intramuscularly into Cyprinus carpio juveniles. Signs of partial paralysis, raking, and immobilized fins were subsequently observed in the juveniles. Death was observed in fish weighing 3-4 g. This study is a preliminary work on the nematocysts and venom of C. andromeda. Further studies on the neurotoxic effects of the nematocyst venoms of this species should follow.
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
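The rank-based inverse normal ("rankit") transformation that performed best in the study above can be sketched with only the standard library; tie handling is omitted here, and real data would need tie-aware ranks before the mapping:

```python
from statistics import NormalDist

def rankit_scores(xs):
    """Rank-based inverse normal ("rankit") transformation.

    Each value's rank r (1..n) is mapped to the standard normal
    quantile of (r - 0.5) / n, forcing an approximately normal shape.
    """
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]
```

Pearson's r would then be computed on the transformed scores of each variable, which is the procedure the simulations found to minimize Type I and Type II error rates for n ≥ 20.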
Characteristics of coking coal burnout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, M.; Bailey, J.G.
An attempt was made to clarify the characteristics of coking coal burnout by the morphological analysis of char and fly ash samples. Laboratory-scale combustion testing, simulating an ignition process, was carried out for three kinds of coal (two coking coals and one non-coking coal for reference), and sampled chars were analyzed for size, shape and type by image analysis. The full combustion process was examined in industrial-scale combustion testing for the same kinds of coal. Char sampled at the burner outlet and fly ash at the furnace exit were also analyzed. The differences in char type, swelling properties, agglomeration, anisotropy and carbon burnout were compared at laboratory scale and at industrial scale. As a result, it was found that coking coals produced chars with relatively thicker walls, which mainly impeded char burnout, especially for low volatile coals.
Modeling of debris disks in Single and Binary stars
NASA Astrophysics Data System (ADS)
García, L.; Gómez, M.
2016-10-01
Infrared space observatories such as Spitzer and Herschel have allowed the detection of likely analogs to the Kuiper Belt in single as well as binary systems. The aim of this work is to characterize debris disks in single and binary stars and to identify features shared by the disks in both types of systems, as well as possible differences. We compiled a sample of 25 single and 14 binary stars (ages > 100 Myr) with flux measurements at λ >100 μm and evidence of infrared excesses attributed to the presence of debris disks. Then, we constructed and modeled the observed spectral energy distributions (SEDs), and compared the parameters of the disks of both samples. Both types of disks are relatively free of dust in the inner region (< 3-5 AU) and extend beyond 100 AU. No significant differences in the mass and dust size distributions of both samples are found.
Magnetic and dielectric properties of lunar samples
NASA Technical Reports Server (NTRS)
Strangway, D. W.; Pearce, G. W.; Olhoeft, G. R.
1977-01-01
Dielectric properties of lunar soil and rock samples showed a systematic character when careful precautions were taken to ensure there was no moisture present during measurement. The dielectric constant (K) above 100,000 Hz was directly dependent on density according to the formula K = (1.93 ± 0.17)^ρ, where ρ is the density in g/cc. The dielectric loss tangent was only slightly dependent on density and had values less than 0.005 for typical soils and 0.005 to 0.03 for typical rocks. The loss tangent appeared to be directly related to the metallic ilmenite content. It was shown that magnetic properties of lunar samples can be used to study the distribution of metallic and ferrous iron, which shows systematic variations from soil type to soil type. Other magnetic characteristics can be used to determine the distribution of grain sizes.
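The density dependence reported above is straightforward to evaluate. A small sketch using only the central value 1.93 (the ±0.17 uncertainty on the base is ignored, and the function name is illustrative):

```python
def lunar_dielectric_constant(density_g_cc, base=1.93):
    # K = (1.93 +/- 0.17) ** rho, with rho the bulk density in g/cc;
    # only the central value of the base is used here
    return base ** density_g_cc

# a 1.5 g/cc soil gives K of roughly 2.7
k_soil = lunar_dielectric_constant(1.5)
```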
NASA Astrophysics Data System (ADS)
Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter
2017-05-01
The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four taxonomic levels (order, suborder, great group and subgroup), (ii) validate the predicted soil maps with the same validation data set to determine the best method for producing the soil maps, and (iii) select the best taxonomic level for the different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of the soil maps. Multinomial logistic regression performed better at the higher taxonomic levels (order and suborder), whereas random forest performed better at the lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based).
Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
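Map purity (overall accuracy) and the Kappa index used for validation above can both be computed from a confusion matrix of validation points. A small sketch with hypothetical counts (the paper's own validation data are not reproduced here):

```python
def purity_and_kappa(confusion):
    """Map purity (overall accuracy) and Cohen's kappa from a confusion
    matrix, where confusion[i][j] counts validation points of true class i
    that the map assigned to class j."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / total
    # chance agreement expected from the row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(confusion[j][i] for j in range(k))
        for i in range(k)
    ) / total ** 2
    return observed, (observed - expected) / (1 - expected)

# hypothetical 2-class validation set: purity 0.8, kappa 0.6
purity, kappa = purity_and_kappa([[40, 10], [10, 40]])
```

Kappa discounts the agreement expected by chance, which is why it falls below purity whenever the class marginals are informative.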
Distribution and Phase Association of Some Major and Trace Elements in the Arabian Gulf Sediments
NASA Astrophysics Data System (ADS)
Basaham, A. S.; El-Sayed, M. A.
1998-02-01
Twenty-four sediment samples were collected from the Arabian Gulf (ROPME Sea) and analysed for their grain size distribution and carbonate contents as well as for the major elements Ca, Mg, Fe and Al and the macro and trace elements Mn, Sr, Ba, Zn, Cu, Cr, V, Ni and Hg. Concentrations of trace elements were found to be comparable to previously published data for samples taken before and after the Gulf War, and reflect the natural background level. Grain size analyses and aluminium and carbonate measurements support the presence of two major sediment types: (1) a terrigenous, fine-grained, Al-rich type predominating along the Iranian side; and (2) a coarse-grained, carbonate-rich type predominating along the Arabian side of the Gulf. Investigation of the correlation of the analysed elements with sediment type indicates that they can be grouped into two distinct associations: (1) a carbonate association including Ca and Sr; and (2) a terrigenous association comprising Al, Fe, Mg, Ba, Mn, Zn, Cu, Cr, V, Ni and Hg. Element/Al ratios calculated for the non-carbonate mud fraction indicate that the Euphrates and Tigris rivers have minor importance as sediment sources to the Gulf. Most of the elements have exceptionally high aluminium ratios in sediments containing more than 85-90% carbonate. These sediments are restricted to the southern and south-eastern part of the area, where the water is shallow and temperature and salinity are high. Both biological accumulation and chemical and biochemical coprecipitation could be responsible for this anomaly.
Morphology and Structure of High-redshift Massive Galaxies in the CANDELS Fields
NASA Astrophysics Data System (ADS)
Guan-wen, Fang; Ze-sen, Lin; Xu, Kong
2018-01-01
Using the multi-band photometric data of all five CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) fields and the near-infrared (F125W and F160W) high-resolution images of HST WFC3 (Hubble Space Telescope Wide Field Camera 3), a quantitative study of the morphology and structure of mass-selected galaxies is presented. The sample includes 8002 galaxies with redshift 1 < z < 3 and stellar mass M* > 10¹⁰ M⊙. Based on Convolutional Neural Network (ConvNet) criteria, we classify the sample galaxies into SPHeroids (SPH), Early-Type Disks (ETD), Late-Type Disks (LTD), and IRRegulars (IRR) in different redshift bins. The findings indicate that galaxy morphology and structure evolve with redshift up to z ∼ 3, from irregular galaxies in the high-redshift universe to the formation of the Hubble sequence dominated by disks and spheroids. For the same redshift interval, the median effective radii (re) of the different morphological types are in descending order IRR, LTD, ETD, and SPH, while for the Sérsic index (n) the order is reversed (SPH, ETD, LTD, and IRR). The evolution of galaxy size (re) with redshift is also explored for galaxies of the different morphological types, confirming that their sizes grow with time. However, no such trend with redshift (1 < z < 3) is found for the mean axis ratio (b/a) or the Sérsic index (n).
Ahmed, Md Atique; Fong, Mun Yik; Lau, Yee Ling; Yusof, Ruhani
2016-04-26
The zoonotic malaria parasite Plasmodium knowlesi has become an emerging threat to South East Asian countries, particularly Malaysia. A recent study from Sarawak (Malaysian Borneo) discovered two distinct normocyte binding protein xa (Pknbpxa) types of P. knowlesi. In the present study, the Pknbpxa of clinical isolates from Peninsular Malaysia and Sabah (Malaysian Borneo) was investigated for the presence of Pknbpxa types and for the natural selection force acting on the gene. Blood samples from 47 clinical cases, from Peninsular Malaysia (n = 35) and Sabah (Malaysian Borneo, n = 12), were used in the study. The Pknbpxa gene was successfully amplified and directly sequenced from 38 of the samples (n = 31, Peninsular Malaysia and n = 7, Sabah, Malaysian Borneo). The Pknbpxa sequences of P. knowlesi isolates from Sarawak (Malaysian Borneo) were retrieved from GenBank and included in the analysis. Polymorphism, genetic diversity and natural selection of the Pknbpxa sequences were analysed using DnaSP v5.10 and MEGA5. Phylogenetics of the Pknbpxa sequences was analysed using MrBayes v3.2 and SplitsTree v4.13.1. Pairwise F_ST indices, calculated using Arlequin 3.5.1.3, were used to determine the genetic differentiation between the Pknbpxa types. Analyses of the sequences revealed Pknbpxa dimorphism throughout Malaysia, indicating co-existence of the two types (Type 1 and Type 2) of Pknbpxa. More importantly, a third type (Type 3), closely related to Type 2 Pknbpxa, was also detected. This third type was found only in isolates originating from Peninsular Malaysia. Negative natural selection was observed, suggesting functional constraints within the Pknbpxa types. This study revealed the existence of three Pknbpxa types in Malaysia. Types 1 and 2 were found not only in Malaysian Borneo (Sarawak and Sabah) but also in Peninsular Malaysia. A third type, which was specific only to samples originating from Peninsular Malaysia, was discovered.
Further genetic studies with a larger sample size will be necessary to determine whether natural selection is driving this genetic differentiation and geographical separation.
Parallel Analysis with Unidimensional Binary Data
ERIC Educational Resources Information Center
Weng, Li-Jen; Cheng, Chung-Ping
2005-01-01
The present simulation investigated the performance of parallel analysis for unidimensional binary data. Single-factor models with 8 and 20 indicators were examined, and sample size (50, 100, 200, 500, and 1,000), factor loading (.45, .70, and .90), response ratio on two categories (50/50, 60/40, 70/30, 80/20, and 90/10), and types of correlation…
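Parallel analysis, as studied above, retains factors whose observed eigenvalues exceed those of random data of the same size. A sketch for binary items, assuming NumPy is available; simulating random binary data matched to each item's response ratio is one common choice, and tetrachoric-correlation variants are not shown:

```python
import numpy as np

def parallel_analysis_binary(data, n_sims=200, quantile=0.95, seed=0):
    """Count factors whose observed correlation-matrix eigenvalues exceed
    the chosen quantile of eigenvalues from random binary data of the same
    shape and per-item response ratios."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    probs = data.mean(axis=0)  # observed response ratio of each item
    sim_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = (rng.random((n, p)) < probs).astype(float)
        sim_eigs[s] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    thresholds = np.quantile(sim_eigs, quantile, axis=0)
    n_factors = 0
    for o, t in zip(obs, thresholds):
        if o <= t:
            break
        n_factors += 1
    return n_factors
```

For unidimensional data with reasonable loadings, only the first observed eigenvalue should clear its random-data threshold.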
USDA-ARS's Scientific Manuscript database
Colonies of different origins were sampled monthly to detect possible differential infection with Nosema ceranae, and colony sizes and queen status were monitored quarterly. One experiment used queens from colonies with high and low infections instrumentally inseminated with drones of the same type...
An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.
ERIC Educational Resources Information Center
Harwell, Michael
1997-01-01
The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)
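The Monte Carlo logic behind Type I error studies like the one above is generic: simulate data under a true null hypothesis, apply the test, and record the rejection fraction. A standard-library sketch using a simple two-sample test rather than the Raudenbush procedure itself; the large-sample z critical value stands in for the exact t quantile, so the empirical rate runs slightly above the nominal 5%:

```python
import random
from statistics import NormalDist, mean, stdev

def empirical_type1_rate(n_per_group=30, alpha=0.05, n_sims=2000, seed=42):
    """Monte Carlo estimate of a two-sample test's Type I error rate:
    both groups are drawn from the same normal distribution (null true),
    and the fraction of rejections at level alpha is returned."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # large-sample critical value
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        se = (stdev(a) ** 2 / n_per_group + stdev(b) ** 2 / n_per_group) ** 0.5
        t_stat = (mean(a) - mean(b)) / se
        rejections += abs(t_stat) > crit
    return rejections / n_sims
```

Replacing the normal draws with a platykurtic distribution while keeping the null true is how studies like the one above probe a test's sensitivity to non-normality.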
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-07
... Clearance for Survey Research Studies. Revision to burden hours may be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
... Clearance for Survey Research Studies. Revision to burden hours will be needed due to changes in the size of the target population, sampling design, and/or questionnaire length. DATES: Comments on this notice... Survey Research Studies. OMB Control Number: 0535-0248. Type of Request: To revise and extend a currently...
Czech Children's Drawing of Nature
ERIC Educational Resources Information Center
Yilmaz, Zuhal; Kubiatko, Milan; Topal, Hatice
2012-01-01
Do children around the world draw nature pictures in a certain way? A range of mountains in the background, a sun, a couple of clouds, a river rising from the mountains. Is this type of drawing universal in the way these nature items are organized on the drawing paper? The sample from the Czech Republic included 33 participants from two kindergartens. They were 5 and 6…
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model-dimensional or discrete-as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
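Random-effects pooling of the kind behind the reported d IG+ = 0.77 is commonly done with the DerSimonian-Laird estimator; the abstract does not state which estimator the authors used, so the following is a generic sketch rather than their procedure:

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect size via the DerSimonian-Laird
    estimator. `d` are per-study effect sizes, `v` their sampling
    variances. Returns (pooled effect, standard error, tau^2)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

With homogeneous studies tau^2 collapses to zero and the estimate reduces to the fixed-effect mean; heterogeneity inflates tau^2 and flattens the weights.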
NASA Technical Reports Server (NTRS)
Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Wentworth, Susan; McKay, Dave S.; Botha, Pieter; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul
2008-01-01
This slide presentation reviews the work to analyze lunar highland regolith samples from Apollo 16 core sample 64001/2 and simulants of lunar regolith, and to build a comparative database. The work is part of a larger effort to compile an internally consistent database on lunar regolith (Apollo samples) and lunar regolith simulants, in support of a future lunar outpost. The aim is to characterize existing lunar regolith and simulants in terms of particle type, particle size distribution, particle shape distribution, bulk density, and other compositional characteristics, and to evaluate the regolith simulants on the same properties in comparison with the Apollo lunar regolith samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, S.; Aldering, G.; Antilogus, P.
The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral timeseries of 185 supernovae. This talk presents results from a portion of this sample, including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset.
Effect of bismuth substitution in strontium hexaferrite
NASA Astrophysics Data System (ADS)
Sahoo, M. R.; Kuila, S.; Sweta, K.; Barik, A.; Vishwakarma, P. N.
2018-05-01
Bismuth (Bi) substituted M-type strontium hexaferrites (Sr1-xBixFe12O19, x=0 and 0.02) were synthesized by the sol-gel auto-combustion method. Powder X-ray diffraction (XRD) and field emission scanning electron microscopy (FESEM) show an increase in lattice parameter and in particle size (from 500 nm to 3 microns), respectively, for the Bi-substituted sample. M-H magnetization measurements show a decrease in magnetic hardness for the Bi-substituted samples. M-T data for the parent (x=0) sample show an antiferromagnetic transition in the ZFC plot at 495 °C. This antiferromagnetic transition is replaced by a ferromagnetic transition in the FCW measurement. Similar behavior is displayed by the Bi-substituted sample, with the transition temperature reduced to 455 °C.
Duration of surgical-orthodontic treatment.
Häll, Birgitta; Jämsä, Tapio; Soukka, Tero; Peltomäki, Timo
2008-10-01
To study the duration of surgical-orthodontic treatment with special reference to patients' age and the type of tooth movements, i.e. extraction vs. non-extraction and intrusion before or extrusion after surgery to level the curve of Spee. The material consisted of the files of 37 consecutive surgical-orthodontic patients. The files were reviewed, and gender, diagnosis, type of malocclusion, age at the initiation of treatment, duration of treatment, type of tooth movements (extraction vs. non-extraction and levelling of the curve of Spee before or after the operation) and type of operation were retrieved. For statistical analyses, the two-sample t-test, Kruskal-Wallis test and Spearman rank correlation test were used. Mean treatment duration of the sample was 26.8 months, of which pre-surgical orthodontics took on average 17.5 months. Patients with extractions as part of the treatment had statistically and clinically significantly longer treatment duration, on average 8 months longer, than those without extractions. No other studied variable appeared to affect the treatment time. The present small sample size prevents reliable conclusions from being drawn. However, the findings suggest, and patients should be informed, that extractions included in the treatment plan increase the likelihood of a longer surgical-orthodontic treatment.
Effect of temperature on the magnetic properties of nano-sized M-type barium hexagonal ferrites
NASA Astrophysics Data System (ADS)
Tchouank Tekou Carol, T.; Sharma, Jyoti; Mohammed, J.; Kumar, Sachin; Srivastava, A. K.
2017-07-01
The application of M-type hexagonal ferrites in electronic devices is increasing with technological advancement. This is due to the possibility of improving the physical and magnetic properties to suit the desired application. Enhanced magnetic properties make hexagonal ferrites suitable for high-frequency and radar-absorbing applications. In this paper, we investigated the effect of heat-treatment temperature on the structural and magnetic properties of M-type barium hexagonal ferrites with chemical composition Ba1-xAlxFe12-yMnyO19 (x=0.6 and y=0.3) synthesized by the sol-gel auto-combustion method and sintered at 750°C, 850°C, 950°C and 1050°C. The prepared samples were characterised using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR) and vibrating sample magnetometry (VSM). The formation of M-type hexaferrite was confirmed by XRD. The presence of two prominent peaks between 400 cm-1 and 600 cm-1 in the FT-IR spectra also indicates the formation of the ferrite phase. Saturation magnetisation (MS), remnant magnetisation (Mr), coercivity (Hc) and squareness ratio (SR) were calculated from the M-H loops obtained by VSM.
Evaluation of HPV DNA positivity in colorectal cancer patients in Kerman, Southeast Iran
Malekpour Afshar, Reza; Deldar, Zeinab; Mollaei, Hamid Reza; Arabzadeh, Seyed Alimohammad; Iranpour, Maryam
2018-01-27
Background: The HPV virus is known to be oncogenic, and associations with many cancers have been proven. Although many studies have been conducted on a possible relationship with colorectal cancer (CRC), a definitive role of the virus has yet to be identified. Method: In this cross-sectional study, the frequency of HPV positivity in CRC samples in Kerman was assessed in 84 cases with a mean age of 47.7 ± 12.5 years over two years. Qualitative real time PCR was performed using general primers for the L1 region of HPV DNA. Results: Out of 84 CRC samples, 19 (22.6%) proved positive for HPV DNA. Genotyping of positive samples showed all of these to be of high-risk HPV type. Prevalence of HPV infection appears to depend on geographic region, lifestyle, diet and other factors. Conclusion: In our region the frequency of CRC is low, and this limited the sample size for evaluation of HPV DNA. The most prevalent types were HPV types 51 and 56. While HPV infection may play an important role in colorectal carcinogenesis, this needs to be assessed in future studies.
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with the pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
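The pooled-resampling idea compared above can be sketched as follows. This is a hedged illustration of the general approach (both groups are resampled from the pooled data, which enforces the null hypothesis of equal means), not the authors' exact algorithm:

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10000, seed=0):
    """Two-sample bootstrap t-test with pooled resampling: resample
    both groups from the combined data (enforcing the null), and
    compare the bootstrap t statistics against the observed one.
    Returns a two-sided p-value."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def t_stat(a, b):  # Welch-type t statistic
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        if abs(t_stat(bx, by)) >= abs(t_obs):
            count += 1
    return (count + 1) / (n_boot + 1)  # add-one smoothing avoids p = 0
```

Because resampling is from the pooled data, the bootstrap distribution of the t statistic reflects the null of equal means even when the original groups differ.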
Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research
Bakker, Marjan; Wicherts, Jelte M.
2014-01-01
Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
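The degrees-of-freedom discrepancy described above (41% of articles) rests on a simple consistency rule: for an independent-samples t test the df should equal N − 2, and for a paired test N − 1, so reported df imply a total sample size that can be checked against the reported N. A small illustrative checker (not the authors' tool):

```python
def check_t_df(reported_df, reported_n, design="independent"):
    """Compare the sample size implied by a t test's reported df
    against the reported N. For an independent-samples design
    df = N - 2; for a paired design df = N - 1. Returns
    (implied N, whether it matches the reported N)."""
    implied_n = reported_df + (2 if design == "independent" else 1)
    return implied_n, implied_n == reported_n
```

For example, a two-group study reporting t(38) with N = 40 is consistent, while t(35) with N = 40 signals unreported exclusions or missing data.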
Structural differences in enamel and dentin in human, bovine, porcine, and ovine teeth.
Ortiz-Ruiz, Antonio José; Teruel-Fernández, Juan de Dios; Alcolea-Rubio, Luis Alberto; Hernández-Fernández, Ana; Martínez-Beneyto, Yolanda; Gispert-Guirado, Francesc
2018-07-01
The aim was to study differences between crystalline nanostructures from the enamel and dentin of human, bovine, porcine, and ovine species. Dentin and enamel fragments extracted from sound human, bovine, porcine and ovine incisors and molars were mechanically ground to a final particle size of <100μm. Samples were analyzed using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and differential scanning calorimetry (DSC). Human enamel (HE) and dentin (HD) showed a-axis and c-axis lengths of the carbonate apatite (CAP) crystal lattice nearest to synthetic hydroxyapatite (SHA), which had the smallest size. Enamel crystal sizes were always larger than those of dentin for all species. HE and HD had the largest crystals, followed by the bovine samples. Hydroxyapatites (HAs) in enamel had a higher crystallinity index (CI), both CI Rietveld and CI FTIR, than the corresponding dentin of the same species. HE and HD had the highest CIs, followed by ovine enamel (OE). The changes in heat capacity during the glass transition (ΔCp) nearest to the values in human teeth were in porcine specimens. There was a significant direct correlation between the size of the a-axis and substitution by both type A and type B carbonates. The size of the nanocrystals and the crystallinity (CI Rietveld and CI FTIR) were significantly and negatively correlated with the protein phase of all the substrates. There was a strongly positive correlation between the heat capacity, the CIs and the crystal size, and a strongly negative correlation between type A and B carbonates and proteins. There are differences in the organic and inorganic content of human, bovine, porcine and ovine enamels and dentins which should be taken into account when interpreting the results of studies using animal substrates as substitutes for human material. Copyright © 2018 Elsevier GmbH. All rights reserved.
Rodrigues, Renata Costa Val; Zandi, Homan; Kristoffersen, Anne Karin; Enersen, Morten; Mdala, Ibrahimu; Ørstavik, Dag; Rôças, Isabela N; Siqueira, José F
2017-07-01
This clinical study evaluated the influence of the apical preparation size using nickel-titanium rotary instrumentation and the effect of a disinfectant on bacterial reduction in root canal-treated teeth with apical periodontitis. Forty-three teeth with posttreatment apical periodontitis were selected for retreatment. Teeth were randomly divided into 2 groups according to the irrigant used (2.5% sodium hypochlorite [NaOCl], n = 22; saline, n = 21). Canals were prepared with the Twisted File Adaptive (TFA) system (SybronEndo, Orange, CA). Bacteriological samples were taken before preparation (S1), after using the first instrument (S2), and then after the third instrument of the TFA system (S3). In the saline group, an additional sample was taken after final irrigation with 1% NaOCl (S4). DNA was extracted from the clinical samples and subjected to quantitative real-time polymerase chain reaction to evaluate the levels of total bacteria and streptococci. S1 from all teeth were positive for bacteria. Preparation to the first and third instruments from the TFA system showed a highly significant intracanal bacterial reduction regardless of the irrigant (P < .01). Apical enlargement to the third instrument caused a significantly higher decrease in bacterial counts than the first instrument (P < .01). Intergroup comparison revealed no significant difference between NaOCl and saline after the first instrument (P > .05). NaOCl was significantly better than saline after using the largest instrument in the series (P < .01). Irrespective of the type of irrigant, an increase in the apical preparation size significantly enhanced root canal disinfection. The disinfecting benefit of NaOCl over saline was significant at large apical preparation sizes. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Comparison of hard tissues that are useful for DNA analysis in forensic autopsy.
Kaneko, Yu; Ohira, Hiroshi; Tsuda, Yukio; Yamada, Yoshihiro
2015-11-01
Forensic analysis of DNA from hard tissues can be important when investigating a variety of cases resulting from mass disaster or criminal cases. This study was conducted to evaluate the most suitable tissues, method and sample size for processing of hard tissues prior to DNA isolation. We also evaluated the elapsed time after death in relation to the quantity of DNA extracted. Samples of hard tissues (37 teeth, 42 skull, 42 rib, and 39 nails) from 42 individuals aged between 50 and 83 years were used. The samples were taken from remains following forensic autopsy (from 2 days to 2 years after death). To evaluate the integrity of the nuclear DNA isolated, the percentage of allele calls for short tandem repeat profiles were compared between the hard tissues. DNA typing results indicated that until 1 month after death, any of the four hard tissue samples could be used as an alternative to teeth, allowing analysis of all of the loci. However, in terms of the sampling site, collection method and sample size adjustment, the rib appeared to be the best choice in view of the ease of specimen preparation. Our data suggest that the rib could be an alternative hard tissue sample for DNA analysis of human remains. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Modulation of the age at onset in spinocerebellar ataxia by CAG tracts in various genes
Durr, Alexandra; Bauer, Peter; Figueroa, Karla P.; Ichikawa, Yaeko; Brussino, Alessandro; Forlani, Sylvie; Rakowicz, Maria; Schöls, Ludger; Mariotti, Caterina; van de Warrenburg, Bart P.C.; Orsi, Laura; Giunti, Paola; Filla, Alessandro; Szymanski, Sandra; Klockgether, Thomas; Berciano, José; Pandolfo, Massimo; Boesch, Sylvia; Melegh, Bela; Timmann, Dagmar; Mandich, Paola; Camuzat, Agnès; Goto, Jun; Ashizawa, Tetsuo; Cazeneuve, Cécile; Tsuji, Shoji; Pulst, Stefan-M.; Brusco, Alfredo; Riess, Olaf; Stevanin, Giovanni
2014-01-01
Polyglutamine-coding (CAG)n repeat expansions in seven different genes cause spinocerebellar ataxias. Although the size of the expansion is negatively correlated with age at onset, it accounts for only 50–70% of its variability. To find other factors involved in this variability, we performed a regression analysis in 1255 affected individuals with identified expansions (spinocerebellar ataxia types 1, 2, 3, 6 and 7), recruited through the European Consortium on Spinocerebellar Ataxias, to determine whether age at onset is influenced by the size of the normal allele in eight causal (CAG)n-containing genes (ATXN1–3, 6–7, 17, ATN1 and HTT). We confirmed the negative effect of the expanded allele and detected threshold effects reflected by a quadratic association between age at onset and CAG size in spinocerebellar ataxia types 1, 3 and 6. We also evidenced an interaction between the expanded and normal alleles in trans in individuals with spinocerebellar ataxia types 1, 6 and 7. Except for individuals with spinocerebellar ataxia type 1, age at onset was also influenced by other (CAG)n-containing genes: ATXN7 in spinocerebellar ataxia type 2; ATXN2, ATN1 and HTT in spinocerebellar ataxia type 3; ATXN1 and ATXN3 in spinocerebellar ataxia type 6; and ATXN3 and TBP in spinocerebellar ataxia type 7. This suggests that there are biological relationships among these genes. The results were partially replicated in four independent populations representing 460 Caucasians and 216 Asian samples; the differences are possibly explained by ethnic or geographical differences. As the variability in age at onset is not completely explained by the effects of the causative and modifier sister genes, other genetic or environmental factors must also play a role in these diseases. PMID:24972706
The genetic architecture of type 2 diabetes.
Fuchsberger, Christian; Flannick, Jason; Teslovich, Tanya M; Mahajan, Anubha; Agarwala, Vineeta; Gaulton, Kyle J; Ma, Clement; Fontanillas, Pierre; Moutsianas, Loukas; McCarthy, Davis J; Rivas, Manuel A; Perry, John R B; Sim, Xueling; Blackwell, Thomas W; Robertson, Neil R; Rayner, N William; Cingolani, Pablo; Locke, Adam E; Tajes, Juan Fernandez; Highland, Heather M; Dupuis, Josee; Chines, Peter S; Lindgren, Cecilia M; Hartl, Christopher; Jackson, Anne U; Chen, Han; Huyghe, Jeroen R; van de Bunt, Martijn; Pearson, Richard D; Kumar, Ashish; Müller-Nurasyid, Martina; Grarup, Niels; Stringham, Heather M; Gamazon, Eric R; Lee, Jaehoon; Chen, Yuhui; Scott, Robert A; Below, Jennifer E; Chen, Peng; Huang, Jinyan; Go, Min Jin; Stitzel, Michael L; Pasko, Dorota; Parker, Stephen C J; Varga, Tibor V; Green, Todd; Beer, Nicola L; Day-Williams, Aaron G; Ferreira, Teresa; Fingerlin, Tasha; Horikoshi, Momoko; Hu, Cheng; Huh, Iksoo; Ikram, Mohammad Kamran; Kim, Bong-Jo; Kim, Yongkang; Kim, Young Jin; Kwon, Min-Seok; Lee, Juyoung; Lee, Selyeong; Lin, Keng-Han; Maxwell, Taylor J; Nagai, Yoshihiko; Wang, Xu; Welch, Ryan P; Yoon, Joon; Zhang, Weihua; Barzilai, Nir; Voight, Benjamin F; Han, Bok-Ghee; Jenkinson, Christopher P; Kuulasmaa, Teemu; Kuusisto, Johanna; Manning, Alisa; Ng, Maggie C Y; Palmer, Nicholette D; Balkau, Beverley; Stančáková, Alena; Abboud, Hanna E; Boeing, Heiner; Giedraitis, Vilmantas; Prabhakaran, Dorairaj; Gottesman, Omri; Scott, James; Carey, Jason; Kwan, Phoenix; Grant, George; Smith, Joshua D; Neale, Benjamin M; Purcell, Shaun; Butterworth, Adam S; Howson, Joanna M M; Lee, Heung Man; Lu, Yingchang; Kwak, Soo-Heon; Zhao, Wei; Danesh, John; Lam, Vincent K L; Park, Kyong Soo; Saleheen, Danish; So, Wing Yee; Tam, Claudia H T; Afzal, Uzma; Aguilar, David; Arya, Rector; Aung, Tin; Chan, Edmund; Navarro, Carmen; Cheng, Ching-Yu; Palli, Domenico; Correa, Adolfo; Curran, Joanne E; Rybin, Denis; Farook, Vidya S; Fowler, Sharon P; Freedman, Barry I; Griswold, Michael; 
Hale, Daniel Esten; Hicks, Pamela J; Khor, Chiea-Chuen; Kumar, Satish; Lehne, Benjamin; Thuillier, Dorothée; Lim, Wei Yen; Liu, Jianjun; van der Schouw, Yvonne T; Loh, Marie; Musani, Solomon K; Puppala, Sobha; Scott, William R; Yengo, Loïc; Tan, Sian-Tsung; Taylor, Herman A; Thameem, Farook; Wilson, Gregory; Wong, Tien Yin; Njølstad, Pål Rasmus; Levy, Jonathan C; Mangino, Massimo; Bonnycastle, Lori L; Schwarzmayr, Thomas; Fadista, João; Surdulescu, Gabriela L; Herder, Christian; Groves, Christopher J; Wieland, Thomas; Bork-Jensen, Jette; Brandslund, Ivan; Christensen, Cramer; Koistinen, Heikki A; Doney, Alex S F; Kinnunen, Leena; Esko, Tõnu; Farmer, Andrew J; Hakaste, Liisa; Hodgkiss, Dylan; Kravic, Jasmina; Lyssenko, Valeriya; Hollensted, Mette; Jørgensen, Marit E; Jørgensen, Torben; Ladenvall, Claes; Justesen, Johanne Marie; Käräjämäki, Annemari; Kriebel, Jennifer; Rathmann, Wolfgang; Lannfelt, Lars; Lauritzen, Torsten; Narisu, Narisu; Linneberg, Allan; Melander, Olle; Milani, Lili; Neville, Matt; Orho-Melander, Marju; Qi, Lu; Qi, Qibin; Roden, Michael; Rolandsson, Olov; Swift, Amy; Rosengren, Anders H; Stirrups, Kathleen; Wood, Andrew R; Mihailov, Evelin; Blancher, Christine; Carneiro, Mauricio O; Maguire, Jared; Poplin, Ryan; Shakir, Khalid; Fennell, Timothy; DePristo, Mark; de Angelis, Martin Hrabé; Deloukas, Panos; Gjesing, Anette P; Jun, Goo; Nilsson, Peter; Murphy, Jacquelyn; Onofrio, Robert; Thorand, Barbara; Hansen, Torben; Meisinger, Christa; Hu, Frank B; Isomaa, Bo; Karpe, Fredrik; Liang, Liming; Peters, Annette; Huth, Cornelia; O'Rahilly, Stephen P; Palmer, Colin N A; Pedersen, Oluf; Rauramaa, Rainer; Tuomilehto, Jaakko; Salomaa, Veikko; Watanabe, Richard M; Syvänen, Ann-Christine; Bergman, Richard N; Bharadwaj, Dwaipayan; Bottinger, Erwin P; Cho, Yoon Shin; Chandak, Giriraj R; Chan, Juliana C N; Chia, Kee Seng; Daly, Mark J; Ebrahim, Shah B; Langenberg, Claudia; Elliott, Paul; Jablonski, Kathleen A; Lehman, Donna M; Jia, Weiping; Ma, Ronald C W; 
Pollin, Toni I; Sandhu, Manjinder; Tandon, Nikhil; Froguel, Philippe; Barroso, Inês; Teo, Yik Ying; Zeggini, Eleftheria; Loos, Ruth J F; Small, Kerrin S; Ried, Janina S; DeFronzo, Ralph A; Grallert, Harald; Glaser, Benjamin; Metspalu, Andres; Wareham, Nicholas J; Walker, Mark; Banks, Eric; Gieger, Christian; Ingelsson, Erik; Im, Hae Kyung; Illig, Thomas; Franks, Paul W; Buck, Gemma; Trakalo, Joseph; Buck, David; Prokopenko, Inga; Mägi, Reedik; Lind, Lars; Farjoun, Yossi; Owen, Katharine R; Gloyn, Anna L; Strauch, Konstantin; Tuomi, Tiinamaija; Kooner, Jaspal Singh; Lee, Jong-Young; Park, Taesung; Donnelly, Peter; Morris, Andrew D; Hattersley, Andrew T; Bowden, Donald W; Collins, Francis S; Atzmon, Gil; Chambers, John C; Spector, Timothy D; Laakso, Markku; Strom, Tim M; Bell, Graeme I; Blangero, John; Duggirala, Ravindranath; Tai, E Shyong; McVean, Gilean; Hanis, Craig L; Wilson, James G; Seielstad, Mark; Frayling, Timothy M; Meigs, James B; Cox, Nancy J; Sladek, Rob; Lander, Eric S; Gabriel, Stacey; Burtt, Noël P; Mohlke, Karen L; Meitinger, Thomas; Groop, Leif; Abecasis, Goncalo; Florez, Jose C; Scott, Laura J; Morris, Andrew P; Kang, Hyun Min; Boehnke, Michael; Altshuler, David; McCarthy, Mark I
2016-08-04
The genetic architecture of common traits, including the number, frequency, and effect sizes of inherited variants that contribute to individual risk, has been long debated. Genome-wide association studies have identified scores of common variants associated with type 2 diabetes, but in aggregate, these explain only a fraction of the heritability of this disease. Here, to test the hypothesis that lower-frequency variants explain much of the remainder, the GoT2D and T2D-GENES consortia performed whole-genome sequencing in 2,657 European individuals with and without diabetes, and exome sequencing in 12,940 individuals from five ancestry groups. To increase statistical power, we expanded the sample size via genotyping and imputation in a further 111,548 subjects. Variants associated with type 2 diabetes after sequencing were overwhelmingly common and most fell within regions previously identified by genome-wide association studies. Comprehensive enumeration of sequence variation is necessary to identify functional alleles that provide important clues to disease pathophysiology, but large-scale sequencing does not support the idea that lower-frequency variants have a major role in predisposition to type 2 diabetes.
Effect of size on structural, optical and magnetic properties of SnO2 nanoparticles
NASA Astrophysics Data System (ADS)
Thamarai Selvi, E.; Meenakshi Sundar, S.
2017-07-01
Tin oxide (SnO2) nanostructures were synthesized by a microwave oven-assisted solvothermal method, both with and without a cetyl trimethyl ammonium bromide (CTAB) capping agent. XRD confirmed the pure rutile-type tetragonal phase of SnO2 for both uncapped and capped samples. The presence of functional groups was analyzed by Fourier transform infrared spectroscopy. Scanning electron microscopy revealed the morphology of the samples, and transmission electron microscopy images showed the size of the SnO2 nanostructures. The surface defect-related g factor of the SnO2 nanoparticles, obtained using fluorescence spectroscopy, is also reported. For both uncapped and capped samples, the UV-visible spectrum shows a blue shift in the absorption edge due to the quantum confinement effect. Defect-related bands were identified by electron paramagnetic resonance (EPR) spectroscopy, and the magnetic properties were studied using a vibrating sample magnetometer (VSM). A high magnetic moment of 0.023 emu g-1 at room temperature was observed for the uncapped SnO2 nanoparticles. Capping with CTAB enhanced the saturation magnetic moment to 0.081 emu g-1 by altering the electronic configuration at the surface.
Insights in groundwater organic matter from Liquid Chromatography-Organic Carbon Detection
NASA Astrophysics Data System (ADS)
Rutlidge, H.; Oudone, P.; McDonough, L.; Andersen, M. S.; Baker, A.; Meredith, K.; O'Carroll, D. M.
2017-12-01
Understanding the processes that control the concentration and characteristics of organic matter in groundwater has important implications for the terrestrial global carbon budget. Liquid Chromatography - Organic Carbon Detection (LC-OCD) is a size-exclusion based chromatography technique that separates the organic carbon into molecular weight size fractions of biopolymers, humic substances, building blocks (degradation products of humic substances), low molecular weight acids and low molecular weight neutrals. Groundwater and surface water samples were collected from a range of locations in Australia representing different surface soil, land cover, recharge type and hydrological properties. At one site hyporheic zone samples were also collected from beneath a stream. The results showed a general decrease in the aromaticity and molecular weight indices going from surface water, hyporheic downwelling and groundwater samples. The aquifer substrate also affected the organic composition. For example, groundwater samples collected from a zone of fractured rock showed a relative decrease in the proportion of humic substances, suggestive of sorption or degradation of humic substances. This work demonstrates the potential for using LC-OCD in elucidating the processes that control the concentration and characteristics of organic matter in groundwater.
Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C
2018-04-01
Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
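The simulation-based sizing approach described above can be sketched generically: estimate power at a candidate sample size by simulating many trials, then search for the smallest n that reaches the target power. The sketch below uses a simple two-group comparison with an assumed effect size and standard deviation; it is illustrative only and does not reproduce the authors' dose-scale bioequivalence model, and all function names and parameter values are invented.

```python
import numpy as np
from scipy import stats

def simulated_power(n, effect, sd, alpha=0.05, n_sims=2000, seed=None):
    """Estimate the power of a two-sample t-test at per-group size n
    by simulating many trials under the assumed effect size."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n)
        treated = rng.normal(effect, sd, n)  # shifted by the assumed effect
        if stats.ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sims

def smallest_n(effect, sd, target_power=0.80):
    """Search for the smallest per-group n whose simulated power
    reaches the target (coarse grid for brevity)."""
    for n in range(5, 505, 5):
        if simulated_power(n, effect, sd, seed=0) >= target_power:
            return n
    return None
```

For example, `smallest_n(effect=1.0, sd=1.0)` scans n = 5, 10, 15, ... until the simulated power first reaches 80%. The same loop structure extends to more realistic trial models by replacing the data-generating step.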
Zhang, Gang; Liang, Zhaohui; Yin, Jian; Fu, Wenbin; Li, Guo-Zheng
2013-01-01
Chronic neck pain is a common disorder in modern society. Acupuncture has long been administered as an alternative therapy for chronic pain, with its effectiveness supported by the latest clinical evidence. However, potential differences in effectiveness across syndrome types remain in question because of the limits of sample size and statistical methods. We applied machine learning methods in an attempt to solve this problem. Through a multi-objective sorting of subjective measurements, outstanding samples are selected to form the base of our kernel-oriented model. By calculating similarities between the sample of interest and the base samples, we make full use of the information contained in the known samples, which is especially effective for a small sample set. To tackle the parameter-selection problem in similarity learning, we propose an ensemble of learners with slightly different parameter settings to obtain a stronger learner. Experimental results on a real data set show that, compared with some well-known previous methods, the proposed algorithm is capable of discovering the underlying differences among syndrome types and is feasible for predicting the effective tendency in large-sample clinical trials.
Dynamic properties of cluster glass in La0.25Ca0.75MnO3 nanoparticles
NASA Astrophysics Data System (ADS)
Huang, X. H.; Ding, J. F.; Jiang, Z. L.; Yin, Y. W.; Yu, Q. X.; Li, X. G.
2009-10-01
The dynamic magnetic properties of cluster glass in La0.25Ca0.75MnO3 nanoparticles with average particle sizes ranging from 40 to 1000 nm have been investigated by measuring the frequency and dc magnetic field (H) dependences of the ac susceptibility. The frequency-dependent Tf, the freezing temperature of the ferromagnetic clusters determined by the peak in the real part of the ac susceptibility χ' versus T curve with H = 0, is fit to a power law. The relaxation time constant τ0 decreases as the particle size increases from 40 to 350 nm, which indicates a decrease in the size of the clusters at the surface of the nanoparticle. The relationship between H and Tf(H) deviates from the De Almeida-Thouless-type phase boundary at relatively high fields for the samples with sizes from 40 to 350 nm. Moreover, for the samples with particle sizes of 40 and 100 nm, τ0 increases with increasing H, which indicates an increasing cluster size and may be ascribed to the competition between the influence of H and the local anisotropy field in the shell spins. These results provide new insight into the behavior of the cluster glass state in nanosized antiferromagnetic charge-ordered perovskite manganites.
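A power-law fit of the frequency-dependent freezing temperature, as mentioned above, commonly uses the critical slowing-down form tau = tau0 * (Tf/Tg - 1)^(-z*nu) with tau = 1/(2*pi*f). The sketch below fits synthetic data with Tg held fixed, which makes the relation linear in log-log space; this is a simplification (in practice Tg is also a fit parameter), and all numerical values are invented for illustration.

```python
import numpy as np

# Critical slowing-down law for the cluster-glass freezing temperature:
#   tau = tau0 * (Tf/Tg - 1)^(-z*nu),  with tau = 1/(2*pi*f).
# With Tg fixed, taking logs gives a straight line:
#   ln(tau) = ln(tau0) - z*nu * ln(Tf/Tg - 1)

tau0, Tg, znu = 1e-9, 150.0, 6.0            # assumed "true" parameters
freqs = np.array([10.0, 100.0, 1e3, 1e4])   # ac susceptibility frequencies (Hz)
tau = 1.0 / (2.0 * np.pi * freqs)
Tf = Tg * (1.0 + (tau / tau0) ** (-1.0 / znu))  # synthetic Tf(f) data

x = np.log(Tf / Tg - 1.0)
y = np.log(tau)
slope, intercept = np.polyfit(x, y, 1)      # linear fit in log-log space
znu_fit, tau0_fit = -slope, np.exp(intercept)
```

On noise-free synthetic data the fit recovers z*nu and tau0 exactly; with measured Tf(f) data, the same regression gives the relaxation exponent and attempt time discussed in the abstract.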
Millimeter-Wave Absorption as a Quality Control Tool for M-Type Hexaferrite Nanopowders
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCloy, John S.; Korolev, Konstantin A.; Crum, Jarrod V.
2013-01-01
Millimeter wave (MMW) absorption measurements have been conducted on commercial samples of large (micrometer-sized) and small (nanometer-sized) particles of BaFe12O19 and SrFe12O19 using a quasi-optical MMW spectrometer and a series of backward wave oscillators encompassing the 30-120 GHz range. Effective anisotropy of the particles calculated from the resonant absorption frequency indicates lower overall anisotropy in the nanoparticles. Due to their high magnetocrystalline anisotropy, both BaFe12O19 and SrFe12O19 are expected to have spin resonances in the 45-55 GHz range. Several of the sampled BaFe12O19 powders did not have MMW absorptions, so they were further investigated by DC magnetization and x-ray diffraction to assess magnetic behavior and structure. The samples with absent MMW absorption contained primarily iron oxides, suggesting that MMW absorption could be used for quality control in hexaferrite powder manufacture.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
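The mechanism behind the reported finding, a significance filter that lets small studies into the literature only when their observed effects are large, can be reproduced in a short simulation. The sketch below is illustrative only (it is not the authors' analysis), and every numerical choice in it is invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.3                    # one common underlying effect for all "studies"
published_d, published_n = [], []

for _ in range(5000):                # simulate 5000 two-group studies
    n = int(rng.integers(10, 200))   # per-group sample size
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    p = stats.ttest_ind(treated, control).pvalue
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    d = (treated.mean() - control.mean()) / pooled_sd  # observed Cohen's d
    if p < 0.05:                     # significance filter: only "publish" p < .05
        published_d.append(d)
        published_n.append(n)

r, _ = stats.pearsonr(published_d, published_n)
# Small published studies need large observed effects to reach p < .05, so the
# "published" literature shows a negative effect size-sample size correlation.
```

Even though every simulated study shares the same true effect, the correlation r among published studies comes out clearly negative, mirroring the pattern the abstract interprets as publication bias.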
Evaluation of blast furnace slag as basal media for eelgrass bed.
Hizon-Fradejas, Amelia B; Nakano, Yoichi; Nakai, Satoshi; Nishijima, Wataru; Okada, Mitsumasa
2009-07-30
Two types of blast furnace slag (BFS), granulated slag (GS) and air-cooled slag (ACS), were evaluated as basal media for an eelgrass bed. The evaluation compared the BFS samples with natural eelgrass sediment (NES) in terms of physico-chemical characteristics and then investigated the growth of eelgrass in both BFS and NES. In terms of particle size, both BFS samples were within the range acceptable for growing eelgrass. However, compared with NES, the ACS had low silt-clay content and both BFS samples lacked organic matter. The growth experiment showed that eelgrass can grow in both types of BFS, although growth rates in the BFS samples, as shown by leaf elongation, were slower than in NES. The likely reasons for the stunted growth in BFS were assumed to be the lack of organic matter and the release of possible toxins from the BFS. Reducing the sulfide content of the BFS samples did not result in enhanced growth; although sulfide release was eliminated, the release of Zn was greater than before treatment, and its concentration reached alarming levels.
Metabolic profiling of Arabidopsis thaliana epidermal cells
Ebert, Berit; Zöller, Daniela; Erban, Alexander; Fehrle, Ines; Hartmann, Jürgen; Niehl, Annette; Kopka, Joachim; Fisahn, Joachim
2010-01-01
Metabolic phenotyping at cellular resolution may be considered one of the challenges in current plant physiology. A method is described which enables the cell type-specific metabolic analysis of epidermal cell types in Arabidopsis thaliana pavement, basal, and trichome cells. To achieve the required high spatial resolution, single cell sampling using microcapillaries was combined with routine gas chromatography-time of flight-mass spectrometry (GC-TOF-MS) based metabolite profiling. The identification and relative quantification of 117 mostly primary metabolites has been demonstrated. The majority, namely 90 compounds, were accessible without analytical background correction. Analyses were performed using cell type-specific pools of 200 microsampled individual cells. Moreover, among these identified metabolites, 38 exhibited differential pool sizes in trichomes, basal or pavement cells. The application of an independent component analysis confirmed the cell type-specific metabolic phenotypes. Significant pool size changes between individual cells were detectable within several classes of metabolites, namely amino acids, fatty acids and alcohols, alkanes, lipids, N-compounds, organic acids and polyhydroxy acids, polyols, sugars, sugar conjugates and phenylpropanoids. It is demonstrated here that the combination of microsampling and GC-MS based metabolite profiling provides a method to investigate the cellular metabolism of fully differentiated plant cell types in vivo. PMID:20150518
An ergonomic evaluation comparing desktop, notebook, and subnotebook computers.
Szeto, Grace P; Lee, Raymond
2002-04-01
To evaluate and compare the postures and movements of the cervical and upper thoracic spine, typing performance, and workstation ergonomic factors when using desktop, notebook, and subnotebook computers. Repeated-measures design. A motion analysis laboratory with an electromagnetic tracking device. A convenience sample of 21 university students between the ages of 20 and 24 years with no history of neck or shoulder discomfort. Each subject performed a standardized typing task using each of the 3 computers. Measurements during the typing task were taken at set intervals. The cervical and thoracic spine adopted a more flexed posture when the smaller computers were used. There were significantly greater neck movements when using the desktop computer than when using the notebook and subnotebook computers. The viewing distances adopted by the subjects decreased as computer size decreased. Typing performance and subjective ratings of difficulty in using the keyboards also differed significantly among the 3 types of computers. Computer users need to consider spinal posture and the potential risk of developing musculoskeletal discomfort when choosing computers. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
NASA Astrophysics Data System (ADS)
Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.
1993-05-01
Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Here we report a study of sample heterogeneity in 9 samples of Apollo 15 olivine normative basalt (ONB) that exhibit a range of average grain sizes from coarse to fine. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.
Re-use of pilot data and interim analysis of pivotal data in MRMC studies: a simulation study
NASA Astrophysics Data System (ADS)
Chen, Weijie; Samuelson, Frank; Sahiner, Berkman; Petrick, Nicholas
2017-03-01
Novel medical imaging devices are often evaluated with multi-reader multi-case (MRMC) studies in which radiologists read images of patient cases for a specified clinical task (e.g., cancer detection). A pilot study is often used to measure the effect size and variance parameters that are necessary for sizing a pivotal study (including sizing readers, non-diseased and diseased cases). Due to the practical difficulty of collecting patient cases or recruiting clinical readers, some investigators attempt to include the pilot data as part of their pivotal study. In other situations, some investigators attempt to perform an interim analysis of their pivotal study data based upon which the sample sizes may be re-estimated. Re-use of the pilot data or interim analyses of the pivotal data may inflate the type I error of the pivotal study. In this work, we use the Roe and Metz model to simulate MRMC data under the null hypothesis (i.e., two devices have equal diagnostic performance) and investigate the type I error rate for several practical designs involving re-use of pilot data or interim analysis of pivotal data. Our preliminary simulation results indicate that, under the simulation conditions we investigated, the inflation of type I error is none or only marginal for some design strategies (e.g., re-use of patient data without re-using readers, and size re-estimation without using the effect-size estimated in the interim analysis). Upon further verifications, these are potentially useful design methods in that they may help make a study less burdensome and have a better chance to succeed without substantial loss of the statistical rigor.
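The type I error check at the heart of such simulation studies can be sketched simply: generate data under the null hypothesis (two devices with equal performance), run the hypothesis test many times, and confirm that the rejection rate stays near the nominal alpha. The sketch below uses i.i.d. per-reader performance differences rather than the full Roe and Metz correlation structure, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def type1_error_rate(n_readers=8, alpha=0.05, n_trials=2000, seed=0):
    """Simulate per-reader performance differences between two devices
    with identical diagnostic performance (the null hypothesis) and
    count how often a one-sample t-test falsely rejects."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        # per-reader difference in figure of merit; zero mean under the null
        diffs = rng.normal(0.0, 0.05, n_readers)
        if stats.ttest_1samp(diffs, 0.0).pvalue < alpha:
            rejections += 1
    return rejections / n_trials

rate = type1_error_rate()  # should sit near the nominal alpha of 0.05
```

A design modification (such as re-using pilot readers or re-estimating sample size from an interim effect estimate) would be inserted into the per-trial logic; any resulting excess of `rate` over alpha is the inflation the abstract investigates.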
Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.S.
2000-08-23
A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization, the sample was diluted to approximately 5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation for the three fractions analyzed indicates that the analytical results are relatively self-consistent for major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible because data for diluted samples of 241-AN-103 whole tank composites were unavailable. However, analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably well with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determining how well the current results represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.
Measuring firm size distribution with semi-nonparametric densities
NASA Astrophysics Data System (ADS)
Cortés, Lina M.; Mora-Valencia, Andrés; Perote, Javier
2017-11-01
In this article, we propose a new methodology based on a (log) semi-nonparametric (log-SNP) distribution that nests the lognormal and enables better fits in the upper tail of the distribution through the introduction of new parameters. We test the performance of the lognormal and log-SNP distributions capturing firm size, measured through a sample of US firms in 2004-2015. Taking different levels of aggregation by type of economic activity, our study shows that the log-SNP provides a better fit of the firm size distribution. We also formally introduce the multivariate log-SNP distribution, which encompasses the multivariate lognormal, to analyze the estimation of the joint distribution of the value of the firm's assets and sales. The results suggest that sales are a better firm size measure, as indicated by other studies in the literature.
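The motivation for heavier-tailed alternatives to the lognormal can be seen in a quick experiment: fit a plain lognormal to data whose upper tail is heavier than lognormal and compare tail probabilities. The mixture below is synthetic and purely illustrative; the paper's log-SNP expansion itself is not implemented here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# synthetic "firm sizes": a lognormal body mixed with a heavier Pareto tail
sizes = np.concatenate([
    rng.lognormal(mean=3.0, sigma=1.0, size=9000),
    (rng.pareto(1.5, size=1000) + 1.0) * 200.0,
])

# maximum-likelihood fit of a plain lognormal (location fixed at 0)
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)

# compare empirical vs fitted exceedance probability at a large threshold
threshold = np.quantile(sizes, 0.99)
empirical_tail = np.mean(sizes > threshold)  # about 0.01 by construction
fitted_tail = stats.lognorm.sf(threshold, shape, loc=loc, scale=scale)
# the plain lognormal understates the mass in the heavy upper tail
```

Here `fitted_tail` comes out well below `empirical_tail`: the lognormal assigns too little probability to the largest firms, which is exactly the upper-tail deficiency the log-SNP's extra parameters are designed to correct.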
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
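For context, the classic two-occasion baseline that such methods improve upon is the Lincoln-Petersen abundance estimator, shown below in Chapman's bias-corrected form; the authors' age-augmented approach via logistic regression is not reproduced here, and the numbers in the example are invented.

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected form of the classic two-occasion
    Lincoln-Petersen abundance estimator: n1 animals marked on the
    first occasion, n2 captured on the second, m2 of them marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# invented example: 200 marked, 150 captured later, 30 of those recaptured
estimate = lincoln_petersen(200, 150, 30)  # roughly 978 animals
```

The estimator's logic (the marked fraction of the second sample estimates the marked fraction of the population) is what age information refines: known ages constrain which animals could have entered or left the population between occasions.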
Methodological reporting of randomized clinical trials in respiratory research in 2010.
Lu, Yi; Yao, Qiuju; Gu, Jie; Shen, Ce
2013-09-01
Although randomized controlled trials (RCTs) are considered the highest level of evidence, they are also subject to bias, due to a lack of adequately reported randomization, and therefore the reporting should be as explicit as possible for readers to determine the significance of the contents. We evaluated the methodological quality of RCTs in respiratory research in high-ranking clinical journals, published in 2010. We assessed the methodological quality, including generation of the allocation sequence, allocation concealment, double-blinding, sample-size calculation, intention-to-treat analysis, flow diagrams, number of medical centers involved, diseases, funding sources, types of interventions, trial registration, number of times the papers have been cited, journal impact factor, journal type, and journal endorsement of the CONSORT (Consolidated Standards of Reporting Trials) rules, in RCTs published in 12 top-ranking clinical respiratory journals and 5 top-ranking general medical journals. We included 176 trials, of which 93 (53%) reported adequate generation of the allocation sequence, 66 (38%) reported adequate allocation concealment, 79 (45%) were double-blind, 123 (70%) reported adequate sample-size calculation, 88 (50%) reported intention-to-treat analysis, and 122 (69%) included a flow diagram. Multivariate logistic regression analysis revealed that journal impact factor ≥ 5 was the only variable that significantly influenced adequate allocation sequence generation. Trial registration and journal impact factor ≥ 5 significantly influenced adequate allocation concealment. Medical interventions, trial registration, and journal endorsement of the CONSORT statement influenced adequate double-blinding. Publication in one of the general medical journals influenced adequate sample-size calculation. The methodological quality of RCTs in respiratory research needs improvement. Stricter enforcement of the CONSORT statement should enhance the quality of RCTs.
NASA Astrophysics Data System (ADS)
Brisset, Julie; Colwell, Joshua; Dove, Adrienne; Maukonen, Doug
2017-07-01
In an effort to better understand the early stages of planet formation, we have developed a 1.5U payload that flew on the International Space Station (ISS) in the NanoRacks NanoLab facility between September 2014 and March 2016. This payload, named NanoRocks, ran a particle collision experiment under long-term microgravity conditions. The objectives of the experiment were (a) to observe collisions between mm-sized particles at relative velocities of < 1 cm/s and (b) to study the formation and disruption of particle clusters for different particle types and collision velocities. Four types of particles were used: mm-sized acrylic, glass, and copper beads and 0.75 mm-sized JSC-1 lunar regolith simulant grains. The particles were placed in sample cells carved out of an aluminum tray. This tray was attached to one side of the payload casing with three springs. Every 60 s, the tray was agitated, and the resulting collisions between the particles in the sample cells were recorded by the experiment camera. During the 18 months the payload stayed on ISS, we obtained 158 videos, thus recording a great number of collisions. The average particle velocities in the sample cells after each shaking event were around 1 cm/s. After shaking stopped, the inter-particle collisions damped the particle kinetic energy in less than 20 s, reducing the average particle velocity to below 1 mm/s, and eventually slowing them to below our detection threshold. As the particle velocity decreased, we observed the transition from bouncing to sticking collisions. We recorded the formation of particle clusters at the end of each experiment run. This paper describes the design and performance of the NanoRocks ISS payload.
NASA Astrophysics Data System (ADS)
Healy, David A.; O'Connor, David J.; Burke, Aoife M.; Sodeau, John R.
2012-12-01
A bioaerosol-sensing instrument referred to as WIBS-4, designed to continuously monitor ambient bioaerosols on-line, has been used to record a multiparameter “signature” from each of a number of Primary Biological Aerosol Particle (PBAP) samples found in air. These signatures were obtained in a controlled laboratory environment and are based on the size, asymmetry (“shape”) and auto-fluorescence of the particles. Fifteen samples from two separate taxonomic ranks (kingdoms), Plantae (×8) and Fungi (×7), were individually introduced to the WIBS-4 for measurement, along with two non-fluorescing chemical solids, common salt and chalk. Over 2000 individual-particle measurements were recorded for each sample type, and the ability of the WIBS spectroscopic technique to distinguish between chemicals, pollen and fungal spore material was examined by identifying individual PBAP signatures. The results show that WIBS-4 could be a very useful analytical tool for distinguishing between natural airborne PBAP samples, such as fungal spores, and may play an important role in detecting and discriminating the toxic fungal spore Aspergillus fumigatus from others in real time. If the sizing range of the commercial instrument were extended and the instrument were permitted to operate simultaneously in its two sizing ranges, pollen and spores could potentially be discriminated from one another. The data also suggest that the gain sensitivity of the detector would have to be reduced by a factor of >5 to routinely record in-range fluorescence measurements for pollen samples.
NASA Astrophysics Data System (ADS)
Foote, L. C.; Scheu, B.; kennedy, B.; Gravley, D.; Dingwell, D. B.
2011-12-01
Phreatic and hydrothermal eruptions, the most common type of eruption on Earth, frequently lead to magmatic eruptions. They often occur with little or no warning, representing a significant hazard. These eruptions occur over a range of temperatures and pressures, and within widely differing rock types. Additionally, they may be triggered by earthquakes or landslides. Regardless of the trigger, they occur when hydrothermal/supercritical fluid rapidly flashes to steam due either to heating or to decompression. Despite the frequency of these eruptions, previous studies have largely focused either on the physical characteristics of the eruptions or on experimental modelling of the trigger processes, with very few combining the two. Here, a new experimental procedure has been developed to model phreatic fragmentation, based on the shock-tube experiments on magmatic fragmentation introduced by Alidibirov & Dingwell (1996). This technique uses water-saturated samples, producing fragmentation from a combination of argon gas overpressure and steam flashing within the vesicles. By integrating measurements of physical characteristics such as porosity, permeability and mineralogy into the analysis of the results of these experiments, a model of phreatic fragmentation is proposed to aid future hazard modelling. The phreatic explosion crater forming Lake Okaro, within the Taupo Volcanic Zone of New Zealand, was used as a case study. The eruption was triggered within the Rangitaiki Ignimbrite, which served as the sample material for these experiments. In order to evaluate the effects of alteration, both original, unaltered material and hydrothermally altered samples were analysed. As fragmentation is driven by gas overpressure and steam expansion within vesicles, porosity plays a critical role; for these samples the average porosity values are 24% and 40%, respectively.
Experimental conditions were chosen primarily to reflect the conditions at the study location, but also to study the effect of water saturation on the fragmentation behavior. Experiments were therefore run at both room temperature and 300°C, and at pressures from 4 to 15 MPa. Pressure sensors were used to record the speed of fragmentation, and the fragments were recovered in order to determine grain-size distributions. First analyses of the fragmentation speed reveal no significant difference between dry and saturated samples (14-42 m/s, depending on the applied energy). In contrast, the grain-size analysis shows a clear shift to smaller grain sizes for saturated samples (independent of pressure or sample type), possibly reflecting the more efficient conversion of energy involved in phreatic eruptions, most likely in combination with a strength reduction of the samples due to water-weakening effects. We provide herewith a first parameterisation of the conditions for phreatic and hydrothermal eruptions and offer an explanation for the reduction in grain size associated with phreatic eruptions.
Scalability of transport parameters with pore sizes in isodense disordered media
NASA Astrophysics Data System (ADS)
Reginald, S. William; Schmitt, V.; Vallée, R. A. L.
2014-09-01
We study light multiple scattering in complex disordered porous materials. High-internal-phase-emulsion-based isodense polystyrene foams are designed. Two types of samples, exhibiting different pore size distributions, are investigated for slab thicknesses varying from L = 1 mm to 10 mm. Optical measurements combining steady-state and time-resolved detection are used to characterize the photon transport parameters. Very interestingly, a clear scalability of the transport mean free path ℓt with the average pore size S is observed, featuring a constant transport velocity of energy in these isodense structures. This study strongly motivates further investigations into the limits of validity of this scalability as the scattering strength of the system increases.
Identification of missing variants by combining multiple analytic pipelines.
Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W
2018-04-16
After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research into complex diseases has shifted to sequencing-based rare variant discovery. This requires large sample sizes for statistical power and raises the question of whether current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline, owing to computational cost, and to assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by two aligners in 50, 100, 200, 500, 1000, and 1952 samples, and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that the number of high-quality variants missed by a single pipeline increased with sample size. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low-frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach.
Identification of the complete variant set from sequencing data is a prerequisite for genetic association analyses. The current practice of calling genetic variants from sequencing data with a single bioinformatics pipeline is no longer adequate for increasingly large projects: the number and percentage of variants that pass quality filters but are missed by the one-pipeline approach increase rapidly with sample size.
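The rescue effect of combining pipelines can be pictured as simple set arithmetic over variant calls. A minimal sketch, assuming variants are keyed by (chromosome, position, ref, alt); the call sets below are toy examples, not ADSP data:

```python
# Sketch: quantify variants "rescued" by adding a second pipeline.
# Variant keys and call sets are hypothetical illustrations.

def rescued_fraction(primary, secondary):
    """Fraction of the combined call set missed by the primary pipeline alone."""
    combined = primary | secondary
    rescued = combined - primary
    return len(rescued) / len(combined)

# Toy call sets from two hypothetical aligner/caller combinations
pipeline_a = {("chr1", 100, "A", "G"), ("chr1", 250, "T", "C"), ("chr2", 70, "G", "A")}
pipeline_b = {("chr1", 100, "A", "G"), ("chr2", 70, "G", "A"), ("chr3", 15, "C", "T")}

print(rescued_fraction(pipeline_a, pipeline_b))  # 1 of 4 variants -> 0.25
```

The same union-minus-primary arithmetic, applied per sample size, is what produces the 30% and 56% rescue figures quoted above.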
Giant Planets around FGK Stars Probably Form through Core Accretion
NASA Astrophysics Data System (ADS)
Wang, Wei; Wang, Liang; Li, Xiang; Chen, Yuqin; Zhao, Gang
2018-06-01
We present a statistical study of the planet–metallicity (P–M) correlation, comparing 744 stars with candidate planets (SWPs) in the Kepler field that have been observed with LAMOST against a sample of distance-independent “twin” control stars in the Kepler field with no planet reported (CKSNPs). With well-defined and carefully selected large samples, we find for the first time a turnoff P–M correlation in Δ[Fe/H](SWPs − CKSNPs), which on average increases from ∼0.00 ± 0.03 dex to 0.06 ± 0.03 dex to 0.12 ± 0.03 dex for stars with Earth-, Neptune-, and Jupiter-sized planets successively, and then declines to ∼−0.01 ± 0.03 dex for more massive planets or brown dwarfs. Moreover, the percentage of systems with positive Δ[Fe/H] shows the same turnoff pattern. We also find that FG-type stars follow this general trend, but K-type stars differ: moderate metal enhancement (∼0.1-0.2 dex) is observed for K-type stars with planets of radii between 2 and 4 R⊕ compared to the CKSNPs, indicating that much higher metallicities are required for super-Earths and Neptune-sized planets to form around K-type stars. We point out that the P–M correlation is actually metallicity-dependent: the correlation is positive at solar and supersolar metallicities and negative at subsolar metallicities. No steady increase of Δ[Fe/H] with planet size is observed for rocky planets, excluding the pollution scenario as a major mechanism for the P–M correlation. All these clues suggest that giant planets probably form differently from rocky planets and from more massive planets/brown dwarfs, that the core accretion scenario is highly favored, and that high metallicity is a prerequisite for massive planets to form.
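Offsets such as Δ[Fe/H] ± 0.03 dex are mean differences between a planet-host sample and its control sample, with an uncertainty estimate. A minimal bootstrap sketch of that comparison, using made-up [Fe/H] values rather than the LAMOST data:

```python
import random

def bootstrap_mean_offset(hosts, controls, n_boot=2000, seed=42):
    """Mean [Fe/H] offset (hosts - controls) with a bootstrap standard error."""
    rng = random.Random(seed)
    point = sum(hosts) / len(hosts) - sum(controls) / len(controls)
    boots = []
    for _ in range(n_boot):
        h = [rng.choice(hosts) for _ in hosts]       # resample with replacement
        c = [rng.choice(controls) for _ in controls]
        boots.append(sum(h) / len(h) - sum(c) / len(c))
    mean_b = sum(boots) / n_boot
    se = (sum((b - mean_b) ** 2 for b in boots) / (n_boot - 1)) ** 0.5
    return point, se

# Hypothetical [Fe/H] values (dex) for giant-planet hosts vs. matched controls
hosts = [0.15, 0.05, 0.20, 0.10, 0.08, 0.18, 0.12, 0.02]
ctrl = [0.02, -0.05, 0.05, 0.00, -0.02, 0.08, 0.01, -0.04]
offset, err = bootstrap_mean_offset(hosts, ctrl)
print(f"Delta[Fe/H] = {offset:.2f} +/- {err:.2f} dex")
```

Real samples of hundreds of stars, as in the study, would shrink the standard error to the ±0.03 dex level quoted.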
Kuesap, Jiraporn; Na-Bangchang, Kesara
2018-04-01
Malaria is one of the most important public health problems in tropical areas of the globe. Several factors are associated with susceptibility to malaria and disease severity, including innate host factors such as blood group, hemoglobinopathy, and heme oxygenase-1 (HO-1) polymorphisms. This study was carried out to investigate the association of ABO blood group, thalassemia type, and HO-1 polymorphisms with malaria. Malarial blood samples were collected from patients along the Thai-Myanmar border. Determination of ABO blood group, thalassemia variants, and HO-1 polymorphisms was performed using the agglutination test, low-pressure liquid chromatography, and polymerase chain reaction, respectively. Plasmodium vivax was the major malaria species infecting the study samples. The distribution of ABO blood types in the malaria-infected samples was similar to that in healthy subjects, with blood type O being the most prevalent. The association between blood group A and a decreased risk of severe malaria was significant. Six thalassemia types (30%) were detected: hemoglobin E (HbE), β-thalassemia, α-thalassemia 1, α-thalassemia 2, HbE with α-thalassemia 2, and β-thalassemia with α-thalassemia 2. Malaria-infected samples without thalassemia showed a significantly higher risk of severe malaria. The prevalences of the HO-1 polymorphisms S/S, S/L, and L/L were 25%, 62%, and 13%, respectively. Further study with a larger sample size is required to confirm the impact of these three host genetic factors in malaria patients.
Notes for Brazil sampling frame evaluation trip
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Hicks, D. R. (Compiler)
1981-01-01
Field notes describing a trip conducted in Brazil are presented. The trip was conducted to evaluate a sample frame developed by the USDA Economic and Statistics Service using LANDSAT full-frame images, for the eventual purpose of cropland production estimation with LANDSAT by the Foreign Commodity Production Forecasting Project of the AgRISTARS program. Six areas were analyzed on the basis of land use, cropland in corn and soybean, field size, and soil type. The analysis indicated generally successful use of LANDSAT images for remote large-area land-use stratification.
Research and development of a luminol-carbon monoxide flow system
NASA Technical Reports Server (NTRS)
Thomas, R. R.
1977-01-01
Adaptation of the luminol-carbon monoxide injection system to a flow-type system is reported. Analysis of actual wastewater samples revealed that bacteria can be associated with particles greater than 10 microns in size in samples such as mixed liquor. Research into the luminol-reactive oxidation state indicates that oxidized iron porphyrins, cytochrome c in particular, produce more luminol chemiluminescence than the reduced forms. A correlation exists between the extent of porphyrin oxidation and the relative chemiluminescence. In addition, the porphyrin nucleus is apparently destroyed under the current chemiluminescent reaction conditions.
Morphology of the porous silicon obtained by electrochemical anodization method
NASA Astrophysics Data System (ADS)
Bertel H, S. D.; Dussán C, A.; Diaz P, J. M.
2018-04-01
In this report, the dependence of porous silicon on the synthesis parameters, and the correlation of those parameters with the optical and morphological properties, is studied. P-type crystalline silicon samples with <1 0 0> orientation were prepared by electrochemical anodization and characterized using SEM in order to follow the evolution of the pore morphology. It was observed that the porosity and thickness of the samples increased with increasing solution concentration, and a high pore density (70%) with pore sizes between 40 nm and 1.5 μm was obtained.
Sharma, Rakesh
2010-07-21
Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported for human skin samples in different age groups. Excised human skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. The skin MRI images were processed to characterize different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo, T2-weighted 3D spin echo, and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickness and hair follicle size were measured using MRM. Optimized TE and TR parameters and multicontrast enhancement produced better MRI visibility of the different skin components. Within high-MR-signal regions near the custom coil, MRI images with short echo times were comparable with digitized histological sections for the skin structures of the epidermis, dermis and hair follicles in six of the nine specimens (67%). Skin percentage tissue composition; measurements of the epidermis, dermis, sebaceous glands and hair follicle size; and skin NMR peaks were signatures of skin type. Image processing determined the dimensionality of the skin tissue components and the skin type. Ex vivo MRI images and histopathology of the skin may be used to measure skin structure, and skin NMR peaks combined with image processing may serve as a tool for determining skin type and composition.
NASA Astrophysics Data System (ADS)
Sharma, Rakesh
2010-07-01
Application and testing of a procedure to evaluate transferability of habitat suitability criteria
Thomas, Jeff A.; Bovee, Ken D.
1993-01-01
A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided chi-square test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection of optimal habitat over usable habitat and of suitable habitat over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of Type I and Type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure include the adoption of a systematic or randomized sampling design and direct measurement of the microhabitat variables. With these modifications, the procedure is economical, simple, and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology is encouraged.
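The occupied-versus-unoccupied comparison behind the transferability test can be sketched as a 2×2 chi-square statistic. The counts and the df = 1 one-sided critical value of 2.706 (α = 0.05) below are illustrative assumptions, not the study's data:

```python
def chi_square_2x2(occ_opt, unocc_opt, occ_use, unocc_use):
    """Pearson chi-square for a 2x2 table of occupied/unoccupied cell
    counts in optimal vs. (merely) usable habitat classes."""
    table = [[occ_opt, unocc_opt], [occ_use, unocc_use]]
    n = sum(sum(row) for row in table)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = sum(table[i]) * sum(r[j] for r in table) / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: fish occupy optimal cells disproportionately often.
chi2 = chi_square_2x2(occ_opt=40, unocc_opt=60, occ_use=15, unocc_use=140)
# One-sided test at alpha = 0.05 with 1 df uses a critical value of ~2.706.
print(chi2 > 2.706)  # True -> non-random selection of optimal habitat
```

Repeating this test on random subsamples of decreasing size, as the authors did, shows the Type I and Type II error rates climbing once the counts fall below the thresholds they report.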
Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J
2005-12-01
We assessed the reproducibility of measurements of rectal compliance and sensation in health, in studies conducted at two centres, and estimated the sample sizes necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h, and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using the ascending method of limits, and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) delivered in random order. Results obtained at the two centres differed minimally. The reproducibility of the sensory end points varies with the type of sensation, the pressure level and the method of distension. The pressure threshold for pain, and the sensory ratings for non-painful sensations at 36 and 48 mmHg distension, were the most reproducible at the two centres. Sample size calculations suggested that a crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but that in a two-centre study the differences in observed sensation results are minimal, and the pressure threshold for pain and the sensory ratings at 36-48 mmHg of distension are reproducible.
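A crossover sample-size calculation of this kind can be sketched with the standard normal-approximation formula for a paired comparison, n = ((z₁₋α/2 + z₁₋β)·σd/Δ)². The effect size and SD of paired differences below are assumed values for illustration, not figures from the study:

```python
import math
from statistics import NormalDist

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """n for a paired (crossover) design detecting a mean within-subject
    change `delta`, given the SD of the paired differences `sd_diff`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Assumed: detect a 30% change in a 30 mmHg pain threshold (delta = 9 mmHg),
# SD of paired differences ~ 15 mmHg, 80% power, alpha = 0.05.
print(paired_sample_size(delta=9.0, sd_diff=15.0))  # -> 22
```

With these assumed inputs the formula gives 22, in the same range as the reported 21; the study's own variance estimates would set the exact number.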
Sampling efficacy for the red imported fire ant Solenopsis invicta (Hymenoptera: Formicidae).
Stringer, Lloyd D; Suckling, David Maxwell; Baird, David; Vander Meer, Robert K; Christian, Sheree J; Lester, Philip J
2011-10-01
Cost-effective detection of invasive ant colonies before they establish in new ranges is imperative for the protection of national borders and for reducing their global impact. We examined the sampling efficiency of food baits and pitfall traps (baited and non-baited) in detecting isolated red imported fire ant (Solenopsis invicta Buren) nests in multiple environments in Gainesville, FL. Fire ants demonstrated a significantly higher preference for a mixed protein food type (hot dog or ground meat combined with sweet peanut butter) than for the sugar or water baits offered. Foraging-distance success was a function of colony size, detection trap used, and surveillance duration. Colony gyne number did not influence detection success. Workers from small nests (0- to 15-cm mound diameter) traveled no more than 3 m to a food source, whereas large colonies (>30-cm mound diameter) traveled up to 17 m. Baited pitfall traps performed best at detecting incipient ant colonies, followed by non-baited pitfall traps and then food baits, although food baits performed well at detecting large colonies. These results were used to create an interactive model in Microsoft Excel with which surveillance managers can alter trap type, density, and duration parameters to estimate the probability of detecting S. invicta colonies of specified or unknown sizes. This model will support decision makers who need to balance sampling cost against the risk of failing to detect fire ant colonies.
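The Excel model itself is not given in the abstract, but the trap-density/foraging-distance trade-off it encodes can be sketched with a simple spatial model: if traps are placed at random with density d per m² and a colony is detected when at least one trap falls within its foraging radius r, then P(detect) = 1 − exp(−d·π·r²). The trap density below is hypothetical:

```python
import math

def detection_probability(trap_density_per_m2, foraging_radius_m):
    """P(at least one randomly placed trap lands within the foraging
    radius), assuming Poisson-distributed trap positions."""
    catch_area = math.pi * foraging_radius_m ** 2
    return 1.0 - math.exp(-trap_density_per_m2 * catch_area)

# Large colony (forages up to ~17 m) vs. small colony (~3 m)
# at a hypothetical density of one trap per 100 m^2.
print(f"large: {detection_probability(0.01, 17.0):.2f}")  # nearly certain
print(f"small: {detection_probability(0.01, 3.0):.2f}")   # often missed
```

The same exponential form explains why small, incipient colonies with 3 m foraging ranges demand far denser trap grids than mature colonies.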
Protection of obstetric dimensions in a small-bodied human sample.
Kurki, Helen K
2007-08-01
In human females, the bony pelvis must balance being small (narrow) for efficient bipedal locomotion against being large enough to accommodate a relatively large newborn. It has been shown that, within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether, in a population where small body size is the norm, pelvic geometry (size and shape) on average shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body sizes. The samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra identified skeletal collection (CISC) (n = 40 females, 40 males), and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar across the samples. Univariate and multivariate analyses of raw and Mosimann shape variables indicate that, compared with the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly the posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk.