Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
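A minimal Monte Carlo sketch, not the paper's exact algorithm, of the design it studies: interim data are pooled across arms (blinded), the one-sample variance estimator drives the re-estimation, and the final analysis uses the standard two-sample t-test. The effect size, variance, pilot size, and power target below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def blinded_ssr_trial(delta=0.5, sigma=1.2, n1_per_arm=20, alpha=0.05, power=0.9):
    """One simulated trial with blinded sample size re-estimation."""
    # Internal pilot: n1 subjects per arm, analysed blinded (arms lumped together).
    a = rng.normal(delta, sigma, n1_per_arm)
    b = rng.normal(0.0, sigma, n1_per_arm)
    pooled = np.concatenate([a, b])
    s_lumped = pooled.std(ddof=1)          # one-sample variance estimator, treatment labels ignored

    # Re-estimate the per-arm sample size for a two-sample comparison (normal approximation).
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    n_new = int(np.ceil(2 * ((z_a + z_b) * s_lumped / delta) ** 2))
    n_final = max(n_new, n1_per_arm)       # never drop below the pilot size

    # Second stage, then the standard two-sample t-test on all data.
    a = np.concatenate([a, rng.normal(delta, sigma, n_final - n1_per_arm)])
    b = np.concatenate([b, rng.normal(0.0, sigma, n_final - n1_per_arm)])
    t_stat, p_value = stats.ttest_ind(a, b)
    return n_final, p_value

results = [blinded_ssr_trial() for _ in range(2000)]
print("mean final n per arm:", np.mean([n for n, _ in results]))
print("empirical power     :", np.mean([p < 0.05 for _, p in results]))
```

Running the same simulation with delta = 0 gives the empirical type I error the abstract refers to.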
40 CFR 90.706 - Engine sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... = emission test result for an individual engine. x = mean of emission test results of the actual sample. FEL... test with the last test result from the previous model year and then calculate the required sample size.... Test results used to calculate the variables in the following Sample Size Equation must be final...
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
Sample size and power for cost-effectiveness analysis (part 1).
Glick, Henry A
2011-03-01
Basic sample size and power formulae for cost-effectiveness analysis have been established in the literature. These formulae are reviewed and the similarities and differences between sample size and power for cost-effectiveness analysis and for the analysis of other continuous variables such as changes in blood pressure or weight are described. The types of sample size and power tables that are commonly calculated for cost-effectiveness analysis are also described and the impact of varying the assumed parameter values on the resulting sample size and power estimates is discussed. Finally, the way in which the data for these calculations may be derived are discussed.
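The abstract reviews the basic formulae without reproducing them; the sketch below implements the commonly cited per-arm sample size for testing incremental net monetary benefit (a Willan/Briggs-style formula), which may differ in detail from the paper's own exposition. All parameter values are illustrative assumptions.

```python
from math import ceil
from scipy.stats import norm

def nb_sample_size(lam, dE, dC, sd_E, sd_C, rho, alpha=0.05, power=0.8):
    """Per-arm n for testing incremental net monetary benefit > 0.

    lam            : willingness-to-pay per unit of effect
    dE, dC         : expected incremental effect and incremental cost
    sd_E, sd_C, rho: SDs of effect and cost and their correlation (assumed common to both arms)
    """
    var_nb = lam**2 * sd_E**2 + sd_C**2 - 2 * lam * rho * sd_E * sd_C   # Var(lam*E - C)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    inb = lam * dE - dC                                                 # incremental net benefit
    return ceil(2 * z**2 * var_nb / inb**2)

# Illustrative numbers only: WTP 50,000 per QALY, +0.05 QALY, +1,000 incremental cost.
print(nb_sample_size(lam=50_000, dE=0.05, dC=1_000, sd_E=0.3, sd_C=4_000, rho=0.2))
```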
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
NASA Astrophysics Data System (ADS)
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
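This is not the authors' normative inference model, which is Bayesian and decision-theoretic; the toy Monte Carlo below only captures the same trade-off: more draws per option improve the chance of choosing the better payoff distribution but incur sampling cost, so an intermediate sample size is optimal. The two payoff distributions and the cost per draw are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative payoff distributions (values, probabilities); assumptions, not from the paper.
opts = [((4.0, 0.0), (0.8, 0.2)), ((3.0, 32.0), (0.975, 0.025))]
true_means = [np.dot(v, p) for v, p in opts]
cost_per_draw = 0.02

def expected_net_value(n, reps=20_000):
    """Monte Carlo estimate of E[payoff of chosen option] minus sampling cost for n draws per option."""
    draws = [rng.choice(v, size=(reps, n), p=p) for v, p in opts]
    chosen = np.argmax([d.mean(axis=1) for d in draws], axis=0)   # pick the higher sample mean
    return np.mean([true_means[c] for c in chosen]) - cost_per_draw * 2 * n

sizes = range(1, 41)
values = [expected_net_value(n) for n in sizes]
best = max(zip(sizes, values), key=lambda t: t[1])
print("optimal draws per option:", best[0], " expected net value: %.3f" % best[1])
```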
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric
2012-12-01
As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' list of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures-the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8235 - 23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require larger sample size than the 132 persons interviewed to attain acceptable confidence levels for the estimate.
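A sketch of the Chapman-corrected Lincoln-Petersen estimator with a crude parametric bootstrap interval; the counts are invented, and this is neither the study's telefunken matching procedure nor its interval construction.

```python
import numpy as np

def chapman_estimate(n1, n2, m, n_boot=10_000, seed=7):
    """Chapman's bias-corrected Lincoln-Petersen estimator with a bootstrap percentile interval.

    n1 : size of the first (capture) sample
    n2 : size of the second (recapture) sample
    m  : number of individuals matched across both samples
    """
    rng = np.random.default_rng(seed)
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    # Parametric bootstrap: resample the match count as Binomial(n2, m/n2).
    boot = (n1 + 1) * (n2 + 1) / (rng.binomial(n2, m / n2, n_boot) + 1) - 1
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return n_hat, (lo, hi)

# Purely illustrative counts, not the study's data.
print(chapman_estimate(n1=132, n2=400, m=4))
```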
Whole Brain Size and General Mental Ability: A Review
Rushton, J. Philippe; Ankney, C. Davison
2009-01-01
We review the literature on the relation between whole brain size and general mental ability (GMA) both within and between species. Among humans, in 28 samples using brain imaging techniques, the mean brain size/GMA correlation is 0.40 (N = 1,389; p < 10⁻¹⁰); in 59 samples using external head size measures it is 0.20 (N = 63,405; p < 10⁻¹⁰). In 6 samples using the method of correlated vectors to distill g, the general factor of mental ability, the mean r is 0.63. We also describe the brain size/GMA correlations with age, socioeconomic position, sex, and ancestral population groups, which also provide information about brain–behavior relationships. Finally, we examine brain size and mental ability from an evolutionary and behavior genetic perspective. PMID:19283594
Probability of coincidental similarity among the orbits of small bodies - I. Pairing
NASA Astrophysics Data System (ADS)
Jopek, Tadeusz Jan; Bronikowska, Małgorzata
2017-09-01
The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes, and we have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing
2018-05-09
We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
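A generic sketch of a penalized logistic regression classifier evaluated by cross-validated AUC using scikit-learn on synthetic data; it is not the authors' genomic pipeline, in silico mixing, or decision-boundary optimization, and the data dimensions are invented stand-ins for RNA-seq features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for gene-expression features (the real classifier used RNA sequencing data).
X, y = make_classification(n_samples=90, n_features=500, n_informative=25,
                           weights=[0.6, 0.4], random_state=0)

# Penalized (L2) logistic regression; C controls the strength of the penalty.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=0.1, max_iter=5000))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
```

A separate held-out test set, as in the abstract, would then be used to report final sensitivity and specificity at a pre-specified decision boundary.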
Sample size in psychological research over the past 30 years.
Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B
2011-04-01
The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
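The paper derives a new design-effect expression that is not reproduced in the abstract. As a numerical aside, the variance of the treatment effect under the widely used Hussey and Hughes (2007) cross-sectional stepped wedge model can be computed directly for any allocation of clusters to sequences, which is one simple way to compare candidate designs; the sketch below does that under illustrative parameter values and is not the authors' optimised formula.

```python
import numpy as np

def hussey_hughes_var(X, sigma2_e, sigma2_a, n_per_period):
    """Variance of the treatment effect estimator for a cross-sectional stepped wedge trial
    under the Hussey & Hughes (2007) model.

    X            : (I, T) 0/1 array, X[i, j] = 1 if cluster i receives the intervention in period j
    sigma2_e     : individual-level (within cluster-period) variance
    sigma2_a     : between-cluster variance
    n_per_period : individuals observed per cluster per period
    """
    I, T = X.shape
    s2 = sigma2_e / n_per_period          # variance of a cluster-period mean
    U = X.sum()
    W = (X.sum(axis=0) ** 2).sum()
    V = (X.sum(axis=1) ** 2).sum()
    num = I * s2 * (s2 + T * sigma2_a)
    den = (I * U - W) * s2 + (U**2 + I * T * U - T * W - I * V) * sigma2_a
    return num / den

# Illustrative: 6 clusters, 7 periods, standard stepped wedge (one sequence switches per period).
I, T = 6, 7
X = (np.arange(T) >= np.arange(1, I + 1)[:, None]).astype(float)
print(hussey_hughes_var(X, sigma2_e=1.0, sigma2_a=0.02, n_per_period=10))
```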
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
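A toy stage-structured example, not the authors' plant dataset, showing the Jensen's-inequality effect the abstract describes: with small per-stage samples, the average of the estimated lambdas deviates from the true lambda even though each vital-rate estimator is unbiased, and the deviation shrinks as the sample size grows. The vital rates and matrix structure are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative "true" vital rates for a 3-stage population (assumptions).
true_survival = np.array([0.5, 0.7, 0.9])   # stage-specific survival
true_growth   = np.array([0.6, 0.4])        # probability of advancing a stage, given survival
true_fec      = 1.5                          # offspring per adult

def build_matrix(surv, grow, fec):
    A = np.zeros((3, 3))
    A[0, 0] = surv[0] * (1 - grow[0])
    A[1, 0] = surv[0] * grow[0]
    A[1, 1] = surv[1] * (1 - grow[1])
    A[2, 1] = surv[1] * grow[1]
    A[2, 2] = surv[2]
    A[0, 2] = fec
    return A

def lam(A):
    """Population growth rate: dominant eigenvalue of the projection matrix."""
    return np.max(np.real(np.linalg.eigvals(A)))

true_lambda = lam(build_matrix(true_survival, true_growth, true_fec))

for n in (10, 25, 100, 1000):                # individuals sampled per stage
    est = []
    for _ in range(2000):
        surv = rng.binomial(n, true_survival) / n   # estimated survival per stage
        grow = rng.binomial(n, true_growth) / n     # estimated growth probabilities
        fec = rng.poisson(true_fec * n) / n         # estimated fecundity
        est.append(lam(build_matrix(surv, grow, fec)))
    print(f"n={n:4d}  mean lambda={np.mean(est):.3f}  deviation={np.mean(est) - true_lambda:+.4f}")
```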
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
NASA Astrophysics Data System (ADS)
Cyprych, Daria; Piazolo, Sandra; Wilson, Christopher J. L.; Luzin, Vladimir; Prior, David J.
2016-09-01
We utilize in situ neutron diffraction to continuously track the average grain size and crystal preferred orientation (CPO) development in ice, during uniaxial compression of two-phase and pure ice samples. Two-phase samples are composed of ice matrix and 20 vol.% of second phases of two types: (1) rheologically soft, platy graphite, and (2) rigid, rhomb-shaped calcite. The samples were tested at 10 °C below the ice melting point, ambient pressures, and two strain rates (1 × 10⁻⁵ and 2.5 × 10⁻⁶ s⁻¹), to 10 and 20% strain. The final CPO in the ice matrix, where second phases are present, is significantly weaker, and ice grain size is smaller than in an ice-only sample. The microstructural and rheological data point to dislocation creep as the dominant deformation regime. The evolution and final strength of the CPO in ice depend on the efficiency of the recrystallization processes, namely grain boundary migration and nucleation. These processes are markedly influenced by the strength, shape, and grain size of the second phase. In addition, CPO development in ice is further accentuated by strain partitioning into the soft second phase, and the transfer of stress onto the rigid second phase.
Polymorphism in magic-sized Au144(SR)60 clusters
Jensen, Kirsten M. O.; Juhas, Pavol; Tofanelli, Marcus A.; ...
2016-06-14
Ultra-small, magic-sized metal nanoclusters represent an important new class of materials with properties between molecules and particles. However, their small size challenges the conventional methods for structure characterization. We present the structure of ultra-stable Au144(SR)60 magic-sized nanoclusters obtained from atomic pair distribution function analysis of X-ray powder diffraction data. Our study reveals structural polymorphism in these archetypal nanoclusters. In addition to the theoretically predicted icosahedral-cored cluster, we also find samples with a truncated decahedral core structure, with some samples exhibiting a coexistence of both cluster structures. Although the clusters are monodisperse in size, structural diversity is apparent. Finally, the discovery of polymorphism may open up a new dimension in nanoscale engineering.
The final 2008 lead (Pb) national ambient air quality standards (NAAQS) revision maintains Pb in total suspended particulate matter as the indicator. However, the final rule permits the use of low-volume PM10 (particulate matter sampled with a 50% cut-point of 10 μm) F...
Khlebtsov, Boris N; Khanadeev, Vitaly A; Khlebtsov, Nikolai G
2008-08-19
The size and concentration of silica cores determine the size and concentration of silica/gold nanoshells in final preparations. Until now, the concentration of silica/gold nanoshells with Stöber silica cores has been evaluated through the material balance assumption. Here, we describe a method for simultaneous determination of the average size and concentration of silica nanospheres from turbidity spectra measured within the 400-600 nm spectral band. As the refractive index of silica nanoparticles is the key input parameter for optical determination of their concentration, we propose an optical method and provide experimental data on a direct determination of the refractive index of silica particles n = 1.475 ± 0.005. Finally, we exemplify our method by determining the particle size and concentration for 10 samples and compare the results with transmission electron microscopy (TEM), atomic force microscopy (AFM), and dynamic light scattering data.
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that are sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example fixing the sample size of the control group, leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
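A simplified two-arm illustration of the phenomenon the abstract quantifies (the paper itself studies many-to-one comparisons with treatment selection): when the second-stage sample size is chosen from unblinded interim data and the final analysis uses the naive fixed-design z-test, the type I error can exceed its nominal level. The adaptation rule, cap, and sample sizes below are invented, and the simulation only gives a rough Monte Carlo estimate, not the paper's worst-case bound.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
alpha, n1, reps = 0.025, 50, 20_000       # one-sided level and per-arm stage-1 size (illustrative)
z_a = norm.ppf(1 - alpha)

rejections = 0
for _ in range(reps):
    # Stage 1 under the null hypothesis (no treatment effect), two arms, unit variance.
    x1, y1 = rng.normal(0, 1, n1), rng.normal(0, 1, n1)
    diff1 = x1.mean() - y1.mean()
    # Unblinded re-estimation: aim for ~80% conditional power at the *observed* effect.
    delta_hat = max(diff1, 0.05)                           # guard against tiny or negative estimates
    n_target = 2 * ((z_a + norm.ppf(0.8)) / delta_hat) ** 2
    n2 = int(np.clip(np.ceil(n_target - n1), 1, 20 * n1))  # cap the stage-2 size
    x2, y2 = rng.normal(0, 1, n2), rng.normal(0, 1, n2)
    # Naive final z-test pooling both stages as if the design had been fixed in advance.
    n = n1 + n2
    z = (np.concatenate([x1, x2]).mean() - np.concatenate([y1, y2]).mean()) / np.sqrt(2 / n)
    rejections += z > z_a

print("empirical type I error: %.4f (nominal %.3f)" % (rejections / reps, alpha))
```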
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to attribute newly recruited patients to different treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
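For a continuous outcome with known arm standard deviations, the allocation that minimizes the variance of the difference-in-means statistic is the classical Neyman allocation; the sketch below shows that fixed-parameter benchmark, which an adaptive randomization rate would be updated toward, and is not the authors' Bayesian algorithm. The standard deviations and total size are illustrative assumptions.

```python
def neyman_allocation(sd1, sd2):
    """Share of patients allocated to arm 1 that minimizes Var(xbar1 - xbar2) for a fixed total n."""
    return sd1 / (sd1 + sd2)

def var_diff(n_total, share_arm1, sd1, sd2):
    """Variance of the difference in arm means for a given allocation share."""
    n1 = share_arm1 * n_total
    return sd1**2 / n1 + sd2**2 / (n_total - n1)

p_opt = neyman_allocation(sd1=2.0, sd2=1.0)          # illustrative SDs
print("optimal share to arm 1:", p_opt)
print("variance at optimum   :", var_diff(200, p_opt, 2.0, 1.0))
print("variance at 1:1       :", var_diff(200, 0.5, 2.0, 1.0))
```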
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.
McShane, Blakeley B; Böckenholt, Ulf
2014-11-01
Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
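A sketch of the core idea: rather than computing power at a single fixed effect size, average the power function over a distribution of plausible effect sizes. This is not the authors' exact formula; the normal heterogeneity model, the two-sample z-test approximation, and the numbers are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def fixed_power(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sample z-test at a fixed standardized effect size d."""
    return norm.sf(norm.ppf(1 - alpha / 2) - d * np.sqrt(n_per_arm / 2))

def averaged_power(d0, tau, n_per_arm, alpha=0.05, n_grid=4001):
    """Power averaged over between-study variation in the effect size, d ~ N(d0, tau^2)."""
    d = np.linspace(d0 - 5 * tau, d0 + 5 * tau, n_grid)
    w = norm.pdf(d, d0, tau)
    w /= w.sum()
    return np.sum(w * fixed_power(d, n_per_arm, alpha))

n = 64   # per arm, roughly sized for 80% power at d = 0.5 by the standard fixed-effect formula
print("power ignoring heterogeneity:", round(fixed_power(0.5, n), 3))
print("power with tau = 0.2        :", round(averaged_power(0.5, 0.2, n), 3))
```

The averaged power falls below the nominal 80%, which is the sense in which standard power formulae are optimistic for replication planning.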
78 FR 27406 - Agency Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... groups will be conducted with up to eight participants in each for a total sample size of 32. The second... determine eligibility for the pilot study to recruit a sample of 500 participants (50 from each clinical... participate in an in-depth, qualitative telephone interview for a total of 100 interviews. Finally, up to...
Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A
2017-03-20
Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Bonacho dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A
2017-01-01
Background Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives To review the consent, recruitment and retention rates for single and multicentre randomised control trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction Information was extracted, relating to the trial characteristics, sample size, recruitment and retention by two independent reviewers. Main outcome measures Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). Results This review identified 151 individually RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%). Conclusions There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. PMID:28320800
Pneumatic System for Concentration of Micrometer-Size Lunar Soil
NASA Technical Reports Server (NTRS)
McKay, David; Cooper, Bonnie
2012-01-01
A report describes a size-sorting method to separate and concentrate micrometer- size dust from a broad size range of particles without using sieves, fluids, or other processes that may modify the composition or the surface properties of the dust. The system consists of four processing units connected in series by tubing. Samples of dry particulates such as lunar soil are introduced into the first unit, a fluidized bed. The flow of introduced nitrogen fluidizes the particulates and preferentially moves the finer grain sizes on to the next unit, a flat plate impactor, followed by a cyclone separator, followed by a Nuclepore polycarbonate filter to collect the dust. By varying the gas flow rate and the sizes of various orifices in the system, the size of the final and intermediate particles can be varied to provide the desired products. The dust can be collected from the filter. In addition, electron microscope grids can be placed on the Nuclepore filter for direct sampling followed by electron microscope characterization of the dust without further handling.
Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian
2015-03-01
The paper describes a new procedure for the determination of boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography with flame ionization detection. This study revealed that in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields similar results for the determination of boiling point distribution compared with simulated distillation and novel empty column gas chromatography. The developed procedure using size-exclusion chromatography has a substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which a standard high-temperature simulated distillation would have to be used. In this case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and an insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
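A sketch of an O'Brien-Fleming-type (Lan-DeMets) spending function and a brute-force Monte Carlo way of converting spent alpha into look-wise critical values. Production work would use recursive numerical integration in dedicated group-sequential software, and these frequentist boundaries are only the comparator mentioned in the abstract, not the proposed Bayesian design; the number of looks and simulation settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type (Lan-DeMets) spending function: cumulative alpha spent by information time t."""
    return 2.0 * norm.sf(norm.ppf(1 - alpha / 2) / np.sqrt(t))

def sequential_boundaries(info_times, alpha=0.05, n_sim=400_000, seed=5):
    """Monte Carlo approximation of one-sided critical values that spend alpha per obf_spending."""
    rng = np.random.default_rng(seed)
    t = np.asarray(info_times, dtype=float)
    # Joint null distribution of the look-wise z-statistics via independent Brownian increments.
    increments = rng.normal(size=(n_sim, t.size)) * np.sqrt(np.diff(np.concatenate(([0.0], t))))
    z = np.cumsum(increments, axis=1) / np.sqrt(t)
    crossed = np.zeros(n_sim, dtype=bool)
    spent, crit = 0.0, []
    for k, tk in enumerate(t):
        new_alpha = obf_spending(tk, alpha) - spent      # alpha available to spend at this look
        zk = np.where(crossed, -np.inf, z[:, k])         # paths that already stopped cannot cross again
        c = np.quantile(zk, 1.0 - new_alpha)
        crossed |= z[:, k] >= c
        spent += new_alpha
        crit.append(round(float(c), 3))
    return crit

print(sequential_boundaries([0.25, 0.5, 0.75, 1.0]))     # four equally spaced looks
```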
Rechtin, Jack; Torresani, Elisa; Ivanov, Eugene; Olevsky, Eugene
2018-01-01
Spark Plasma Sintering (SPS) is used to fabricate Titanium-Niobium-Zirconium-Tantalum alloy (TNZT) powder-based bioimplant components with controllable porosity. The developed densification maps show the effects of final SPS temperature, pressure, holding time, and initial particle size on final sample relative density. Correlations between the final sample density and mechanical properties of the fabricated TNZT components are also investigated and microstructural analysis of the processed material is conducted. A densification model is proposed and used to calculate the TNZT alloy creep activation energy. The obtained experimental data can be utilized for the optimized fabrication of TNZT components with specific microstructural and mechanical properties suitable for biomedical applications. PMID:29364165
Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data
Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.
2015-01-01
An internal pilot design uses interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions) which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-05
... groups will be conducted with up to eight participants in each for a total sample size of 32. The second... determine eligibility for the pilot study to recruit a sample of 500 participants (50 from each clinical... to participate in an in-depth, qualitative telephone interview for a total of 100 interviews. Finally...
A System Approach to Navy Medical Education and Training. Appendix 18. Radiation Technician.
1974-08-31
attrition was forecast to approximate twenty percent, final sample and sub-sample sizes were adjusted accordingly. Stratified random sampling...
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and probability of sampling a unit occupied by the rare species. Efficiency measures the precision of population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species was approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.
Copper Decoration of Carbon Nanotubes and High Resolution Electron Microscopy
NASA Astrophysics Data System (ADS)
Probst, Camille
A new process of decorating carbon nanotubes with copper was developed for the fabrication of aluminum-nanotube nanocomposites. The process consists of three stages: oxidation, activation and electroless copper plating on the nanotubes. The oxidation step was required to create chemical functional groups on the nanotubes, essential for the activation step. Then, catalytic nanoparticles of tin-palladium were deposited on the tubes. Finally, during the electroless copper plating, copper particles with a size between 20 and 60 nm were uniformly deposited on the nanotube surface. The reproducibility of the process was shown by using another type of carbon nanotube. The fabrication of aluminum-nanotube nanocomposites was tested by aluminum vacuum infiltration. Although the infiltration of carbon nanotubes did not produce the expected results, an interesting electron microscopy sample was discovered during the process development: the activated carbon nanotubes. Secondly, scanning transmission electron microscopy (STEM) imaging in SEM was analysed. The images were obtained with a new detector on the field emission scanning electron microscope (Hitachi S-4700). Various parameters were analysed with the use of two different samples: the activated carbon nanotubes (previously obtained) and gold-palladium nanodeposits. Influences of working distance, accelerating voltage or sample used on the spatial resolution of images obtained with SMART (Scanning Microscope Assessment and Resolution Testing) were analysed. An optimum working distance for the best spatial resolution related to the sample analysed was found for imaging in STEM mode. Finally, the relation between probe size and spatial resolution of backscattered electron (BSE) images was studied. An image synthesis method was developed to generate the BSE images from backscattered electron coefficients obtained with the CASINO software. Spatial resolution of images was determined using SMART. The analysis showed that using a probe size smaller than the size of the observed object (sample features) does not improve the spatial resolution. In addition, the effects of the accelerating voltage, the current intensity and the sample geometry and composition were analysed.
Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran
2010-05-01
Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
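A toy raster illustration of the point-sampling idea for Shannon's diversity, showing how the plug-in estimator behaves as the number of sample points grows; the simulated landscape is spatially unstructured (unlike NILS data), no bias adjustment is applied, and all sizes and class proportions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative categorical "landscape": a 500 x 500 raster with 7 land-cover classes.
landscape = rng.choice(7, size=(500, 500),
                       p=[0.35, 0.25, 0.15, 0.10, 0.07, 0.05, 0.03])

def shannon(proportions):
    """Shannon's diversity index from class proportions (zero classes are ignored)."""
    p = proportions[proportions > 0]
    return -np.sum(p * np.log(p))

# "True" diversity from the complete map, as in wall-to-wall mapping.
true_H = shannon(np.bincount(landscape.ravel(), minlength=7) / landscape.size)

for n_points in (25, 100, 400, 1600):
    estimates = []
    for _ in range(1000):
        rows = rng.integers(0, landscape.shape[0], n_points)
        cols = rng.integers(0, landscape.shape[1], n_points)
        counts = np.bincount(landscape[rows, cols], minlength=7)
        estimates.append(shannon(counts / n_points))
    print(f"n={n_points:5d}  mean H={np.mean(estimates):.3f}  (true H={true_H:.3f})")
```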
Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland
2017-01-01
For drug delivery, characterization of liposomes regarding size, particle number concentrations, occurrence of low-sized liposome artefacts and drug encapsulation are of importance to understand their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow-Field Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior detection. However, the bimodal size distribution could not be detected due to method inherent limitations. In contrast, cryo transmission electron microscopy corroborated nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control as low-sized sample components possibly impact cell viability. PMID:27639623
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population e.g. child-care centre, village in Africa, or a cattle herd are sampled or if the sample size is large relative to population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Austin, N. J.; Evans, B.; Dresen, G. H.; Rybacki, E.
2009-12-01
Deformed rocks commonly consist of several mineral phases, each with dramatically different mechanical properties. In both naturally and experimentally deformed rocks, deformation mechanisms and, in turn, strength, are commonly investigated by analyzing microstructural elements such as crystallographic preferred orientation (CPO) and recrystallized grain size. Here, we investigated the effect of variations in the volume fraction and the geometry of rigid second phases on the strength and evolution of CPO and grain size of synthetic calcite rocks. Experiments using triaxial compression and torsional loading were conducted at 1023 K and equivalent strain rates between ~2e-6 and 1e-3 s-1. The second phases in these synthetic assemblages are rigid carbon spheres or splinters with known particle size distributions and geometries, which are chemically inert at our experimental conditions. Under hydrostatic conditions, the addition of as little as 1 vol.% carbon spheres poisons normal grain growth. Shape is also important: for an equivalent volume fraction and grain dimension, carbon splinters result in a finer calcite grain size than carbon spheres. In samples deformed at “high” strain rates, or which have “large” mean free spacing of the pinning phase, the final recrystallized grain size is well explained by competing grain growth and grain size reduction processes, where the grain-size reduction rate is determined by the rate that mechanical work is done during deformation. In these samples, the final grain size is finer than in samples heat-treated hydrostatically for equivalent durations. The addition of 1 vol.% spheres to calcite has little effect on either the strength or CPO development. Adding 10 vol.% splinters increases the strength at low strains and low strain rates, but has little effect on the strength at high strains and/or high strain rates, compared to pure samples. A CPO similar to that in pure samples is observed, although the intensity is reduced in samples containing 10 vol.% splinters. When 10 vol.% spheres are added to calcite, the strength of the aggregate is reduced, and a distinct and strong CPO develops. Viscoplastic self consistent calculations were used to model the evolution of CPO in these materials, and these suggest a variation in the activity of the various slip systems within pure samples and those containing 10 vol.% spheres. The applicability of these laboratory observations has been tested with field-based observations made in the Morcles Nappe (Swiss Helvetic Alps). In the Morcles Nappe, calcite grain size becomes progressively finer as the thrust contact is approached, and there is a concomitant increase in CPO intensity, with the strongest CPO’s in the finest-grained, quartz-rich limestones, nearest the thrust contact, which are interpreted to have been deformed to the highest strains. Thus, our laboratory results may be used to provide insight into the distribution of strain observed in natural shear zones.
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of a quality characteristic that follows a normal distribution with unknown mean and variance. The optimal parameters of the proposed plan are obtained from a nonlinear optimization model that satisfies the given producer's risk and consumer's risk simultaneously while minimizing the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally, an example is given to illustrate the proposed plan.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
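The conditional-versus-marginal point can be illustrated with a toy two-stage design in which the trial stops at the interim look if the running mean exceeds a cut-off. The stopping rule, sample sizes, and normal outcomes below are illustrative choices, not the exact setting of the paper; the simulation simply shows how the sample average can look strongly biased conditional on the realized sample size while its marginal bias is small.

```python
import numpy as np

rng = np.random.default_rng(2015)
mu, sigma = 0.0, 1.0        # true mean and SD of the outcome
n1, n2 = 20, 40             # interim and maximum sample sizes
cutoff = 0.3                # stop at the interim look if the running mean exceeds this
reps = 100_000

means = np.empty(reps)
stopped_early = np.empty(reps, dtype=bool)
for i in range(reps):
    stage1 = rng.normal(mu, sigma, n1)
    if stage1.mean() > cutoff:                     # deterministic stopping rule
        means[i], stopped_early[i] = stage1.mean(), True
    else:
        full = np.concatenate([stage1, rng.normal(mu, sigma, n2 - n1)])
        means[i], stopped_early[i] = full.mean(), False

# Conditional on the realized sample size, the average looks biased...
print("E[sample mean | stopped at n1] =", means[stopped_early].mean())
print("E[sample mean | ran to n2]     =", means[~stopped_early].mean())
# ...but marginally it is close to the true mean mu = 0 (the small residual
# bias is the finite-sample effect discussed above and vanishes asymptotically).
print("marginal E[sample mean]        =", means.mean())
```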
NASA Astrophysics Data System (ADS)
Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten
2017-04-01
Several Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by a targeted variation of solid content and temperature during dispersion, resulting in a different level of stress intensity and thus a different final agglomerate size during the process. The suspension viscosity was used to estimate the stress energy in laminar shear flow. Agglomerate size measurements are performed by dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is carried out with at least seven samples per agglomerate size and test method. The variation of solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on the Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with decreasing agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscaled agglomerates or particles, which lead to embrittlement of the composite material, nanoscaled agglomerates or particles allow the composite elongation to remain nearly at the level of the base material. The observed effect holds for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.
A novel measure of effect size for mediation analysis.
Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken
2018-06-01
Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Chaudhuri, S. K.; Ghosh, Manoranjan; Das, D.; Raychaudhuri, A. K.
2010-09-01
The present article describes the size-induced changes in the structural arrangement of intrinsic defects present in chemically synthesized ZnO nanoparticles of various sizes. Routine X-ray diffraction and transmission electron microscopy have been performed to determine the shapes and sizes of the nanocrystalline ZnO samples. Detailed studies using positron annihilation spectroscopy reveal the presence of zinc vacancies, whereas analysis of the photoluminescence results indicates the signature of charged oxygen vacancies. The size-induced changes in the positron parameters as well as in the photoluminescence properties show contrasting, non-monotonic trends as the size varies from 4 to 85 nm. Small spherical particles below a critical size (~23 nm) acquire a more positive surface charge due to the higher occupancy of the doubly charged oxygen vacancy, as compared to the bigger nanostructures, where the singly charged oxygen vacancy predominates. This electronic alteration is seen to trigger another interesting phenomenon, described as positron confinement inside the nanoparticles. Finally, based on all the results, a model of the structural arrangement of the intrinsic defects in the present samples is proposed.
Koltun, G.F.; Helsel, Dennis R.
1986-01-01
Identical stream-bottom material samples, when fractioned to the same size by different techniques, may contain significantly different trace-metal concentrations. Precision of techniques also may differ, which could affect the ability to discriminate between size-fractioned bottom-material samples having different metal concentrations. Bottom-material samples fractioned to less than 0.020 millimeters by means of three common techniques (air elutriation, sieving, and settling) were analyzed for six trace metals to determine whether the technique used to obtain the desired particle-size fraction affects the ability to discriminate between bottom materials having different trace-metal concentrations. In addition, this study attempts to assess whether median trace-metal concentrations in size-fractioned bottom materials of identical origin differ depending on the size-fractioning technique used. Finally, this study evaluates the efficiency of the three size-fractioning techniques in terms of time, expense, and effort involved. Bottom-material samples were collected at two sites in northeastern Ohio: one is located in an undeveloped forested basin, and the other is located in a basin having a mixture of industrial and surface-mining land uses. The sites were selected for their close physical proximity, similar contributing drainage areas, and the likelihood that trace-metal concentrations in the bottom materials would be significantly different. Statistically significant differences in the concentrations of trace metals were detected between bottom-material samples collected at the two sites when the samples had been size-fractioned by means of air elutriation or sieving. Samples that had been size-fractioned by settling in native water did not differ measurably in any of the six trace metals analyzed. Results of multiple comparison tests suggest that differences related to size-fractioning technique were evident in median copper, lead, and iron concentrations. Technique-related differences in copper concentrations most likely resulted from contamination of air-elutriated samples by a feed tip on the elutriator apparatus. No technique-related differences were observed in chromium, manganese, or zinc concentrations. Although air elutriation was the most expensive size-fractioning technique investigated, samples fractioned by this technique appeared to provide a superior level of discrimination between metal concentrations present in the bottom materials of the two sites. Sieving was an adequate lower-cost but more labor-intensive alternative.
Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe
2017-10-17
Near-infrared spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of application in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground, and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC), and normalization by closure (NCL), as well as first and second derivatives (DV1 and DV2), were applied, for a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 in partial least squares (PLS) modelling. There were no significant differences among the predictions obtained with the three data sets (at the 0.05 level). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has a minor influence on the quality of near-infrared spectroscopy (NIR) predictions, laying the groundwork for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
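The pretreatment-plus-PLS chain described above can be sketched with standard scientific Python tools. The spectra below are synthetic stand-ins, and the Savitzky-Golay window, derivative order, and number of PLS components are placeholders rather than the settings used in the study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# --- Synthetic stand-in for the 46 soil spectra (placeholder data only) ---
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 46, 700
toc = rng.uniform(0.5, 3.0, n_samples)                      # "true" TOC (%)
wl = np.linspace(0, 1, n_wavelengths)
X = (toc[:, None] * np.exp(-((wl - 0.6) ** 2) / 0.02)        # TOC-related band
     + rng.uniform(0.8, 1.2, (n_samples, 1))                 # multiplicative scatter
     + 0.01 * rng.standard_normal((n_samples, n_wavelengths)))

# --- Pretreatment: SNV followed by a Savitzky-Golay second derivative ---
X_pre = savgol_filter(snv(X), window_length=15, polyorder=2, deriv=2, axis=1)

# --- PLS calibration assessed by cross-validation ---
pls = PLSRegression(n_components=5)
toc_cv = cross_val_predict(pls, X_pre, toc, cv=10).ravel()
rmsecv = np.sqrt(np.mean((toc_cv - toc) ** 2))
print(f"RMSECV on the synthetic data: {rmsecv:.3f} % TOC")
```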
Urey, Carlos; Weiss, Victor U; Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland
2016-11-20
For drug delivery, characterization of liposomes regarding size, particle number concentration, occurrence of low-sized liposome artefacts, and drug encapsulation is of importance for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow Field-Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated the nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control, as low-sized sample components possibly impact cell viability. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Glicerina, Virginia; Balestra, Federica; Dalla Rosa, Marco; Bergenhstål, Bjorn; Tornberg, Eva; Romani, Santina
2014-07-01
The effect of different process stages on the microstructural and visual properties of dark chocolate was studied. Samples were obtained at each phase of the manufacturing process: mixing, prerefining, refining, conching, and tempering. A laser light diffraction technique and environmental scanning electron microscopy (ESEM) were used to study the particle size distribution (PSD) and to analyze modifications in the network structure. Moreover, colorimetric analyses (L*, h°, and C*) were performed on all samples. Each stage strongly influenced the microstructural characteristics of the product, above all the PSD. The Sauter diameter (D[3,2]) decreased from 5.44 μm in the mixed chocolate sample to 3.83 μm in the refined one. ESEM analysis also revealed wide variations in the network structure of samples during the process, with an increase in aggregation and in contact points between particles from the mixing to the refining stage. Samples obtained from the conching and tempering stages were characterized by small particle sizes and a less dense aggregate structure. From the color results, samples with the finest particles, having a larger specific surface area and the smallest diameter, appeared lighter and more saturated than those with coarse particles. The final quality of food dispersions is affected by network and particle characteristics. Deep knowledge of the influence of each single processing stage on chocolate microstructural properties is useful in order to improve or modify final product characteristics. ESEM and laser diffraction are suitable techniques to study changes in chocolate microstructure. © 2014 Institute of Food Technologists®
Lui, Kung-Jong; Chang, Kuang-Chao
2015-01-01
In studies of screening accuracy, we commonly encounter data in which, for ethical reasons, a confirmatory procedure is administered only to subjects who screen positive. We focus our discussion on simultaneously testing the equality of sensitivity and specificity between two binary screening tests when only subjects who screen positive receive the confirmatory procedure. We develop four asymptotic test procedures and one exact test procedure. We derive a sample size calculation formula for a desired power of detecting a difference at a given nominal α-level. We employ Monte Carlo simulation to evaluate the performance of these test procedures and the accuracy of the sample size calculation formula developed here in a variety of situations. Finally, we use data obtained from a study of the prostate-specific-antigen test and digital rectal examination test on 949 Black men to illustrate the practical use of these test procedures and the sample size calculation formula.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
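The predictive calculation can be sketched by drawing the true treatment difference from the elicited prior, simulating a trial of a candidate size, and checking whether the resulting interval still contains the point of clinical equivalence (the 'reserve final judgement' outcome). The prior, outcome standard deviation, and equivalence point below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1986)

def predicted_prob_reserve_judgement(n_per_arm, prior_mean, prior_sd,
                                     sigma, delta_equiv, n_sim=20_000):
    """Predicted probability that the final 95% CI for the treatment
    difference still contains the point of clinical equivalence,
    i.e. that the trial ends with 'reserve final judgement'."""
    z = 1.959963984540054  # two-sided 95% normal quantile
    # Draw the true difference from the prior elicited from clinicians.
    theta = rng.normal(prior_mean, prior_sd, n_sim)
    # Sampling distribution of the observed difference given theta.
    se = sigma * np.sqrt(2.0 / n_per_arm)
    obs = rng.normal(theta, se)
    lower, upper = obs - z * se, obs + z * se
    return np.mean((lower <= delta_equiv) & (delta_equiv <= upper))

# Illustrative values: equivalence point 0, prior centred on a modest benefit.
for n in (50, 100, 200, 400):
    p = predicted_prob_reserve_judgement(n, prior_mean=0.25, prior_sd=0.15,
                                         sigma=1.0, delta_equiv=0.0)
    print(f"n per arm = {n:4d}: predicted P(reserve judgement) = {p:.3f}")
```

Choosing the smallest n that brings this predicted probability below a tolerable level mirrors the recommendation in the abstract.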
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare the different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters, and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of the different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of the different methods through real-life examples.
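As one concrete example of the non-parametric approaches compared, the sketch below computes the Mann-Whitney estimate of the AUC and a Wald interval on the logit scale using a Hanley-McNeil-type variance. This is a common construction for small samples, not necessarily one of the 29 specific methods evaluated in the paper.

```python
import numpy as np
from scipy import stats

def auc_mann_whitney(x_pos, x_neg):
    """Non-parametric AUC estimate: P(X_pos > X_neg) + 0.5 * P(ties)."""
    diff = x_pos[:, None] - x_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def auc_logit_ci(x_pos, x_neg, alpha=0.05):
    """Wald interval on the logit scale with a Hanley-McNeil-type variance."""
    m, n = len(x_pos), len(x_neg)
    auc = auc_mann_whitney(x_pos, x_neg)
    eps = 1.0 / (2 * m * n)
    auc = min(max(auc, eps), 1 - eps)       # guard against degenerate 0/1 estimates
    q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (m - 1) * (q1 - auc**2)
           + (n - 1) * (q2 - auc**2)) / (m * n)
    z = stats.norm.ppf(1 - alpha / 2)
    logit = np.log(auc / (1 - auc))
    se_logit = np.sqrt(var) / (auc * (1 - auc))   # delta method
    lo, hi = logit - z * se_logit, logit + z * se_logit
    return auc, 1 / (1 + np.exp(-lo)), 1 / (1 + np.exp(-hi))

# Small-sample example with 8 diseased and 10 non-diseased marker values.
rng = np.random.default_rng(7)
diseased = rng.normal(1.2, 1.0, 8)
healthy = rng.normal(0.0, 1.0, 10)
print(auc_logit_ci(diseased, healthy))
```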
2011-01-01
To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A ‘pilot study’ is appropriate only when the number of animals used is a tiny fraction of the number that will be invested in the main study, because the data for the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis of any of the data until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
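The warning about adding subjects after an initial test can be checked with a short simulation: under a true null, a fixed-n t-test holds its nominal level, whereas the naive "test, add more animals if not significant, test again" strategy inflates the type I error. The group sizes and number of added animals below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n1, n_extra, reps = 0.05, 10, 10, 20_000
fixed_rejections = naive_rejections = 0

for _ in range(reps):
    # Two groups with identical means: any rejection is a type I error.
    a, b = rng.standard_normal(n1), rng.standard_normal(n1)
    if stats.ttest_ind(a, b).pvalue < alpha:
        fixed_rejections += 1
        naive_rejections += 1
        continue
    # Naive strategy: not significant, so add more animals and test again.
    a2 = np.concatenate([a, rng.standard_normal(n_extra)])
    b2 = np.concatenate([b, rng.standard_normal(n_extra)])
    if stats.ttest_ind(a2, b2).pvalue < alpha:
        naive_rejections += 1

print("fixed-n type I error rate  :", fixed_rejections / reps)   # close to 0.05
print("peek-and-add type I error  :", naive_rejections / reps)   # noticeably above 0.05
```

Properly constructed sequential stopping rules adjust the decision boundaries at each look so that the overall error rate stays at the nominal level.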
NASA Astrophysics Data System (ADS)
Liu, Yangyang; Li, Jiheng; Gao, Xuexu
2017-08-01
Magnetostrictive Fe82Ga4.5Al13.5 sheets with 0.1 at% NbC were prepared from directionally solidified alloys with <0 0 1> preferred orientation. The slabs were hot rolled at 650 °C and warm rolled at 500 °C. Some of the warm-rolled sheets were then given an intermediate anneal at 850 °C for 5 min, while the others were not. After that, all the sheets were cold rolled to a final thickness of ∼0.3 mm. The microstructures, textures, and distributions of second-phase particles in the primary recrystallized samples were investigated. With intermediate annealing, the inhomogeneous microstructure was improved remarkably, and strong Goss ({1 1 0}<0 0 1>) and γ-fiber (<1 1 1>//normal direction [ND]) textures were produced in the primary recrystallized samples. In contrast, Goss grains in the primary recrystallized sample without intermediate annealing were at an evident disadvantage in both size and number. After a final annealing, the final textures and magnetostrictions of samples with and without intermediate annealing were characterized. For samples without intermediate annealing, abnormal growth of {1 1 3} grains occurred and deteriorated the magnetostriction. In contrast, abnormal Goss grain growth went to completion in samples with intermediate annealing and led to a saturation magnetostriction as high as 156 ppm.
Does the choice of nucleotide substitution models matter topologically?
Hoff, Michael; Orf, Stefan; Riehm, Benedikt; Darriba, Diego; Stamatakis, Alexandros
2016-03-24
In the context of a master-level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available an open-source code for testing all 203 possible nucleotide substitution models in the Maximum Likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question of whether model selection matters topologically, that is, whether conducting ML inferences under the optimal model, instead of a standard General Time Reversible model, yields different tree topologies. We also assess to what degree models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. We find that all three factors (by order of impact: nucleotide model selection, information criterion used, sample size definition) can yield substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared to an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences.
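The role of the sample size definition is easiest to see in the penalty terms of AICc and BIC, which depend on n while AIC does not. The sketch below uses placeholder log-likelihoods and parameter counts to show how the two definitions (#sites versus #sites × #taxa) change the penalties; it is not output from the study's datasets.

```python
import math

def aic(lnl, k):
    return -2 * lnl + 2 * k

def aicc(lnl, k, n):
    return aic(lnl, k) + (2 * k * (k + 1)) / (n - k - 1)

def bic(lnl, k, n):
    return -2 * lnl + k * math.log(n)

# Placeholder values for two candidate substitution models on one alignment.
n_sites, n_taxa = 1200, 50
models = {
    # k counts substitution parameters plus 2 * n_taxa - 3 branch lengths.
    "GTR": {"lnl": -35210.4, "k": 8 + 2 * n_taxa - 3},
    "HKY": {"lnl": -35275.9, "k": 4 + 2 * n_taxa - 3},
}

for name, m in models.items():
    for n_label, n in (("#sites", n_sites), ("#sites x #taxa", n_sites * n_taxa)):
        print(f"{name:4s} n = {n_label:15s} "
              f"AICc = {aicc(m['lnl'], m['k'], n):10.1f}  "
              f"BIC = {bic(m['lnl'], m['k'], n):10.1f}")
```

A larger n inflates the BIC penalty and shrinks the AICc correction, which is how the sample size definition can shift which model, and hence which tree, is selected.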
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small-sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π, and show that these results can be used to obtain unbiased estimators, large-sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (© 2016 APA, all rights reserved.)
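The simple estimator referred to above exploits the relation π = Φ(δ) under normality, so that π can be estimated by plugging in the standardized mean difference d. The sketch below pairs that plug-in estimate with a delta-method interval based on the usual large-sample variance of d; the exact distribution and minimum variance unbiased estimators derived in the paper are not reproduced here.

```python
import numpy as np
from scipy import stats

def overlap_pi(treatment, control, alpha=0.05):
    """Estimate pi = P(treatment observation exceeds the control mean)
    via pi_hat = Phi(d), where d is the standardized mean difference."""
    n_t, n_c = len(treatment), len(control)
    s_pooled = np.sqrt(((n_t - 1) * treatment.var(ddof=1)
                        + (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
    d = (treatment.mean() - control.mean()) / s_pooled
    # Large-sample variance of d, then the delta method for Phi(d).
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    se_pi = stats.norm.pdf(d) * np.sqrt(var_d)
    z = stats.norm.ppf(1 - alpha / 2)
    pi_hat = stats.norm.cdf(d)
    return pi_hat, pi_hat - z * se_pi, pi_hat + z * se_pi

rng = np.random.default_rng(3)
print(overlap_pi(rng.normal(0.5, 1, 30), rng.normal(0.0, 1, 30)))
```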
2017-01-01
Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
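A toy simulation along these lines, with invented dynamics and a deliberately coarse sample of cells per habitat, shows how estimation error erodes an underlying correlation of 1 between two perfectly synchronized habitats.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 300  # number of time steps

# Two habitats driven by the same environmental signal, so the true
# population dynamics are perfectly synchronized by construction.
signal = np.sin(np.linspace(0, 6 * np.pi, T)) + 0.2 * rng.standard_normal(T)
true_a = 1000 * np.exp(0.3 * signal)
true_b = 1000 * np.exp(0.3 * signal)

def coarse_estimate(true_total, n_cells=4, cells_in_habitat=400):
    """Estimate the habitat total from counts in a few randomly placed cells.
    Individuals are spatially clumped (gamma-Poisson mixture), so a coarse
    grid yields a noisy estimate of the true total."""
    per_cell_mean = true_total / cells_in_habitat
    clump = rng.gamma(shape=1.0, scale=per_cell_mean[:, None],
                      size=(len(true_total), n_cells))
    counts = rng.poisson(clump)
    return cells_in_habitat * counts.mean(axis=1)

est_a = coarse_estimate(true_a)
est_b = coarse_estimate(true_b)

corr = lambda x, y: np.corrcoef(x, y)[0, 1]
print("correlation of true population sizes   :", round(corr(true_a, true_b), 3))
print("correlation of coarse-grid estimates   :", round(corr(est_a, est_b), 3))
```

With only a handful of cells per habitat the estimated correlation falls well below the true value of 1, which is the loss of apparent synchronization discussed above.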
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
Online submicron particle sizing by dynamic light scattering using autodilution
NASA Technical Reports Server (NTRS)
Nicoli, David F.; Elings, V. B.
1989-01-01
Efficient production of a wide range of commercial products based on submicron colloidal dispersions would benefit from instrumentation for online particle sizing, permitting real-time monitoring and control of the particle size distribution. Recent advances in the technology of dynamic light scattering (DLS), especially improvements in algorithms for inversion of the intensity autocorrelation function, have made it ideally suited to the measurement of simple particle size distributions in the difficult submicron region. Crucial to the success of an online DLS-based instrument is a simple mechanism for automatically sampling and diluting the starting concentrated sample suspension, yielding a final concentration that is optimal for the light scattering measurement. A proprietary method and apparatus was developed for performing this function, designed to be used with a DLS-based particle sizing instrument. A PC/AT computer is used as a smart controller for the valves in the sampler-diluter, as well as an input-output communicator, video display, and data storage device. Quantitative results are presented for a latex suspension and an oil-in-water emulsion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huie, Matthew M.; Marschilok, Amy C.; Takeuchi, Esther S.
2017-04-12
Here, this report describes a synthetic approach to control the crystallite size of silver vanadium phosphorous oxide, Ag0.50VOPO4·1.9H2O, and the impact on electrochemistry in lithium-based batteries. Ag0.50VOPO4·1.9H2O was synthesized using a stirred hydrothermal method over a range of temperatures. X-ray diffraction (XRD) was used to confirm the crystalline phase and crystallite sizes of 11, 22, 38, 40, 49, and 120 nm. Particle shape was plate-like, with edges from <1 micron to >10 microns. Under galvanostatic reduction, the samples with 22 nm crystallites and 880 nm particles produced the highest capacity, ~25% more capacity than the 120 nm sample. Notably, the 11 nm sample delivered reduced capacity and higher resistance, consistent with increased grain boundaries contributing to resistance. Under intermittent pulsing, ohmic resistance decreased with increasing crystallite size from 11 nm to 120 nm, implying that electrical conduction within a crystal is more facile than between crystallites and across grain boundaries. Finally, this systematic study of material dimension shows that crystallite size impacts deliverable capacity as well as cell resistance, where both interparticle and intraparticle transport are important.
Structural properties and gas sensing behavior of sol-gel grown nanostructured zinc oxide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajyaguru, Bhargav; Gadani, Keval; Kansara, S. B.
2016-05-06
In this communication, we report the results of studies on the structural properties and gas sensing behavior of nanostructured ZnO grown using an acetone-precursor-based modified sol-gel technique. The final ZnO product was sintered at different temperatures to vary the crystallite size, and the structural properties were studied using X-ray diffraction (XRD) measurements performed at room temperature. The XRD results indicate the single-phase nature of all the samples, with the crystallite size increasing from 11.53 to 20.96 nm with increasing sintering temperature. The gas sensing behavior has been studied for acetone gas, which indicates that samples sintered at lower temperatures are more capable of sensing acetone; the related mechanism is discussed in light of crystallite size, crystal boundary density, defect mechanism, and possible chemical reactions between gas traces and various oxygen species.
A Longitudinal View of the Relationship Between Social Marginalization and Obesity
NASA Astrophysics Data System (ADS)
Apolloni, Andrea; Marathe, Achla; Pan, Zhengzheng
We use three waves of the Add Health data collected between 1994 and 2002 to conduct a longitudinal study of the relationship between social marginalization and the weight status of adolescents and young adults. Past studies have shown that overweight and obese children are socially marginalized. This research tests (1) whether this is true when we account for the sample size of each group, (2) whether this phenomenon holds over time, and (3) whether it is obesity or social marginalization that comes first in time. Our results show that when the sample size of each group is considered, the share of friendships conforms to the size of the group. This conformity seems to increase over time as the population becomes more obese. Finally, we find that obesity precedes social marginalization, which lends credence to the notion that obesity causes social marginalization and not vice versa.
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to that for unequal cluster sizes. We discuss a commonly used structure in CRTs, the exchangeable structure, and derive a simpler formula for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss. Additionally, we propose an optimal sample size estimation based on the GEE models under a fixed budget, for both known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
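For a continuous outcome under the exchangeable working correlation, a cluster of size m is often taken to contribute information proportional to m / (1 + (m - 1)ρ), which leads to a simple expression for the RE of unequal versus equal cluster sizes. The sketch below uses this standard form with an illustrative cluster-size distribution; it is not the paper's derivation for binary or count outcomes.

```python
import numpy as np

def cluster_information(m, rho):
    """Information contributed by one cluster of size m under an
    exchangeable correlation with intra-cluster correlation rho
    (continuous outcome, identity link)."""
    return m / (1.0 + (m - 1.0) * rho)

def relative_efficiency(cluster_sizes, rho):
    """RE of unequal vs equal cluster sizes: information under the observed
    sizes divided by the information when the same total number of subjects
    is split into equal clusters (equivalently, Var_equal / Var_unequal)."""
    cluster_sizes = np.asarray(cluster_sizes, dtype=float)
    m_bar = cluster_sizes.mean()
    info_unequal = cluster_information(cluster_sizes, rho).sum()
    info_equal = len(cluster_sizes) * cluster_information(m_bar, rho)
    return info_unequal / info_equal

rng = np.random.default_rng(0)
sizes = rng.poisson(lam=30, size=40) + 1          # 40 clusters, mean size ~31
for rho in (0.01, 0.05, 0.2):
    re = relative_efficiency(sizes, rho)
    print(f"rho = {rho:4.2f}: RE = {re:.3f}, "
          f"inflate the sample size by {1 / re:.3f} to compensate")
```

The reciprocal of the RE is the inflation factor behind the adjusted sample size mentioned in the abstract.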
A Structural Equation Model for Predicting Business Student Performance
ERIC Educational Resources Information Center
Pomykalski, James J.; Dion, Paul; Brock, James L.
2008-01-01
In this study, the authors developed a structural equation model that accounted for 79% of the variability of a student's final grade point average by using a sample size of 147 students. The model is based on student grades in 4 foundational business courses: introduction to business, macroeconomics, statistics, and using databases. Educators and…
NASA Technical Reports Server (NTRS)
Stocker, P. J.; Marcus, H. L. (Inventor)
1977-01-01
A drift-compensated and intensity-averaged extensometer for measuring the diameter or other properties of a substantially cylindrical sample based upon the shadow of the sample is described. A beam of laser light is shaped to provide a beam with uniform intensity along an axis normal to the sample. After passing the sample, the portion of the beam not striking the sample is divided by a beam splitter into a reference signal and a measurement signal. Both of these beams are then chopped by a light chopper and fall upon two photodiode detectors. The resulting AC currents are rectified and then divided one into the other, with the final output being proportional to the size of the sample shadow.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
How many stakes are required to measure the mass balance of a glacier?
Fountain, A.G.; Vecchia, A.
1999-01-01
Glacier mass balance is estimated for South Cascade Glacier and Maclure Glacier using a one-dimensional regression of mass balance with altitude as an alternative to the traditional approach of contouring mass balance values. One attractive feature of regression is that it can be applied to sparse data sets where contouring is not possible, and it can provide an objective error for the resulting estimate. Regression methods yielded mass balance values equivalent to those from contouring methods. Examining the effect of the number of mass balance measurements on the final value for the glacier showed that sample sizes as small as five stakes provided reasonable estimates, although the error estimates were greater than for larger sample sizes. Different spatial patterns of measurement locations showed no appreciable influence on the final value as long as different surface altitudes were intermittently sampled over the altitude range of the glacier. Two different regression equations were examined, a quadratic and a piecewise linear spline, and comparison of the results showed little sensitivity to the type of equation. These results point to the dominant effect of the gradient of mass balance with altitude on alpine glaciers compared to transverse variations. The number of mass balance measurements required to determine the glacier balance appears to be scale invariant for small glaciers, and five to ten stakes are sufficient.
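The regression approach can be sketched in a few lines: fit a quadratic balance-altitude curve to a handful of stake measurements and average it over the glacier's area-altitude distribution. The stake values and hypsometry below are invented for illustration, not data from South Cascade or Maclure Glacier.

```python
import numpy as np

# Invented stake data: altitude (m) and point mass balance (m w.e.).
stake_alt = np.array([1650.0, 1750.0, 1850.0, 1950.0, 2050.0])
stake_bal = np.array([-2.1, -1.3, -0.4, 0.3, 0.8])

# Fit a quadratic balance-altitude relation b(z) to the five stakes.
coeffs = np.polyfit(stake_alt, stake_bal, deg=2)
balance_curve = np.poly1d(coeffs)

# Invented hypsometry: glacier area (km^2) in 50 m altitude bands.
band_mid = np.arange(1625.0, 2101.0, 50.0)
band_area = np.array([0.2, 0.5, 0.9, 1.2, 1.0, 0.7, 0.4, 0.2, 0.1, 0.05])

# Area-weighted average of b(z) gives the glacier-wide specific balance.
glacier_balance = np.sum(balance_curve(band_mid) * band_area) / band_area.sum()
print(f"glacier-wide specific balance: {glacier_balance:.2f} m w.e.")
```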
Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise
NASA Astrophysics Data System (ADS)
Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.
2018-06-01
We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend towards equipartition between gravity and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities in the sample do not follow a Larson-like linewidth-size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion (σv,3D)-size (R) diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial values. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by the energy of an isolated homogeneous sphere. However, such excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.
NASA Astrophysics Data System (ADS)
Fattah-alhosseini, Arash; Ansari, Ali Reza; Mazaheri, Yousef; Karimi, Mohsen
2017-02-01
In this study, the electrochemical behavior of commercially pure titanium with both a coarse-grained microstructure (annealed sample with an average grain size of about 45 µm) and a nano-grained microstructure was compared by potentiodynamic polarization, electrochemical impedance spectroscopy (EIS), and Mott-Schottky analysis. Nano-grained Ti, with a typical grain size of about 90 nm, was successfully produced by a six-cycle accumulative roll-bonding process at room temperature. Potentiodynamic polarization plots and impedance measurements revealed that, as a result of grain refinement, the passive behavior of the nano-grained sample was improved compared to that of annealed pure Ti in H2SO4 solutions. Mott-Schottky analysis indicated that the passive films behaved as n-type semiconductors in H2SO4 solutions and that grain refinement did not change the semiconductor type of the passive films. Also, Mott-Schottky analysis showed that the donor densities decreased as the grain size of the samples was reduced. Finally, all electrochemical tests showed that the electrochemical behavior of the nano-grained sample was improved compared to that of annealed pure Ti, mainly due to the formation of a thicker and less defective oxide film.
Zafra, C A; Temprano, J; Tejero, I
2011-07-01
The heavy metal pollution caused by road run-off water constitutes a problem in urban areas. The metallic load associated with road sediment must be determined in order to study its impact on drainage systems and receiving waters, and to improve the design of prevention systems. This paper presents data regarding the sediment collected on road surfaces in the city of Torrelavega (northern Spain) during a period of 65 days (132 samples). Two sample types were collected: vacuum-dried samples and those swept up following vacuuming. The sediment loading (g m⁻²), particle size distribution (63-2800 µm) and heavy metal concentrations were determined. The data showed that the concentration of heavy metals tends to increase with decreasing particle diameter (exponential tendency). The concentrations of Pb, Zn, Cu, Cr, Ni, Cd, Fe, Mn and Co in the size fraction <63 µm were 350, 630, 124, 57, 56, 38, 3231, 374 and 51 mg kg⁻¹, respectively (average traffic density: 3800 vehicles day⁻¹). As the residence time of the sediment increases, the concentration increases, whereas the ratio of the concentrations between the different size fractions decreases. The concentration across the road diminishes as the distance between the roadway and the sampling site increases; as the distance increases, the ratio between size fractions for heavy metal concentrations increases. Finally, the main sources of heavy metals are the particles detached by braking (brake pads) and tyre wear (rubber), and these are associated with particle sizes <125 µm.
Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk
2018-01-01
Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
Zinc-Nucleated D2 and H2 Crystal Formation from Their Liquids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernat, T. P.; Petta, N.; Kozioziemski, B.
2016-09-01
Calorimetric measurements at the University of Rochester Laboratory for Laser Energetics of D2 crystallization from the melt indicate that zinc can act as a heterogeneous nucleation seed with suppressed supercooling. In this paper, we further studied this effect for a variety of zinc substrates using the optical-access cryogenic sample cell at Lawrence Livermore National Laboratory. Small supercoolings are observed, some as low as 5 mK, but results depend on the zinc history and sample preparation. In general, thin samples prepared by physical vapor deposition were not effective in nucleating crystal formation. Larger (several-millimeter) granules showed greater supercooling suppression, depending on surface modification and granule size. Surfaces of these granules are morphologically varied and not uniform. Scanning electron microscope images were not able to correlate any particular surface feature with enhanced nucleation. Finally, application of classical nucleation theory to the observed variation of supercooling level with granule size is consistent with nucleation features with sizes <100 nm and with wetting angles of a few degrees.
Synthesis and characterization of nanocrystalline Co-Fe-Nb-Ta-B alloy
NASA Astrophysics Data System (ADS)
Raanaei, Hossein; Fakhraee, Morteza
2017-09-01
In this research work, the structural and magnetic evolution of a Co57Fe13Nb8Ta4B18 alloy during the mechanical alloying process has been investigated using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy, differential thermal analysis, and vibrating sample magnetometry. It is observed that after 120 h of milling, the crystallite size reaches about 7.8 nm. Structural analyses show that a solid solution of the initial powder mixture forms at 160 h of milling. The coercivity rises up to 70 h of milling and then tends to decrease up to the final stage of the milling process. Thermal analysis of the sample milled for 160 h reveals two endothermic peaks. Characterization of the 160 h milled sample annealed at 427 °C shows crystallite size growth accompanied by an increase in saturation magnetization.
Wang, Guan-Jie; Tian, Li; Fan, Yu-Ming; Qi, Mei-Ling
2013-01-01
A rapid headspace single-drop microextraction gas chromatography-mass spectrometry (SDME-GC-MS) method for the analysis of the volatile compounds in Herba Asari was developed in this study. The extraction solvent, extraction temperature and time, sample amount, and particle size were optimized. A mixed solvent of n-tridecane and butyl acetate (1:1) was finally used for the extraction, with a sample amount of 0.750 g and a 100-mesh particle size, at 70°C for 15 min. Under the determined conditions, the pounded samples of Herba Asari were analysed directly. The results showed that the SDME-GC-MS method was a simple, effective, and inexpensive way to measure the volatile compounds in Herba Asari and could be used for the analysis of volatile compounds in Chinese medicine. PMID:23607049
Optimizing image registration and infarct definition in stroke research.
Harston, George W J; Minks, David; Sheerin, Fintan; Payne, Stephen J; Chappell, Michael; Jezzard, Peter; Jenkinson, Mark; Kennedy, James
2017-03-01
Accurate representation of final infarct volume is essential for assessing the efficacy of stroke interventions in imaging-based studies. This study defines the impact of image registration methods used at different timepoints following stroke, and the implications for infarct definition in stroke research. Patients presenting with acute ischemic stroke were imaged serially using magnetic resonance imaging. Infarct volume was defined manually using four metrics: 24-h b1000 imaging; 1-week and 1-month T2-weighted FLAIR; and automatically using predefined thresholds of ADC at 24 h. Infarct overlap statistics and volumes were compared across timepoints following both rigid body and nonlinear image registration to the presenting MRI. The effect of nonlinear registration on a hypothetical trial sample size was calculated. Thirty-seven patients were included. Nonlinear registration improved infarct overlap statistics and consistency of total infarct volumes across timepoints, and reduced infarct volumes by 4.0 mL (13.1%) and 7.1 mL (18.2%) at 24 h and 1 week, respectively, compared to rigid body registration. Infarct volume at 24 h, defined using a predetermined ADC threshold, was less sensitive to infarction than b1000 imaging. 1-week T2-weighted FLAIR imaging was the most accurate representation of final infarct volume. Nonlinear registration reduced hypothetical trial sample size, independent of infarct volume, by an average of 13%. Nonlinear image registration may offer the opportunity of improving the accuracy of infarct definition in serial imaging studies compared to rigid body registration, helping to overcome the challenges of anatomical distortions at subacute timepoints, and reducing sample size for imaging-based clinical trials.
2004-04-15
Comparison of ground-based (left) and Skylab (right) electron beam welds in pure tantalum (Ta) (10X magnification). Residual vortices left behind in the ground-based sample after the electron beam passed were frozen into the grain structure; these occurred because of the rapid cooling rate at high temperature. Although the thermal characteristics and electron beam travel speeds were comparable for the Skylab sample, no residual vortices were preserved in its grain structure. This may be because the final grain size of the solidified material was smaller in the Skylab sample than in the ground-based sample. The Skylab sample was processed in the M512 Materials Processing Facility (MPF) during the Skylab SL-2 Mission. The Principal Investigator was Richard Poorman.
Johnston, Lisa G; Hakim, Avi J; Dittrich, Samantha; Burnett, Janet; Kim, Evelyn; White, Richard G
2016-08-01
Reporting key details of respondent-driven sampling (RDS) survey implementation and analysis is essential for assessing the quality of RDS surveys. RDS is both a recruitment and an analytic method and, as such, it is important to describe both aspects adequately in publications. We extracted data from peer-reviewed literature published through September 2013 that reported collecting biological specimens using RDS. We identified 151 eligible peer-reviewed articles describing 222 surveys conducted in seven regions throughout the world. Most published surveys reported basic implementation information such as survey city, country, year, population sampled, interview method, and final sample size. However, many surveys did not report essential methodological and analytical information for assessing RDS survey quality, including the number of recruitment sites, the numbers of seeds at the start and end, the maximum number of waves, and whether data were adjusted for network size. Understanding the quality of data collection and analysis in RDS is useful for effectively planning public health service delivery and funding priorities.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
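As a rough illustration of the kind of cost-based threshold selection described above (this is not the authors' estimator; the prevalence, cost ratio, and distribution parameters are all hypothetical), the following sketch finds the cutoff that minimises the expected misclassification cost when the diseased and non-diseased marker values are assumed normal:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Hypothetical marker distributions: non-diseased ~ N(0, 1), diseased ~ N(1.5, 1.2)
mu0, sd0 = 0.0, 1.0
mu1, sd1 = 1.5, 1.2
prevalence = 0.2             # assumed disease prevalence
cost_fp, cost_fn = 1.0, 4.0  # assumed decision costs (false positive, false negative)

def expected_cost(threshold):
    # False-positive rate: non-diseased subjects above the threshold
    fpr = 1.0 - norm.cdf(threshold, mu0, sd0)
    # False-negative rate: diseased subjects below the threshold
    fnr = norm.cdf(threshold, mu1, sd1)
    return cost_fp * fpr * (1 - prevalence) + cost_fn * fnr * prevalence

res = minimize_scalar(expected_cost, bounds=(-3, 5), method="bounded")
print(f"cost-minimising threshold: {res.x:.3f}, expected cost: {res.fun:.4f}")
```

In practice the sampling uncertainty in the estimated distribution parameters would also have to be propagated, for example by bootstrapping, which is the second ingredient of the approach described in the abstract.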
Roustaei, Narges; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf
2018-01-01
In recent years, joint models have been widely used for modeling longitudinal and time-to-event data simultaneously. In this study, we proposed an approach (PA) to study longitudinal and survival outcomes simultaneously in heterogeneous populations. PA relaxes the assumption of conditional independence (CI). We also compared PA with the joint latent class model (JLCM) and a separate approach (SA) for various sample sizes (150, 300, and 600) and different association parameters (0, 0.2, and 0.5). The average bias of parameter estimation (AB-PE), average standard error of parameter estimation (ASE-PE), and coverage probability of the 95% confidence interval (CP) were compared among the three approaches. In most cases, as the sample size increased, AB-PE and ASE-PE decreased for the three approaches, and CP moved closer to the nominal level of 0.95. When there was a considerable association, PA performed better than SA and JLCM in the sense that it had the smallest AB-PE and ASE-PE for the longitudinal submodel for the small and moderate sample sizes. Moreover, JLCM was preferable for the no-association case and the large sample size. Finally, the evaluated approaches were applied to a real HIV/AIDS dataset for validation, and the results were compared.
ERIC Educational Resources Information Center
Rudner, Lawrence
2016-01-01
In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…
ERIC Educational Resources Information Center
Porter, Sallie; Qureshi, Rubab; Caldwell, Barbara Ann; Echevarria, Mercedes; Dubbs, William B.; Sullivan, Margaret W.
2016-01-01
This study used a survey approach to investigate current developmental surveillance and developmental screening practices by pediatric primary care providers in a diverse New Jersey county. A total of 217 providers were contacted with a final sample size of 57 pediatric primary care respondents from 13 different municipalities. Most providers…
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.
Improved capacitance characteristics of electrospun ACFs by pore size control and vanadium catalyst.
Im, Ji Sun; Woo, Sang-Wook; Jung, Min-Jung; Lee, Young-Seak
2008-11-01
Nano-sized carbon fibers were prepared by electrospinning, and their electrochemical properties were investigated as a possible electrode material for an electric double-layer capacitor (EDLC). To improve the electrode capacitance of the EDLC, we implemented a three-step optimization. First, a metal catalyst was introduced into the carbon fibers because of the excellent conductivity of metals. Vanadium pentoxide was used because it could be converted to vanadium for improved conductivity as the pore structure develops during the carbonization step. The vanadium catalyst was well dispersed in the carbon fibers, improving the capacitance of the electrode. Second, pore-size development was manipulated to obtain small mesopore sizes ranging from 2 to 5 nm. Through chemical activation, carbon fibers with controlled pore sizes were prepared with a high specific surface area and pore volume, and their pore structure was investigated using a BET apparatus. Finally, polyacrylonitrile was used as a carbon precursor to enrich the nitrogen content of the final product, because nitrogen is known to improve electrode capacitance. Ultimately, the electrospun activated carbon fibers containing vanadium show improved performance in charge/discharge, cyclic voltammetry, and specific capacitance compared with other samples because of an optimal combination of vanadium, nitrogen, and fixed pore structures.
Dumouchelle, D.H.; De Roche, Jeffrey T.
1991-01-01
Wright-Patterson Air Force Base, in southwestern Ohio, overlies a buried-valley aquifer. The U.S. Geological Survey installed 35 observation wells at 13 sites on the base from fall 1988 through spring 1990. Fourteen of the wells were completed in bedrock; the remaining wells were completed in unconsolidated sediments. Split-spoon and bedrock cores were collected from all of the bedrock wells. Shelby-tube samples were collected from four wells. The wells were drilled by either the cable-tool or rotary method. Data presented in this report include lithologic and natural-gamma logs and, for selected sediment samples, grain-size distributions and permeability. Final well-construction details, such as the total depth of the well, screened interval, and grouting details, also are presented.
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
NASA Astrophysics Data System (ADS)
Lee, Jung-Won; Mehran, Muhammad Taqi; Song, Rak-Hyun; Lee, Seung-Bok; Lee, Jong-Won; Lim, Tak-Hyoung; Park, Seok-Joo; Hong, Jong-Eun; Shim, Joon-Hyung
2017-11-01
We developed oxide-dispersed alloys as interconnect materials for a solid oxide fuel cell by adding La2O3 to SUS430 ferritic steels. For this purpose, we prepared two types of La2O3 with different particle sizes and added different amounts of La2O3 to SUS430 powder. Then, we mixed the powders using a high energy ball mill, so that nano-sized as well as micro-sized oxide particles were able to mix uniformly with the SUS430 powders. After preparing hexahedral green samples using uni-axial and cold isostatic presses, we were finally able to obtain oxide-dispersed alloys having high relative densities after firing at 1,400 °C under hydrogen atmosphere. The nano-sized La2O3 dispersed alloys showed properties superior to those of micro-sized dispersed alloys in terms of long-term stability and thermal cycling. Moreover, we determined the optimum amounts of added La2O3. Finally we were able to develop a new oxide-dispersed alloy showing excellent properties of low area specific resistance (16.23 mΩ cm2) after 1000 h at 800 °C, and no degradation after 10 iterations of thermal cycling under oxidizing atmosphere.
Lakens, Daniël
2013-01-01
Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance for communicating the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or for examining effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow. PMID:24324449
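As a hedged illustration of how an effect size feeds into an a priori power analysis (this is not the article's supplementary spreadsheet, and the group statistics below are invented), the sketch computes Cohen's d for an independent-samples comparison and then asks statsmodels for the per-group sample size needed to detect it:

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical summary statistics from a pilot study
mean1, sd1, n1 = 5.2, 1.1, 20
mean2, sd2, n2 = 4.5, 1.3, 20

# Cohen's d using the pooled standard deviation
pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (mean1 - mean2) / pooled_sd

# Per-group n needed to detect d with alpha = 0.05 and 80% power
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"Cohen's d = {d:.2f}, required n per group = {np.ceil(n_per_group):.0f}")
```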
75 FR 80117 - Methods for Measurement of Filterable PM10
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
...This action promulgates amendments to Methods 201A and 202. The final amendments to Method 201A add a particle-sizing device to allow for sampling of particulate matter with mean aerodynamic diameters less than or equal to 2.5 micrometers (PM2.5 or fine particulate matter). The final amendments to Method 202 revise the sample collection and recovery procedures of the method to reduce the formation of reaction artifacts that could lead to inaccurate measurements of condensable particulate matter. Additionally, the final amendments to Method 202 eliminate most of the hardware and analytical options in the existing method, thereby increasing the precision of the method and improving the consistency in the measurements obtained between source tests performed under different regulatory authorities. This action also announces that EPA is taking no action to affect the already established January 1, 2011 sunset date for the New Source Review (NSR) transition period, during which EPA is not requiring that State NSR programs address condensable particulate matter emissions.
Michaud, S; Levasseur, M; Doucette, G; Cantin, G
2002-10-01
We determined the seasonal distribution of paralytic shellfish toxins (PSTs) and PST producing bacteria in > 15, 5-15, and 0.22-5 microm size fractions in the St Lawrence. We also measured PSTs in a local population of Mytilus edulis. PST concentrations were determined in each size fraction and in laboratory incubations of sub-samples by high performance liquid chromatography (HPLC), including the rigorous elimination of suspected toxin 'imposter' peaks. Mussel toxin levels were determined by mouse bioassay and HPLC. PSTs were detected in all size fractions during the summer sampling season, with 47% of the water column toxin levels associated with particles smaller than Alexandrium tamarense (< 15 microm). Even in the > 15 microm size fraction, we estimated that as much as 92% of PSTs could be associated with particles other than A. tamarense. Our results stress the importance of taking into account the potential presence of PSTs in size fractions other than that containing the known algal producer when attempting to model shellfish intoxication, especially during years of low cell abundance. Finally, our HPLC results confirmed the presence of bacteria capable of autonomous PST production in the St Lawrence as well as demonstrating their regular presence and apparent diversity in the plankton. Copyright 2002 Elsevier Science Ltd.
Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions
NASA Astrophysics Data System (ADS)
Ryan, A. J.; Christensen, P. R.
2017-12-01
Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlation between measurement occasions on Type I error rates were considered for the first time. Two populations with no violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results favor the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results show a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes with about three measurement occasions, and for large sample sizes with about nine measurement occasions.
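For readers who want to see what the sphericity corrections mentioned above operate on, here is a minimal sketch (not the authors' simulation code) that estimates the Greenhouse-Geisser epsilon from a sample covariance matrix of repeated measures; the mixed low/high correlation structure is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def greenhouse_geisser_epsilon(data):
    """data: n_subjects x k array of repeated measures."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)            # k x k sample covariance
    C = np.eye(k) - np.ones((k, k)) / k       # centering matrix
    V = C @ S @ C                             # double-centered covariance
    # Box/Greenhouse-Geisser estimate; 1/(k-1) <= eps <= 1, eps = 1 under sphericity
    return np.trace(V) ** 2 / ((k - 1) * np.trace(V @ V))

# Illustrative data: 20 subjects, 6 occasions, mixing low and high correlations
k, n = 6, 20
corr = np.full((k, k), 0.3)
corr[3:, 3:] = 0.8
np.fill_diagonal(corr, 1.0)
data = rng.multivariate_normal(np.zeros(k), corr, size=n)
print(f"estimated Greenhouse-Geisser epsilon: {greenhouse_geisser_epsilon(data):.3f}")
```

The correction multiplies the rANOVA degrees of freedom by this epsilon, which is how the liberal bias under sphericity violations is reined in.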
Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres.
Dreyfus, Remi; Xu, Ye; Still, Tim; Hough, L A; Yodh, A G; Torquato, Salvatore
2015-01-01
Hyperuniformity characterizes a state of matter for which (scaled) density fluctuations diminish towards zero at the largest length scales. However, the task of determining whether or not an image of an experimental system is hyperuniform is experimentally challenging due to finite-resolution, noise, and sample-size effects that influence characterization measurements. Here we explore these issues, employing video optical microscopy to study hyperuniformity phenomena in disordered two-dimensional jammed packings of soft spheres. Using a combination of experiment and simulation we characterize the possible adverse effects of particle polydispersity, image noise, and finite-size effects on the assignment of hyperuniformity, and we develop a methodology that permits improved diagnosis of hyperuniformity from real-space measurements. The key to this improvement is a simple packing reconstruction algorithm that incorporates particle polydispersity to minimize the free volume. In addition, simulations show that hyperuniformity in finite-sized samples can be ascertained more accurately in direct space than in reciprocal space. Finally, our experimental colloidal packings of soft polymeric spheres are shown to be effectively hyperuniform.
Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E
2006-01-01
More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component to understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust, with a sample size of six fleas or greater not significantly impacting either the percentage of the in-state area where the flea was predicted to be found or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions
Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel
2016-10-14
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
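To make the "largest 5% of leaks carry most of the volume" statement concrete, here is a small illustrative sketch (not the paper's analysis; the distribution parameters are invented) that compares the top-5% share of total leakage under a lognormal model and a heavier-tailed Pareto model:

```python
import numpy as np

rng = np.random.default_rng(42)

def top_share(samples, frac=0.05):
    """Fraction of the total carried by the largest `frac` of observations."""
    s = np.sort(samples)[::-1]
    k = max(1, int(frac * len(s)))
    return s[:k].sum() / s.sum()

n = 15_000  # roughly the number of measurements aggregated in the study
lognormal_leaks = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # assumed parameters
pareto_leaks = rng.pareto(a=1.3, size=n) + 1.0                # heavier-tailed alternative

print(f"top-5% share, lognormal: {top_share(lognormal_leaks):.2f}")
print(f"top-5% share, Pareto:    {top_share(pareto_leaks):.2f}")
```

With these illustrative parameters the heavier-tailed model pushes the top-5% share toward the roughly one-half figure the study reports for measured leaks, which is why a lognormal fit can understate the contribution of the largest emitters.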
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent literature studies tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First we sampled the whole brain's signals via different sampling methods, then the sampled signals were aggregated into an input data matrix to learn a dictionary, and finally this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by a factor of ten without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
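The following toy sketch (not the authors' pipeline) illustrates the general idea of learning a dictionary from a random subsample of signals and then sparsely coding the full set; the signal matrix, sampling fraction, and dictionary size are all hypothetical:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Hypothetical rs-fMRI-like data: 10,000 voxel time series of length 200
X = rng.standard_normal((10_000, 200))

# Step 1: sample a subset of signals (here 10%) to keep dictionary learning cheap
idx = rng.choice(X.shape[0], size=1_000, replace=False)
X_sampled = X[idx]

# Step 2: learn a dictionary of temporal components from the sampled signals
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
dico.fit(X_sampled)

# Step 3: sparsely represent the whole data set with the learned dictionary
codes = dico.transform(X)  # sparse coefficients, shape (10000, 50)
print(codes.shape, (codes != 0).mean())
```

The saving comes from fitting the dictionary on the sample only; the full data are touched just once, in the final sparse-coding step.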
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to the semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Representative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine the sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined; this is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
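As a generic illustration of fitting a semivariogram model to empirical lag estimates in order to read off a correlation length (this is not the authors' code, and the lags and semivariance values below are invented), one might do:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_variogram(h, nugget, sill, a):
    """Exponential model; the practical range (~correlation length) is about 3*a."""
    return nugget + sill * (1.0 - np.exp(-h / a))

# Hypothetical empirical semivariogram: lag distances (mm) and semivariance values
lags = np.array([0.5, 1, 2, 3, 4, 6, 8, 10, 12])
gamma = np.array([0.08, 0.15, 0.26, 0.33, 0.37, 0.41, 0.43, 0.44, 0.44])

params, _ = curve_fit(exponential_variogram, lags, gamma, p0=[0.05, 0.4, 2.0])
nugget, sill, a = params
print(f"nugget={nugget:.3f}, sill={sill:.3f}, practical range ~ {3*a:.2f} mm")
```

Repeating such a fit at each image resolution is one way to expose the resolution dependence of the fitted correlation length discussed above.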
Total selenium in irrigation drain inflows to the Salton Sea, California, April 2009
May, Thomas W.; Walther, Michael J.; Saiki, Michael K.; Brumbaugh, William G.
2009-01-01
This report presents the results for the final sampling period (April 2009) of a 4-year monitoring program to characterize selenium concentrations in selected irrigation drains flowing into the Salton Sea, California. Total selenium and total suspended solids were determined in water samples. Total selenium, percent total organic carbon, and particle size were determined in sediments. Mean total selenium concentrations in water ranged from 0.98 to 22.9 micrograms per liter. Total selenium concentrations in sediment ranged from 0.078 to 5.0 micrograms per gram dry weight.
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839
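A minimal simulation (not the paper's state-space model; all parameters invented) illustrating why independent sampling error biases the zero-lag correlation downward: two populations share a common environmental driver, and independent observation noise is added to each census.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 50

# Shared environmental driver (the Moran effect) plus population-specific noise
driver = rng.normal(0, 1, n_years)
true_pop1 = driver + rng.normal(0, 0.5, n_years)
true_pop2 = driver + rng.normal(0, 0.5, n_years)

# Observed abundances: true values plus independent sampling error
obs_pop1 = true_pop1 + rng.normal(0, 1.0, n_years)
obs_pop2 = true_pop2 + rng.normal(0, 1.0, n_years)

r_true = np.corrcoef(true_pop1, true_pop2)[0, 1]
r_obs = np.corrcoef(obs_pop1, obs_pop2)[0, 1]
print(f"true synchrony: {r_true:.2f}, naive estimate with sampling error: {r_obs:.2f}")
```

The naive estimate is systematically pulled toward zero; the state-space approach described above recovers the underlying synchrony by modelling the observation error explicitly.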
NASA Astrophysics Data System (ADS)
White, Benjamin C.; Behbahanian, Amir; Stoker, T. McKay; Fowlkes, Jason D.; Hartnett, Chris; Rack, Phillip D.; Roberts, Nicholas A.
2018-03-01
Nanoparticles on a substrate have numerous applications in nanotechnology, from enhancements to solar cell efficiency to improvements in carbon nanotube growth. Producing nanoparticles in a cost-effective fashion with control over size and spacing is desired, but difficult to do. This work presents a scalable method for altering the radius and pitch distributions of nickel nanoparticles. The introduction of alumina capping layers to thin nickel films during a pulsed laser-induced dewetting process has yielded reductions in the mean and standard deviation of radii and pitch for dewetted nanoparticles, with no noticeable difference in final morphology with increased capping layer thickness. The differences between carbon nanotube mats grown on the uncapped sample and on one of the capped samples are also presented here, with a denser mat observed for the capped case.
NASA Astrophysics Data System (ADS)
Rodriguez-Calvillo, P.; Leunis, E.; Van De Putte, T.; Jacobs, S.; Zacek, O.; Saikaly, W.
2018-04-01
The industrial production route of Grain Oriented Electrical Steels (GOES) is complex and fine-tuned for each grade. In all cases, the metallurgical process requires abnormal grain growth (AGG) of the Goss orientation during the final high temperature annealing (HTA). The exact mechanism of AGG is not yet fully understood, but it is controlled by the different inhibition systems, namely MnS, AlN and CuxS, their size and distribution, and the initial primary recrystallized grain size. Therefore, among other parameters, the initial heating stage during the HTA is crucial for the proper development of the primary and secondary recrystallized microstructures. Cold rolled 0.3 mm Cu-bearing Grain Oriented Electrical Steel was subjected to interrupted annealing experiments in a laboratory tubular furnace. Two different annealing cycles were applied: (i) constant heating at 30 °C/h up to 1000 °C; and (ii) a two-step cycle with initial heating at 100 °C/h up to 600 °C, followed by 18 h of soaking at 600 °C and then heating at 30 °C/h up to 1050 °C. The materials are analyzed in terms of their magnetic properties, grain size, texture and precipitates. The characteristic magnetic properties are analyzed for the different extraction temperatures and cycles. As the annealing progressed, the coercivity values (Hc 1.7T [A/m]) decreased, showing two abrupt drops that can be associated with the onset of primary and secondary recrystallization. The primary recrystallized grain sizes and recrystallized fractions are fitted to a model using a non-isothermal approach. This analysis shows that, although the resulting grain sizes were similar, the kinetics of the two-step annealing were faster due to the lower recovery. The onset of secondary recrystallization was also shifted to higher temperatures in the case of the continuous heating cycle, which may result in different final grain sizes and final magnetic properties. In both samples, nearly all the observed precipitates are Al-Si-Mn nitrides, ranging from pure AlN to Si4Mn-nitride.
Densmore, Brenda K.; Rus, David L.; Moser, Matthew T.; Hall, Brent M.; Andersen, Michael J.
2016-02-04
Comparisons of concentrations and loads from EWI samples collected from different transects within a study site resulted in few significant differences, but comparisons are limited by small sample sizes and large within-transect variability. When comparing the Missouri River upstream transect to the chute inlet transect, similar results were determined in 2012 as were determined in 2008—the chute inlet affected the amount of sediment entering the chute from the main channel. In addition, the Kansas chute is potentially affecting the sediment concentration within the Missouri River main channel, but small sample size and construction activities within the chute limit the ability to fully understand either the effect of the chute in 2012 or the effect of the chute on the main channel during a year without construction. Finally, some differences in SSC were detected between the Missouri River upstream transects and the chute downstream transects; however, the effect of the chutes on the Missouri River main-channel sediment transport was difficult to isolate because of construction activities and sampling variability.
Oillic, Samuel; Lemoine, Eric; Gros, Jean-Bernard; Kondjoyan, Alain
2011-07-01
Cooking loss kinetics were measured on cubes and parallelepipeds of beef Semimembranosus muscle ranging from 1 cm × 1 cm × 1 cm to 7 cm × 7 cm × 28 cm in size. The samples were water bath-heated at three different temperatures, i.e. 50°C, 70°C and 90°C, and for five different times. Temperatures were simulated to help interpret the results. Pre-freezing the sample, differences in ageing time, and differences in muscle fiber orientation had little influence on cooking losses. At longer treatment times, the effects of sample size disappeared and cooking losses depended only on the temperature. A selection of the tests was repeated on four other beef muscles and on veal, horse and lamb Semimembranosus muscle. Kinetics followed similar curves in all cases but resulted in different final water contents. The shape of the kinetics curves suggests first-order kinetics. Copyright © 2011 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Summary and Synthesis: How to Present a Research Proposal.
Setia, Maninder Singh; Panda, Saumya
2017-01-01
This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on what was discussed in the preceding modules. The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eye of the reviewer of the proposal, or the background and the literature review, or the aims and objectives of the study. Two critical areas in the "methods" section that cannot be emphasized enough are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should be reflected in the proposal.
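As a hedged, generic illustration of the kind of formal sample size estimation the module calls for (not part of the module itself; the effect size and standard deviation are hypothetical), the following computes the per-group sample size for a two-sided, two-sample comparison of means:

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical inputs: detect a 5-unit difference with an assumed SD of 12
print(n_per_group(delta=5, sigma=12))  # about 91 per group
```

Stating the assumed difference, variability, significance level, and power alongside such a calculation is what makes the sample size in a proposal defensible.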
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
NASA Astrophysics Data System (ADS)
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to assess the fixed covariate of right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. Statistically, the biases, mean biases and coverage probabilities were used in this analysis. Consequently, different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. Besides, the final model of the right-censored simulation was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
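A minimal sketch of the kind of simulation described above (not the authors' R code; written in Python here, with invented parameter values): generate Weibull event times with a fixed binary covariate effect, impose independent right censoring, and record observed times and event indicators for each sample size.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_right_censored(n, shape=1.5, scale=10.0, beta=0.5, censor_rate=0.05):
    """Weibull event times with one binary covariate and exponential censoring."""
    x = rng.integers(0, 2, size=n)                    # fixed binary covariate
    scale_i = scale * np.exp(-beta * x / shape)       # proportional-hazards effect via the scale
    event_time = scale_i * rng.weibull(shape, size=n)
    censor_time = rng.exponential(1.0 / censor_rate, size=n)
    observed = np.minimum(event_time, censor_time)
    status = (event_time <= censor_time).astype(int)  # 1 = event, 0 = censored
    return x, observed, status

for n in (50, 100, 150, 200):                         # sample sizes used in the study
    x, t, d = simulate_right_censored(n)
    print(f"n={n}: censoring proportion = {1 - d.mean():.2f}")
```

Fitting a Weibull regression to each simulated dataset and recording the bias of the estimated covariate effect across replicates would complete the loop sketched here.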
Ferromagnetic resonance studies of lunar core stratigraphy
NASA Technical Reports Server (NTRS)
Housley, R. M.; Cirlin, E. H.; Goldberg, I. B.; Crowe, H.
1976-01-01
We first review the evidence which links the characteristic ferromagnetic resonance observed in lunar fines samples with agglutinatic glass produced primarily by micrometeorite impacts and present new results on Apollo 15, 16, and 17 breccias which support this link by showing that only regolith breccias contribute significantly to the characteristic FMR intensity. We then provide a calibration of the amount of Fe metal in the form of uniformly magnetized spheres required to give our observed FMR intensities and discuss the theoretical magnetic behavior to be expected of Fe spheres as a function of size. Finally, we present FMR results on samples from every 5 mm interval in the core segments 60003, 60009, and 70009. These results lead us to suggest: (1) that secondary mixing may generally be extensive during regolith deposition so that buried regolith surfaces are hard to recognize or define; and (2) that local grinding of rocks and pebbles during deposition may lead to short scale fluctuations in grain size, composition, and apparent exposure age of samples.
New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084
NASA Technical Reports Server (NTRS)
McKay, D.S.; Cooper, B.L.; Riofrio, L.M.
2009-01-01
We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators including our own group performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and of smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution that would be revealed by other methods that produce many smaller size bins.
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and with an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild to moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed on a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase the sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup model of an ANCOVA analyzing change from baseline to the final non-missing observation, is pre-specified to evaluate the treatment effect. A gamma family spending function is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.
ERIC Educational Resources Information Center
Zheng, Lanqin
2016-01-01
This meta-analysis examined research on the effects of self-regulated learning scaffolds on academic performance in computer-based learning environments from 2004 to 2015. A total of 29 articles met inclusion criteria and were included in the final analysis with a total sample size of 2,648 students. Moderator analyses were performed using a…
Influence of an Antiperspirant on Foot Blister Incidence during Cross-Country Hiking
1999-11-01
blisters also increases. Therefore, reducing moisture may reduce blister incidence during physical activity. Objective: We examined whether an antiperspirant ...that used either an antiperspirant (20% aluminum chloride hexahydrate in anhydrous ethyl alcohol) or placebo (anhydrous ethyl alcohol) preparation...blisters before and after. Results: Because of dropouts, the final sample size was 667 cadets, with 328 in the antiperspirant group and 339 in the...
NASA Astrophysics Data System (ADS)
Coscollà, Clara; Muñoz, Amalia; Borrás, Esther; Vera, Teresa; Ródenas, Milagros; Yusà, Vicent
2014-10-01
This work presents the first data on the atmospheric particle size distribution of 16 pesticides currently used in Mediterranean agriculture. Particulate matter air samples were collected with a cascade impactor into four size fractions at a rural site in the Valencia Region, from July to September 2012 and from May to July 2013. A total of 16 pesticides were detected, including six fungicides, seven insecticides and three herbicides. The total concentrations in the particulate phase (TSP: Total Suspended Particulate) ranged from 3.5 to 383.1 pg m-3. Most of the pesticides (such as carbendazim, tebuconazole, chlorpyrifos-ethyl and chlorpyrifos-methyl) accumulated in the ultrafine-fine (<1 μm) and coarse (2.5-10 μm) particle size fractions. Others, such as omethoate, dimethoate and malathion, were present only in the ultrafine-fine size fraction (<1 μm). Finally, diuron, diphenylamine and terbuthylazine-desethyl-2-OH also showed a bimodal distribution, but mainly in the coarse size fractions.
Petrovskaya, Natalia B.; Forbes, Emily; Petrovskii, Sergei V.; Walters, Keith F. A.
2018-01-01
Studies addressing many ecological problems require accurate evaluation of the total population size. In this paper, we revisit a sampling procedure used for the evaluation of the abundance of an invertebrate population from assessment data collected on a spatial grid of sampling locations. We first discuss how insufficient information about the spatial population density obtained on a coarse sampling grid may affect the accuracy of an evaluation of total population size. Such information deficit in field data can arise because of inadequate spatial resolution of the population distribution (spatially variable population density) when coarse grids are used, which is especially true when a strongly heterogeneous spatial population density is sampled. We then argue that the average trap count (the quantity routinely used to quantify abundance), if obtained from a sampling grid that is too coarse, is a random variable because of the uncertainty in sampling spatial data. Finally, we show that a probabilistic approach similar to bootstrapping techniques can be an efficient tool to quantify the uncertainty in the evaluation procedure in the presence of a spatial pattern reflecting a patchy distribution of invertebrates within the sampling grid. PMID:29495513
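The probabilistic treatment of the average trap count described above can be illustrated with a simple bootstrap of grid counts; the counts, grid size, and confidence level below are assumptions for illustration, not the authors' data or exact procedure.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_ci(trap_counts, n_boot=10_000, ci=95):
    # Bootstrap the mean trap count on a coarse sampling grid to quantify the
    # uncertainty of the abundance estimate.
    counts = np.asarray(trap_counts, dtype=float)
    boot_means = np.array([
        rng.choice(counts, size=counts.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return counts.mean(), (lo, hi)

# Hypothetical counts from a 4x4 grid over a patchy population.
counts = [0, 0, 1, 0, 12, 3, 0, 25, 1, 0, 0, 7, 0, 2, 0, 0]
print(bootstrap_mean_ci(counts))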
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer; these LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses with the LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from the LAI data and from clip harvest data for two different clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
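The sample size target mentioned above (estimating a mean to within ±10% with 95% confidence) follows a standard normal-approximation formula; the sketch below is illustrative only, and the coefficient of variation is an assumed value, not a NEON result.

import math
from scipy.stats import norm

def n_for_relative_precision(cv, rel_error=0.10, confidence=0.95):
    # Sample size so the sample mean falls within +/- rel_error of the true
    # mean with the stated confidence, given a coefficient of variation
    # (sd/mean) estimated from characterization data.
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

# Illustrative: clip-harvest biomass with an assumed CV of 0.45.
print(n_for_relative_precision(cv=0.45))   # -> 78 plots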
Feltus, F Alex; Ficklin, Stephen P; Gibson, Scott M; Smith, Melissa C
2013-06-05
In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied, further limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples, and individual network construction for each group as a means of dynamic significance thresholding. The net effect is increased sensitivity, maximizing the total number of co-expression relationships in the final co-expression network compendium. A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an unsupervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for each network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network. Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using unsupervised methods. Because RMT ensures that only highly significant interactions are kept, the GIL compendium consists of 558,022 unique high-quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired.
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions
NASA Astrophysics Data System (ADS)
Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir
2013-08-01
A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, from which the average scatterer particle size is extracted using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.
Porosity characterization for heterogeneous shales using integrated multiscale microscopy
NASA Astrophysics Data System (ADS)
Rassouli, F.; Andrew, M.; Zoback, M. D.
2016-12-01
Pore size distribution analysis plays a critical role in characterizing the gas storage capacity and fluid transport of shales. Study of the diverse distribution of pore size and structure in such low-permeability rocks is hindered by the lack of tools to visualize the microstructural properties of shale rocks. In this paper we use multiple techniques to investigate the full pore size range at different sample scales. Modern imaging techniques are combined with routine analytical investigations (x-ray diffraction, thin section analysis and mercury porosimetry) to describe the pore size distribution of shale samples from the Haynesville formation in East Texas and to generate a more holistic understanding of the porosity structure in shales, from the standard core plug scale down to the nm scale. Standard 1-inch-diameter core plug samples are first imaged at lower resolutions using a Versa 3D x-ray microscope. We then pick several regions of interest (ROIs) with various micro-features (such as micro-cracks and high organic matter content) in the rock samples and run higher-resolution CT scans using non-destructive interior tomography. After this step, we cut the samples, drill 5 mm diameter cores out of the selected ROIs, and rescan them to measure the porosity distribution of the 5 mm cores. We repeat this step for 1 mm diameter samples cut out of the 5 mm cores using a laser cutting machine. After comparing the pore structure and distribution measured from the micro-CT analyses, we move to nano-scale imaging to capture the ultra-fine pores within the shale samples. At this stage, the diameter of the 1 mm samples is milled down to 70 microns using the laser beam. We scan these samples in an Ultra nano-CT x-ray microscope and calculate the porosity of the samples by image segmentation methods. Finally, we use images collected by focused ion beam scanning electron microscopy (FIB-SEM) to compare the porosity measurements from all of the imaging techniques. These multi-scale characterization results are then compared with traditional analytical techniques such as mercury porosimetry.
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method (DEM) is coupled to a flow simulation through the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10-6 m to 70 × 10-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions through buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method, eventually recommend useful variations, and draw conclusions on their validity, aspects that are very difficult to achieve in the laboratory.
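The sedimentation tests simulated above rest on Stokes' law relating settling velocity to particle diameter; the short sketch below evaluates it over the quoted diameter range, with assumed grain density, fluid density, and viscosity (illustrative values, not parameters from the article).

# Stokes settling velocity, the relation underlying hydrometer/pipette
# sedimentation analysis (valid for laminar, low-Reynolds-number settling).
def stokes_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    # Terminal settling velocity (m/s) of a sphere of diameter d (m).
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

for d in (2.5e-6, 20e-6, 70e-6):             # diameter range quoted above
    print(f"d = {d*1e6:5.1f} um -> v = {stokes_velocity(d):.3e} m/s")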
Yu, Miao; Wei, Chenhui; Niu, Leilei; Li, Shaohua; Yu, Yongjun
2018-01-01
Tensile strength and fracture toughness, important rock parameters for engineering applications, are difficult to measure. This paper therefore selected three kinds of granite samples (grain sizes of 1.01 mm, 2.12 mm and 3 mm), used combined physical experiments and numerical simulation (RFPA-DIP version) to conduct three-point-bending (3-p-b) tests with different notches, and used an acoustic emission monitoring system to analyze the fracture mechanism around the notch tips. To study the effects of grain size on the tensile strength and toughness of the rock samples, a modified fracture model was established linking the fictitious crack to the grain size, so that the microstructure of the specimens and the fictitious crack growth can be considered together. A fractal method was introduced to represent the microstructure of the three granites and to determine the length of the fictitious crack. This provides a simple and novel way to calculate the tensile strength and fracture toughness directly. Finally, the theoretical model was verified by comparison with the numerical experiments through calculation of the nominal strength σn and the maximum loads Pmax. PMID:29596422
NASA Astrophysics Data System (ADS)
Wells, M. A.; Samarasekera, I. V.; Brimacombe, J. K.; Hawbolt, E. B.; Lloyd, D. J.
1998-06-01
A comprehensive mathematical model of the hot tandem rolling process for aluminum alloys has been developed. Reflecting the complex thermomechanical and microstructural changes effected in the alloys during rolling, the model incorporated heat flow, plastic deformation, kinetics of static recrystallization, final recrystallized grain size, and texture evolution. The results of this microstructural engineering study, combining computer modeling, laboratory tests, and industrial measurements, are presented in three parts. In this Part I, laboratory measurements of static recrystallization kinetics and final recrystallized grain size are described for AA5182 and AA5052 aluminum alloys and expressed quantitatively by semiempirical equations. In Part II, laboratory measurements of the texture evolution during static recrystallization are described for each of the alloys and expressed mathematically using a modified form of the Avrami equation. Finally, Part III of this article describes the development of an overall mathematical model for an industrial aluminum hot tandem rolling process, which incorporates the microstructure and texture equations developed, together with model validation using industrial data. The laboratory measurements of microstructural evolution were carried out using industrially rolled material and a state-of-the-art plane strain compression tester at Alcan International. Each sample was given a single deformation and heat treated in a salt bath at 400 °C for various lengths of time to effect different levels of recrystallization in the samples. The range of hot-working conditions used for the laboratory study was chosen to represent conditions typically seen in industrial aluminum hot tandem rolling processes, i.e., deformation temperatures of 350 °C to 500 °C, strain rates of 0.5 to 100 s^-1, and total strains of 0.5 to 2.0. The semiempirical equations developed indicated that both the recrystallization kinetics and the final recrystallized grain size were dependent on the deformation history of the material, i.e., the total strain and the Zener-Hollomon parameter Z, where Z = \dot{\varepsilon} \exp(Q_{def}/(R T_{def})), and on the time at the recrystallization temperature.
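For concreteness, the Zener-Hollomon (temperature-compensated strain rate) parameter just defined can be evaluated directly; the activation energy in the sketch below is an assumed, typical value for aluminum alloys and is not taken from the article.

import math

R = 8.314  # gas constant, J/(mol K)

def zener_hollomon(strain_rate, T_def_K, Q_def=156e3):
    # Z = strain_rate * exp(Q_def / (R * T_def)); Q_def here is an assumed
    # illustrative activation energy (J/mol), not a reported value.
    return strain_rate * math.exp(Q_def / (R * T_def_K))

print(f"Z = {zener_hollomon(strain_rate=10.0, T_def_K=400 + 273.15):.3e}")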
NASA Astrophysics Data System (ADS)
Kumar, P.; Sokolik, I. N.; Nenes, A.
2011-04-01
This study reports laboratory measurements of particle size distributions, cloud condensation nuclei (CCN) activity, and droplet activation kinetics of wet-generated aerosols from clays, calcite, quartz, and desert soil samples from Northern Africa, East Asia/China, and North America. The dependence of critical supersaturation, sc, on particle dry diameter, Ddry, is used to characterize particle-water interactions and to assess the ability of Frenkel-Halsey-Hill adsorption activation theory (FHH-AT) and Köhler theory (KT) to describe the CCN activity of the considered samples. Regional dust samples produce unimodal size distributions with particle sizes as small as 40 nm, show CCN activation consistent with KT, and exhibit hygroscopicity similar to inorganic salts. Clays and minerals produce a bimodal size distribution; the CCN activity of the smaller mode is consistent with KT, while the larger mode is less hydrophilic, follows activation by FHH-AT, and displays almost identical CCN activity to dry-generated dust. Ion chromatography (IC) analysis performed on the regional dust samples indicates a soluble fraction that cannot explain the CCN activity of dry- or wet-generated dust. A mass balance and hygroscopicity closure suggest that the small amount of ions (from low-solubility compounds such as calcite) present in the dry dust dissolves in the aqueous suspension during the wet generation process and gives rise to the observed small hygroscopic mode. Overall, these results identify an artifact that may call into question the atmospheric relevance of dust CCN activity studies using the wet generation method. Based on a threshold droplet growth analysis, wet-generated mineral aerosols display activation kinetics similar to those of the ammonium sulfate calibration aerosol. Finally, a unified CCN activity framework that accounts for concurrent effects of solute and adsorption is developed to describe the CCN activity of aged or hygroscopic dusts.
Community variations in infant and child mortality in Peru.
Edmonston, B; Andes, N
1983-01-01
Data from the national Peru Fertility Survey are used to estimate infant and childhood mortality ratios, 1968--77, for 124 Peruvian communities, ranging from small Indian hamlets in the Andes to larger cities on the Pacific coast. Significant mortality variations are found: mortality is inversely related to community population size and is higher in the mountains than in the jungle or coast. Multivariate analysis is then used to assess the influence of community population size, average female education, medical facilities, and altitude on community mortality. Finally, this study concludes that large-scale sample surveys, which include maternal birth history, add useful data for epidemiological studies of childhood mortality. PMID:6886581
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes of inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during testing, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol resulting from this evaluation is also presented. The final proposed test protocol outlines an incremental step method for determining optimal conditions, using increased sample sizes while observing test system safety limits. The proposed improved test method increases confidence in results obtained with the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
Effect of milling atmosphere on structural and magnetic properties of Ni-Zn ferrite nanocrystalline
NASA Astrophysics Data System (ADS)
Hajalilou, Abdollah; Hashim, Mansor; Ebrahimi-Kahrizsangi, Reza; Masoudi Mohamad, Taghi
2015-04-01
Powder mixtures of Zn, NiO, and Fe2O3 are mechanically alloyed by high-energy ball milling to produce Ni-Zn ferrite with a nominal composition of Ni0.36Zn0.64Fe2O4. The effects of milling atmosphere (argon, air, and oxygen), milling time (from 0 to 30 h) and heat treatment are studied. The products are characterized using x-ray diffractometry, field emission scanning electron microscopy equipped with energy-dispersive x-ray spectroscopy, and transmission electron microscopy. The results indicate that the desired ferrite is not produced during milling in the samples milled under either air or oxygen atmospheres. In the samples milled under argon, however, Zn/NiO/Fe2O3 reacts via a solid-state diffusion mode to produce nanocrystalline Ni-Zn ferrite with a crystallite size of 8 nm after 30 h of milling. The average crystallite sizes decrease to 9 nm and 10 nm in the 30-h-milled samples under air and oxygen atmospheres, respectively. Annealing the 30-h-milled samples at 600 °C for 2 h leads to the formation of a single phase of Ni-Zn ferrite, an increase of crystallite size, and a reduction of internal lattice strain. Finally, the effects of the milling atmosphere and heating temperature on the magnetic properties of the 30-h-milled samples are investigated. Project supported by the University Putra Malaysia Graduate Research Fellowship Section.
USDA-ARS?s Scientific Manuscript database
This report is part of a project to characterize cotton gin emissions from the standpoint of stack sampling. In 2006, EPA finalized and published a more stringent standard for particulate matter with nominal diameter less than or equal to 2.5 µm (PM2.5). This created an urgent need to collect additi...
Maximizing PTH Anabolic Osteoporosis Therapy
2015-09-01
…normalization to endogenous controls and calculates fold changes with P values. Gene expression data were normalized to five endogenous controls (18S…). …adapters were ligated and the sample was size-fractionated (200-300 bp) on an agarose gel. After a final PCR amplification step (18 cycles), the…
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications, such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs because the number of correctly localized abnormal images is random, while in the latter it is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic to summarizing location response operating characteristic data from a liver computed tomography study, and also to summarizing the diagnostic accuracy of biomarker data.
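For reference, the fixed-sample-size statistic that the proposal above generalizes can be computed in a few lines; the two groups below are made-up illustrative scores, not data from the cited study.

# Standard (fixed-sample-size) Wilcoxon-Mann-Whitney comparison of two groups.
from scipy.stats import mannwhitneyu

group_a = [3.1, 2.7, 4.0, 3.6, 2.9, 3.3]
group_b = [2.2, 2.8, 2.5, 3.0, 2.4, 2.6]

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")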
Experimental strategies for imaging bioparticles with femtosecond hard X-ray pulses
Daurer, Benedikt J.; Okamoto, Kenta; Bielecki, Johan; ...
2017-04-07
This study explores the capabilities of the Coherent X-ray Imaging Instrument at the Linac Coherent Light Source to image small biological samples. The weak signal from small samples puts a significant demand on the experiment. Aerosolized Omono River virus particles of ~40 nm in diameter were injected into the submicrometre X-ray focus at a reduced pressure. Diffraction patterns were recorded on two area detectors. The statistical nature of the measurements from many individual particles provided information about the intensity profile of the X-ray beam, phase variations in the wavefront and the size distribution of the injected particles. The results point to a wider than expected size distribution (from ~35 to ~300 nm in diameter). This is likely to be owing to nonvolatile contaminants from larger droplets during aerosolization and droplet evaporation. The results suggest that the concentration of nonvolatile contaminants and the ratio between the volumes of the initial droplet and the sample particles are critical in such studies. The maximum beam intensity in the focus was found to be 1.9 × 10^12 photons per µm^2 per pulse. The full-width of the focus at half-maximum was estimated to be 500 nm (assuming 20% beamline transmission), and this width is larger than expected. Under these conditions, the diffraction signal from a sample-sized particle remained above the average background to a resolution of 4.25 nm. Finally, the results suggest that reducing the size of the initial droplets during aerosolization is necessary to bring small particles into the scope of detailed structural studies with X-ray lasers.
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
The Effect of Oat Fibre Powder Particle Size on the Physical Properties of Wheat Bread Rolls
Kurek, Marcin; Wyrwisz, Jarosław; Piwińska, Monika; Wierzbicka, Agnieszka
2016-01-01
In response to the growing interest of modern society in functional food products, this study attempts to develop a bakery product with high dietary fibre content added in the form of an oat fibre powder. Oat fibre powder with particle sizes of 75 µm (OFP1) and 150 µm (OFP2) was used, substituting 4, 8, 12, 16 and 20% of the flour. The physical properties of the dough and the final bakery products were then measured. Results indicated that dough with added fibre had higher elasticity than the control group. The storage modulus values of dough with OFP1 most closely approximated those of the control group. The addition of OFP1 did not significantly affect the colour compared to the other samples. Increasing the proportion of oat fibre powder resulted in increased firmness, which was most prominent in wheat bread rolls with oat fibre powder of smaller particle size. The addition of oat fibre powder with smaller particles resulted in a product whose rheological and colour parameters more closely resembled the control sample. PMID:27904392
Closantel nano-encapsulated polyvinyl alcohol (PVA) solutions.
Vega, Abraham Faustino; Medina-Torres, Luis; Calderas, Fausto; Gracia-Mora, Jesus; Bernad-Bernad, MaJosefa
2016-08-01
The influence of closantel on the rheological and physicochemical properties (particle size and UV-Vis absorption spectroscopy) of PVA aqueous solutions is studied here. Aqueous solutions of about 1% PVA were prepared with varying closantel content. Increasing the closantel content led to a reduction in the particle size of the final solutions. All the solutions were buffered at pH 7.4 and exhibited shear-thinning behavior. Furthermore, in oscillatory flow, a "solid-like" behavior was observed for the sample containing 30 μg/mL closantel, indicating a strong interaction between the dispersed and continuous phases and evidencing an interconnected network between the nanoparticles and PVA; this sample also showed the highest shear viscosity and the steepest shear-thinning slope, indicating a more intricate structure disrupted by shear. In conclusion, PVA interacts with closantel in aqueous solution, and the critical concentration for closantel encapsulation by PVA was about 30 μg/mL; above this concentration, the average particle size decreased markedly, which was attributed to closantel interacting with the surface of the PVA aggregates and thus avoiding, to some extent, direct polymer-polymer interaction.
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data under an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic performs well in general and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
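The empirical-size simulations mentioned above follow a generic recipe: generate data under the null hypothesis many times and record how often the test rejects at the nominal level. The sketch below illustrates that recipe with a simple two-sample comparison of independent proportions, not the correlated bilateral-data model of the article; all parameter values are illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def empirical_size(n_per_group=100, p_null=0.3, n_sim=5000, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        x = rng.binomial(n_per_group, p_null)    # successes, group 1
        y = rng.binomial(n_per_group, p_null)    # successes, group 2 (same p)
        p_pool = (x + y) / (2 * n_per_group)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_per_group)
        if se > 0:
            z = (x - y) / (n_per_group * se)     # two-proportion z statistic
            rejections += abs(z) > z_crit
    return rejections / n_sim

print(empirical_size())   # should land near the nominal 0.05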
The development of phonological skills in late and early talkers
KEHOE, Margaret; CHAPLIN, Elisa; MUDRY, Pauline; FRIEND, Margaret
2016-01-01
This study examined the relationship between phonological and lexical development in a group of French-speaking children (n=30), aged 29 months. The participants were divided into three sub-groups based on the number of words in their expressive vocabulary: low vocabulary (below the 15th percentile; "late talkers"), average-sized vocabulary (40-60th percentile; "middle group") and advanced vocabulary (above the 90th percentile; "precocious" or "early talkers"). The phonological abilities (e.g., phonemic inventory, percentage of correct consonants, and phonological processes) of the three groups were compared. The comparison was based on analyses of spontaneous language samples. Most findings were consistent with previous results found in English-speaking children, indicating that the phonological abilities of late talkers are less well developed than those of children with average-sized vocabularies, which in turn are less well developed than those of children with advanced vocabularies. Nevertheless, several phonological measures were not related to vocabulary size, in particular those concerning syllable-final position. These findings differ from those obtained in English. The article finally discusses the clinical implications of the findings for children with delayed language development. PMID:26924855
Zhang, Haiyan; Li, Junbao; Huang, Guangqun; Yang, Zengling; Han, Lujia
2018-05-26
A thorough assessment of the microstructural changes and synergistic effects of hydrothermal and/or ultrafine grinding pretreatment on the subsequent enzymatic hydrolysis of corn stover was performed in this study. The mechanism of pretreatment was elucidated by characterizing the particle size, specific surface area (SSA), pore volume (PV), average pore size, cellulose crystallinity (CrI) and surface morphology of the pretreated samples. In addition, the underlying relationships between the structural parameters and final glucose yields were elucidated, and the relative significance of the factors influencing enzymatic hydrolyzability was assessed by principal component analysis (PCA). Hydrothermal pretreatment at a lower temperature (170 °C) combined with ultrafine grinding achieved a high glucose yield (80.36%) at a favorably low enzyme loading (5 filter paper units (FPU)/g substrate). The relative significance of the structural parameters for enzymatic hydrolyzability was SSA > PV > average pore size > CrI/cellulose > particle size. PV and SSA exhibited logarithmic correlations with the final enzymatic hydrolysis yield. Copyright © 2018 Elsevier Ltd. All rights reserved.
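The PCA-based ranking of structural parameters mentioned above can be sketched as follows; the matrix of pretreated-sample measurements is purely hypothetical, and ranking factors by the magnitude of their loadings is one common reading of such an analysis rather than the authors' exact procedure.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# rows = pretreated samples; columns = [SSA, PV, avg pore size, CrI, particle size]
X = np.array([
    [1.2, 0.004, 12.0, 45.0, 420.0],
    [2.8, 0.010, 14.5, 47.0, 150.0],
    [3.5, 0.013, 15.1, 48.5,  60.0],
    [5.1, 0.019, 16.2, 50.0,  25.0],
])

Z = StandardScaler().fit_transform(X)         # standardize each parameter
pca = PCA(n_components=2).fit(Z)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("PC1 loadings:", pca.components_[0])    # larger |loading| = larger weight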
Hargreaves, Andrew J; Vale, Peter; Whelan, Jonathan; Constantino, Carlos; Dotro, Gabriela; Campo, Pablo; Cartmell, Elise
2017-05-01
The distribution of Cu, Pb, Ni and Zn between particulate, colloidal and truly dissolved size fractions in wastewater from a trickling filter treatment plant was investigated. Samples of influent, primary effluent, humus effluent, final effluent and sludge holding tank returns were collected and separated into particulate (i.e. > 0.45 μm), colloidal (i.e. 1 kDa to 0.45 μm), and truly dissolved (i.e. < 1 kDa) fractions using membrane filters. In the influent, substantial proportions of Cu (60%), Pb (67%), and Zn (32%) were present in the particulate fraction which was removed in conjunction with suspended particles at the works in subsequent treatment stages. In final effluent, sizeable proportions of Cu (52%), Pb (32%), Ni (44%) and Zn (68%) were found within the colloidal size fraction. Calculated ratios of soluble metal to organic carbon suggest the metal to be adsorbed to or complexed with non-humic macromolecules typically found within the colloidal size range. These findings suggest that technologies capable of removing particles within the colloidal fraction have good potential to enhance metals removal from wastewater. Copyright © 2017 Elsevier Ltd. All rights reserved.
The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?
Ahern, James C M; Lee, Sang-Hee; Hawks, John D
2002-09-01
The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. phys. Anthrop.93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
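A Monte Carlo version of the sampling-bias test described above is easy to write down: the probability of drawing, from a reference population, a sample of the same size as Vindija's with a mean at or below the observed Vindija mean. The numbers below are placeholders, not the published supraorbital measurements.

import numpy as np

rng = np.random.default_rng(42)

def prob_mean_at_most(reference, n_draw, observed_mean, n_sim=100_000):
    # Probability that a resampled group of n_draw individuals from the
    # reference population has a mean <= observed_mean.
    reference = np.asarray(reference, dtype=float)
    sims = rng.choice(reference, size=(n_sim, n_draw), replace=True).mean(axis=1)
    return (sims <= observed_mean).mean()

krapina_like = rng.normal(loc=14.0, scale=2.0, size=20)   # hypothetical torus chord (mm)
print(prob_mean_at_most(krapina_like, n_draw=6, observed_mean=12.0))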
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
“What Women Like”: Influence of Motion and Form on Esthetic Body Perception
Cazzato, Valentina; Siega, Serena; Urgesi, Cosimo
2012-01-01
Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, while body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components. PMID:22866044
System-size convergence of point defect properties: The case of the silicon vacancy
NASA Astrophysics Data System (ADS)
Corsetti, Fabiano; Mostofi, Arash A.
2011-07-01
We present a comprehensive study of the vacancy in bulk silicon in all its charge states from 2+ to 2-, using a supercell approach within plane-wave density-functional theory, and systematically quantify the various contributions to the well-known finite-size errors associated with calculating formation energies and stable charge state transition levels of isolated defects under periodic boundary conditions. Furthermore, we find that transition levels converge faster with respect to supercell size when only the Γ-point is sampled in the Brillouin zone, as opposed to a dense k-point sampling. This arises from the fact that the defect level at the Γ-point quickly converges to a fixed value which correctly describes the bonding at the defect center. Our calculated transition levels with 1000-atom supercells and Γ-point-only sampling are in good agreement with available experimental results. We also demonstrate two simple and accurate approaches for calculating the valence band offsets that are required for computing formation energies of charged defects, one based on a potential averaging scheme and the other using maximally localized Wannier functions (MLWFs). Finally, we show that MLWFs provide a clear description of the nature of the electronic bonding at the defect center that verifies the canonical Watkins model.
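The formation energies referred to above are conventionally assembled from a handful of supercell quantities; the sketch below writes out that standard bookkeeping for a single vacancy. All numerical inputs are placeholders rather than values from the paper, and E_corr stands for whichever finite-size correction scheme is applied.

def defect_formation_energy(E_defect, E_bulk, mu_removed, q, E_vbm, E_fermi, E_corr=0.0):
    # E_f(q) = E_defect - E_bulk + mu_removed + q*(E_vbm + E_fermi) + E_corr
    # for a vacancy (one atom removed to a reservoir at chemical potential
    # mu_removed); E_fermi is measured from the valence band maximum.
    return E_defect - E_bulk + mu_removed + q * (E_vbm + E_fermi) + E_corr

# Hypothetical silicon vacancy in charge state q = -1 (all energies in eV,
# illustrative only; energies referenced so that E_vbm = 0).
print(defect_formation_energy(E_defect=-5422.48, E_bulk=-5431.50,
                              mu_removed=-5.42, q=-1,
                              E_vbm=0.0, E_fermi=0.30, E_corr=0.10))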
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Zhen, Yongda; Tok, Alfred
2009-01-01
Four-cation nanograined strontium- and magnesium-doped lanthanum gallate (La0.8Sr0.2)(Ga0.9Mg0.1)O(3-delta) (LSGM) and its composite with 2 wt% of ceria (LSGM-Ce) were prepared. Morphologically homogeneous nanoreactors, i.e., complex intermediate metastable aggregates of the desired composition, were assembled by a spray atomization technique and subsequently loaded with nanoparticles of highly energetic C3H6N6O6. A rapid nanoblast calcination technique was applied, and the final composition was synthesized within the preliminarily localized volumes of each single nanoreactor in the first step of spark plasma treatment. Subsequent SPS consolidation of the nanostructured, extremely active LSGM and LSGM-Ce powders was achieved by rapid treatment under pressures of 90-110 MPa. This technique preserved the heredity of the final structure of the nanosized multimetal oxide and prevented uncontrolled agglomeration during the assembly of the multicomponent aggregates, the subsequent nanoblast calcination, and the final ultra-rapid low-temperature SPS consolidation of the nanostructured ceramics. LaSrGaMgCeO(3-delta) nanocrystalline powder consisting of approximately 11 nm crystallites was consolidated into LSGM-Ce nanoceramic with an average grain size of approximately 14 nm by low-temperature SPS at 1250 °C. Our preliminary results indicate that nanostructured samples of (La0.8Sr0.2)(Ga0.9Mg0.1)O(3-delta) with 2 wt% of ceria composed of approximately 14 nm grains can exhibit a giant magnetoresistive effect, in contrast to the usual paramagnetic properties measured on samples with larger grain size.
NASA Astrophysics Data System (ADS)
Yamada, T.; Ide, S.
2007-12-01
Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how soon after we observe the ground motion we can estimate the final size of an earthquake. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
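The predominant-period parameter discussed above is usually computed recursively from the vertical velocity record; the sketch below follows the commonly cited recursion (in the spirit of Allen and Kanamori, 2003), with the smoothing constant and the synthetic waveform being assumptions for illustration rather than details taken from this abstract.

import numpy as np

def tau_p_series(velocity, dt, alpha=0.99):
    # Recursive predominant-period estimate: tau_p = 2*pi*sqrt(X/D), where X
    # and D are exponentially smoothed powers of the signal and its derivative.
    x = np.asarray(velocity, dtype=float)
    dxdt = np.gradient(x, dt)
    X = D = 0.0
    taus = np.empty_like(x)
    for i in range(x.size):
        X = alpha * X + x[i] ** 2        # smoothed signal power
        D = alpha * D + dxdt[i] ** 2     # smoothed derivative power
        taus[i] = 2.0 * np.pi * np.sqrt(X / D) if D > 0 else 0.0
    return taus

dt = 0.01                                 # 100 Hz sampling
t = np.arange(0, 3, dt)
wave = np.sin(2 * np.pi * 2.0 * t)        # synthetic 2 Hz P-wave onset
taus = tau_p_series(wave, dt)
print("tau_p_max =", taus.max())          # roughly 0.5 s for a 2 Hz signal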
Characterisation of Fine Ash Fractions from the AD 1314 Kaharoa Eruption
NASA Astrophysics Data System (ADS)
Weaver, S. J.; Rust, A.; Carey, R. J.; Houghton, B. F.
2012-12-01
The AD 1314±12 yr Kaharoa eruption of Tarawera volcano, New Zealand, produced deposits exhibiting both plinian and subplinian characteristics (Nairn et al., 2001; 2004, Leonard et al., 2002, Hogg et al., 2003). Their widespread dispersal yielded volumes, column heights, and mass discharge rates of plinian magnitude and intensity (Sahetapy-Engel, 2002); however, vertical shifts in grain size suggest waxing and waning within single phases and time breaks on the order of hours between phases. These grain size shifts were quantified using sieve, laser diffraction, and image analysis of the fine ash fractions (<1 mm in diameter) of some of the most explosive phases of the eruption. These analyses served two purposes: 1) to characterise the change in eruption intensity over time, and 2) to compare the three methods of grain size analysis. Additional analyses of the proportions of components and particle shape were also conducted to aid in the interpretation of the eruption and transport dynamics. 110 samples from a single location about 6 km from source were sieved at half-phi intervals between -4φ and 4φ (16 mm - 63 μm). A single sample was then chosen to test the range of grain sizes to run through the Mastersizer 2000. Three aliquots were tested; the first consisted of each sieve size fraction ranging between 0φ (1000 μm) and <4φ (<63 μm, i.e. the pan). For example, 0, 0.5, 1, …, 4φ, and the pan were run through the Mastersizer and then their results, weighted according to their sieve weight percents, were summed together to produce a total distribution. The second aliquot included 3 samples ranging between 0-2φ (1000-250 μm), 2.5-4φ (249-63 μm), and the pan. A single sample consisting of the total range of grain sizes between 0φ and the pan was used for the final aliquot. Their results were compared and it was determined that the single sample consisting of the broadest range of grain sizes yielded an accurate grain size distribution. These data were then compared with the sieve weight percent data, revealing a significant difference in size characterisation between sieving and the Mastersizer for size fractions between 0-3φ (1000-125 μm). This is due predominantly to the differing methods that sieving and the Mastersizer use to characterise a single particle, to inhomogeneity in grain density within each grain-size fraction, and to grain-shape irregularities. These factors led the Mastersizer to allocate grains from a given sieve size fraction into coarser size fractions. Therefore, only the Mastersizer data from 3.5φ and below were combined with the coarser sieve data to yield total grain size distributions. This high-resolution analysis of the grain size data enabled subtle trends in grain size to be identified and related to short-timescale eruptive processes.
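For readers unfamiliar with the phi scale used above, phi = -log2(d / 1 mm), so each whole phi step halves the grain diameter; the quick check below reproduces the interval quoted in the text.

# Convert phi values to grain diameters: d = 2**(-phi) mm.
for phi in (-4, 0, 3.5, 4):
    d_mm = 2.0 ** (-phi)
    print(f"{phi:>4} phi -> {d_mm * 1000:.0f} um")   # -4 phi = 16 mm, 4 phi = 63 um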
Lahey, Joanna N.; Beasley, Ryan A.
2014-01-01
This paper briefly discusses the history, benefits, and shortcomings of traditional audit field experiments to study market discrimination. Specifically it identifies template bias and experimenter bias as major concerns in the traditional audit method, and demonstrates through an empirical example that computerization of a resume or correspondence audit can efficiently increase sample size and greatly mitigate these concerns. Finally, it presents a useful meta-tool that future researchers can use to create their own resume audits. PMID:24904189
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
NASA Astrophysics Data System (ADS)
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
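A minimal sketch of the eddy-covariance calculation underlying these flux estimates: the number flux in each size bin is the covariance of the fluctuating vertical wind with the fluctuating particle concentration. The simple block-mean detrending and the variable names are illustrative, not the WAGES/CLASP processing chain.

```python
import numpy as np

def eddy_covariance_flux(w, n):
    """Aerosol number flux (m^-2 s^-1) from vertical wind w (m s^-1) and number
    concentration n (m^-3) sampled synchronously over one averaging block."""
    w_prime = w - w.mean()      # fluctuations about the block mean
    n_prime = n - n.mean()
    return np.mean(w_prime * n_prime)

# size-resolved fluxes from a (time x size-bin) concentration matrix, one bin at a time:
# flux_per_bin = [eddy_covariance_flux(w, conc[:, k]) for k in range(conc.shape[1])]
```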
Contado, Catia; Dalpiaz, Alessandro; Leo, Eliana; Zborowski, Maciej; Williams, P. Stephen
2009-01-01
Poly(lactic acid) nanoparticles were synthesized using a modified evaporation method, testing two different surfactants (sodium cholate and Pluronic F68) for the process. During their formulation the prodrug 5′-octanoyl-CPA (Oct-CPA) of the antiischemic N6-cyclopentyladenosine (CPA) was encapsulated. Three different purification methods were compared with respect to the influence of surfactant on the size characteristics of the final nanoparticle product. Flow and sedimentation field-flow fractionation techniques (FlFFF and SdFFF, respectively) were used to size-characterize the five poly(lactic acid) particle samples. Two different carrier solution (mobile phase) compositions were employed in the FlFFF analyses, while a solution of poly(vinyl alcohol) was used as the mobile phase for the SdFFF runs. The separation performances of the two techniques were compared and the particle size distributions, derived from the fractograms, were interpreted with the support of observations by scanning electron microscopy. Some critical aspects, such as the carrier choice and the channel thickness determination for FlFFF, have been investigated. This is the first comprehensive comparison of the two FFF techniques for characterizing non-standard particulate materials. The two FFF techniques proved to be complementary and gave good, congruent and very useful information on the size distributions of the five poly(lactic acid) particle samples. PMID:17482199
Aoki, Kenichi
2018-04-05
In apparent contradiction to the theoretically predicted effect of population size on the quality/quantity of material culture, statistical analyses on ethnographic hunter-gatherers have shown an absence of correlation between population size and toolkit size. This has sparked a heated, if sometimes tangential, debate as to the usefulness of the theoretical models and as to what modes of cultural transmission humans are capable of and hunter-gatherers rely on. I review the directly relevant theoretical literature and argue that much of the confusion is caused by a mismatch between the theoretical variable and the empirical observable. I then confirm that a model incorporating the appropriate variable does predict a positive association between population size and toolkit size for random oblique, vertical, best-of-K, conformist, anticonformist, success bias and one-to-many cultural transmission, with the caveat that for all populations sampled, the population size has remained constant and toolkit size has reached the equilibrium for this population size. Finally, I suggest three theoretical scenarios, two of them involving variable population size, that would attenuate or eliminate this association and hence help to explain the empirical absence of correlation. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
Characterization of Inclusion Populations in Mn-Si Deoxidized Steel
NASA Astrophysics Data System (ADS)
García-Carbajal, Alfonso; Herrera-Trejo, Martín; Castro-Cedeño, Edgar-Ivan; Castro-Román, Manuel; Martinez-Enriquez, Arturo-Isaias
2017-12-01
Four plant heats of Mn-Si deoxidized steel were conducted to follow the evolution of the inclusion population through ladle furnace (LF) treatment and subsequent vacuum treatment (VT). The liquid steel was sampled, and the chemical composition and size distribution of the inclusion populations were characterized. The Gumbel, generalized extreme-value (GEV) and generalized Pareto (GP) distributions were used for the statistical analysis of the inclusion size distributions. The inclusions found at the beginning of the LF treatment were mostly fully liquid SiO2-Al2O3-MnO inclusions, which then evolved into fully liquid SiO2-Al2O3-CaO-MgO and partly liquid SiO2-CaO-MgO-(Al2O3-MgO) inclusions detected at the end of the VT. The final fully liquid inclusions had a desirable chemical composition for plastic behavior in subsequent metallurgical operations. The GP distribution was found to be unsuitable for the statistical analysis. The GEV distribution approach led to shape parameter values different from the zero value hypothesized under the Gumbel distribution. According to the GEV approach, some of the final inclusion size distributions had statistically significant differences, whereas the Gumbel approach predicted no statistically significant differences. Finally, the heats were ranked according to indicators of inclusion cleanliness and a statistical comparison of the size distributions.
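A sketch of the extreme-value fits mentioned above, using SciPy's standard distributions on a hypothetical set of maximum inclusion sizes (one per inspected field); note that the Gumbel model is the special case of the GEV with zero shape parameter, which is the hypothesis the GEV fit is checked against.

```python
import numpy as np
from scipy import stats

# hypothetical maximum inclusion diameters (micrometres), one per inspected field
max_sizes = np.array([3.1, 4.2, 2.8, 5.0, 3.7, 4.9, 6.3, 3.3, 4.1, 5.6])

# Gumbel fit (location, scale); equivalent to a GEV with the shape fixed at zero
gum_loc, gum_scale = stats.gumbel_r.fit(max_sizes)

# GEV fit; SciPy's shape parameter c corresponds to -xi in the usual notation,
# so a fitted c far from zero argues against the Gumbel hypothesis
c, gev_loc, gev_scale = stats.genextreme.fit(max_sizes)

# generalized Pareto fit to exceedances over a threshold u
u = np.percentile(max_sizes, 50)
exceedances = max_sizes[max_sizes > u] - u
gp_shape, gp_loc, gp_scale = stats.genpareto.fit(exceedances, floc=0)
```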
Effect of Heat and Laser Treatment on Cu2S Thin Film Sprayed on Polyimide Substrate
NASA Astrophysics Data System (ADS)
Magdy, Wafaa; Mahmoud, Fawzy A.; Nassar, Amira H.
2018-02-01
Three samples of copper sulfide Cu2S thin film were deposited on polyimide substrate by spray pyrolysis using a deposition temperature of 400°C and a deposition time of about 45 min. One of the samples was left as deposited, another was heat treated, and the third was laser treated. The structural, surface morphological, optical, mechanical, and electrical properties of the films were investigated. X-ray diffraction (XRD) analysis showed that the copper sulfide films were close to the copper-rich phase (Cu2S). Increased crystallite size after heat and laser treatment was confirmed by XRD analysis and scanning electron microscopy. Vickers hardness measurements showed that the samples' hardness values were enhanced with increasing crystallite size, representing an inverse Hall-Petch (H-P) effect. The calculated optical bandgap of the treated films was lower than that of the as-deposited film. Finally, it was found that both heat and laser treatment enhanced the physical properties of the sprayed Cu2S films on polyimide substrate for use in solar energy applications.
Determination of the origin and texture of marble artifacts using stable isotopes
NASA Astrophysics Data System (ADS)
Dotsika, E.; Poutoukis, D.; Zisi, N.; Psomiadis, D.
2009-04-01
For the characterization of marble and the identification of the origin of marble artifacts, samples from several ancient monuments of Greece were analyzed using a combination of techniques: stable isotopes of carbonates (13C, 18O), XRD analysis and optical microscopy, from which information can be obtained on the origin and texture of the marble used for the production of the artifacts. The full range of grain sizes and isotopic signatures that occur in many different quarries has been measured and presented. In a δ13C versus δ18O diagram, the fields corresponding to all known ancient quarries (from Penteli, Cyclades, especially Naxos (Mela, Apol, Apir, Senax), Keros, Paros (Parlak, Parlyc) and Asia Minor (Prokon)) are reported. The plots representing the analyzed samples are also shown on the same diagram. The final results of the study indicate the origin of the carbonate material of the artefacts from each of the ancient monuments. In cases where the samples plot in overlapping areas, further study using the maximum grain size of the material is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbold, E. B.; Walton, O.; Homel, M. A.
2015-10-26
This document serves as a final report to a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screed geometries.
Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane
2006-08-01
Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferential sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and finally a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m^3 of urban outdoor air doped with approximately 1.1 x 10^5 spores resulted in a 0.04 proportion of false negatives. For the same sample volume of urban air without spores, the proportion of false positives was 0.08.
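A sketch of the linear-correlation screening described above: each single-shot LIBS spectrum is correlated against a reference spore spectrum and flagged as a detection when the correlation exceeds a threshold, from which false-positive and false-negative proportions follow. The threshold and array shapes are placeholders rather than values from the study.

```python
import numpy as np

def correlation_detect(spectra, reference, threshold=0.8):
    """Flag shots whose Pearson correlation with the reference spectrum exceeds threshold.
    spectra: (n_shots, n_wavelengths); reference: (n_wavelengths,)."""
    r = np.array([np.corrcoef(s, reference)[0, 1] for s in spectra])
    return r > threshold

# With labelled test shots, the false-negative proportion is the fraction of
# spore-containing shots not flagged, and the false-positive proportion is the
# fraction of spore-free shots that are flagged.
```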
NASA Astrophysics Data System (ADS)
Semiatin, S. L.; Shank, J. M.; Shiveley, A. R.; Saurber, W. M.; Gaussa, E. F.; Pilchak, A. L.
2014-12-01
The effect of subsolvus forging temperature and strain rate on the grain size developed during final supersolvus heat treatment (SSHT) of two powder-metallurgy gamma-gamma prime superalloys, IN-100 and LSHR, was established. For this purpose, isothermal hot compression tests were performed at temperatures ranging from 1144 K (871 °C) to 22 K (22 °C) below the respective gamma-prime solvus temperatures (Tγ') and strain rates between 0.0003 and 10 s-1. Deformed samples were then heat treated 20 K (20 °C) above the solvus for 1 h, with selected additional samples exposed for shorter and longer times. For both alloys, the grain size developed during SSHT was in the range of 15 to 30 μm, except for those processing conditions consisting of pre-deformation at the highest temperature, i.e., Tγ' - 22 K (Tγ' - 22 °C), and strain rates in the range of ~0.001 to 0.1 s-1. In these latter instances, the heat-treated grain size was approximately four times as large. The observations were interpreted in terms of the mechanisms of deformation during hot working and their effect on the driving forces for grain-boundary migration, which controls the evolution of the gamma grain size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Boning; Herbold, Eric B.; Homel, Michael A.
2015-12-01
An adaptive particle fracture model for the poly-ellipsoidal Discrete Element Method is developed. A poly-ellipsoidal particle breaks into several sub-poly-ellipsoids according to a Hoek-Brown fracture criterion based on the continuum stress and the maximum tensile stress in contacts. Weibull theory is also introduced to account for statistical and size effects on particle strength. Finally, a high strain-rate split Hopkinson pressure bar experiment on silica sand is simulated using this newly developed model. Comparisons with experiments show that the particle fracture model captures the mechanical behavior of this experiment very well, both in the stress-strain response and in the particle size redistribution. The effects of density and packing of the samples are also studied in numerical examples.
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established: the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. For false positive rate estimation, all selected ranking metrics were robust with respect to sample size. For sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler and Minimum Significant Difference metrics showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes; it also finds more enriched pathways than the other tested metrics, which may lead to new biological discoveries.
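For illustration, a sketch of one of the four selected metrics, the absolute Signal-To-Noise ratio, computed gene-wise between two groups; the other metrics (Moderated Welch Test, Minimum Significant Difference, Baumgartner-Weiss-Schindler) would slot into the same ranking step. Array shapes are hypothetical, and this is not the MrGSEA implementation.

```python
import numpy as np

def abs_signal_to_noise(expr_a, expr_b):
    """|SNR| per gene: |mean_A - mean_B| / (std_A + std_B).
    expr_a, expr_b: (n_samples, n_genes) expression matrices for the two groups."""
    num = np.abs(expr_a.mean(axis=0) - expr_b.mean(axis=0))
    den = expr_a.std(axis=0, ddof=1) + expr_b.std(axis=0, ddof=1)
    return num / den

# genes are then sorted by this score to form the ranked list fed into the enrichment step
```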
Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N
2016-06-01
When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of Δ and σ and of their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting, based on the available prior information on the difference Δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
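A hedged Monte Carlo sketch of the averaging idea: the classical two-sample power curve is averaged over priors on the mean difference Δ and standard deviation σ, and here the smallest per-arm sample size reaching a target conditional expected power is located. The paper instead back-solves for the effect size to insert into a traditional sample size calculation, and the normal priors, α level and search grid below are illustrative, not the authors' formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def conditional_expected_power(n_per_arm, mu_delta, sd_delta, mu_sigma, sd_sigma,
                               alpha=0.05, draws=20000):
    """Average two-sample normal-approximation power over priors on Delta and sigma."""
    delta = rng.normal(mu_delta, sd_delta, draws)
    sigma = np.abs(rng.normal(mu_sigma, sd_sigma, draws))        # keep sigma positive
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    ncp = np.abs(delta) / (sigma * np.sqrt(2.0 / n_per_arm))     # approximate noncentrality
    return stats.norm.cdf(ncp - z_alpha).mean()

# smallest n per arm with conditional expected power of at least 80% (placeholder priors)
n = next(n for n in range(10, 2000)
         if conditional_expected_power(n, mu_delta=5.0, sd_delta=2.0,
                                       mu_sigma=12.0, sd_sigma=3.0) >= 0.80)
```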
VizieR Online Data Catalog: SAMI Galaxy Survey: gas streaming (Cecil+, 2016)
NASA Astrophysics Data System (ADS)
Cecil, G.; Fogarty, L. M. R.; Richards, S.; Bland-Hawthorn, J.; Lange, R.; Moffett, A.; Catinella, B.; Cortese, L.; Ho, I.-T.; Taylor, E. N.; Bryant, J. J.; Allen, J. T.; Sweet, S. M.; Croom, S. M.; Driver, S. P.; Goodwin, M.; Kelvin, L.; Green, A. W.; Konstantopoulos, I. S.; Owers, M. S.; Lawrence, J. S.; Lorente, N. P. F.
2016-08-01
From the first ~830 targets observed in the SGS, we selected 344 rotationally supported galaxies having enough gas to map their CSC. We rejected 8 whose inclination angle to us is too small (i<20°) to be established reliably by photometry, and those very strongly barred or in obvious interactions. Finally, we rejected those whose CSC would be smeared excessively by our PSF (Sect. 2.3.1) because of large inclination (i>71°), compact size, or observation in atrocious conditions, leaving 163 galaxies in the SGS GAMA survey sub-sample and 15 in the "cluster" sub-sample with discs. (3 data files).
Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.
NASA Astrophysics Data System (ADS)
Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat
2017-04-01
Electron backscatter diffraction (EBSD) has extended significantly our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis, as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature, and it is important to verify that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus, extending EBSD analysis to larger sample sizes that include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system, which limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30 mm for the Otago cryo-EBSD facility), achieved by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100 mm perpendicular to the tilt axis by 150 mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50 mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm they will start to sublime and the quality of the EBSD data will degrade. Large samples need to be relatively thin (5 mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility samples of up to 40 mm by 40 mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60 mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.
Development of a multichannel hyperspectral imaging probe for food property and quality assessment
NASA Astrophysics Data System (ADS)
Huang, Yuping; Lu, Renfu; Chen, Kunjie
2017-05-01
This paper reports on the development, calibration and evaluation of a new multipurpose, multichannel hyperspectral imaging probe for property and quality assessment of food products. The new multichannel probe consists of a 910 μm fiber as a point light source and 30 light receiving fibers of three sizes (i.e., 50 μm, 105 μm and 200 μm) arranged in a special pattern to enhance signal acquisitions over the spatial distances of up to 36 mm. The multichannel probe allows simultaneous acquisition of 30 spatially-resolved reflectance spectra of food samples with either flat or curved surface over the spectral region of 550-1,650 nm. The measured reflectance spectra can be used for estimating the optical scattering and absorption properties of food samples, as well as for assessing the tissues of the samples at different depths. Several calibration procedures that are unique to this probe were carried out; they included linearity calibrations for each channel of the hyperspectral imaging system to ensure consistent linear responses of individual channels, and spectral response calibrations of individual channels for each fiber size group and between the three groups of different size fibers. Finally, applications of this new multichannel probe were demonstrated through the optical property measurement of liquid model samples and tomatoes of different maturity levels. The multichannel probe offers new capabilities for optical property measurement and quality detection of food and agricultural products.
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra
2015-01-01
The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818
Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction
Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...
2015-08-19
We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.
2011-01-01
Background The relationship between urbanicity and adolescent health is a critical issue for which little empirical evidence has been reported. Although an association has been suggested, a dichotomous rural versus urban comparison may not succeed in identifying differences between adolescent contexts. This study aims to assess the influence of locality size on risk behaviors in a national sample of young Mexicans living in low-income households, while considering the moderating effect of socioeconomic status (SES). Methods This is a secondary analysis of three national surveys of low-income households in Mexico in different settings: rural, semi-urban and urban areas. We analyzed risk behaviors in 15-21-year-olds and their potential relation to urbanicity. The risk behaviors explored were: tobacco and alcohol consumption, sexual initiation and condom use. The adolescents' localities of residence were classified according to the number of inhabitants in each locality. We used a logistic model to identify an association between locality size and risk behaviors, including an interaction term with SES. Results The final sample included 17,974 adolescents from 704 localities in Mexico. Locality size was associated with tobacco and alcohol consumption, showing a similar effect throughout all SES levels: the larger the locality, the lower the risk of consuming tobacco or alcohol compared with rural settings. The effect of locality size on sexual behavior was more complex. The odds of adolescent condom use were higher in larger localities only among adolescents in the lowest SES levels. We found no statistically significant association between locality size and sexual initiation. Conclusions The results suggest that in this sample of adolescents from low-income areas in Mexico, risk behaviors are related to locality size (number of inhabitants). Furthermore, for condom use, this relation is moderated by SES. Such heterogeneity suggests the need for more detailed analyses both of the effects of urbanicity on behavior and of the responses, themselves heterogeneous, required to address this situation. PMID:22129110
NASA Astrophysics Data System (ADS)
Nenes, A.; Medina, J.; Cottrell, L.; Griffin, R.
2005-12-01
Ground measurements of cloud condensation nuclei (CCN) were made during July and August of 2004 as part of the NEAQS ITCT-2K4 (New England Air Quality Study - Intercontinental Transport and Chemical Transformation 2004) mission at the Thompson Farm sampling site maintained by the University of New Hampshire. Over the duration of the field campaign, the two CCN instruments (built by Droplet Measurement Technologies, Inc.) were used to measure the concentration of CCN at 0.1, 0.2, 0.3, 0.37, 0.4, 0.5 and 0.6% supersaturation continuously over extended periods of time. One of the CCN instruments sampled unclassified ambient aerosol and the other was operated in our newly developed "Scanning Mobility CCN Analysis" technique (in which classified ambient aerosol obtained from a scanning DMA is introduced into the CCN counter), which allows the rapid characterization of the activation properties of classified ambient aerosol. Aerosol size distributions were measured using a TSI scanning mobility particle sizer (SMPS 3080). In addition, an Aerodyne Aerosol Mass Spectrometer (AMS) operated by the University of New Hampshire was used to measure the size-resolved chemical composition of the aerosol. We analyze the measurements using detailed numerical models of the CCN instrumentation. By close integration of measurements and theory, CCN closure can be assessed and real-time observations of CCN mixing state, ageing and droplet growth kinetics can be obtained. Finally, we derive characteristic aggregate properties for the carbonaceous component of the CCN, and discuss how this information can be introduced into aerosol-cloud interaction modules for GCM assessments of the aerosol indirect effect.
The counting of native blood cells by digital microscopy
NASA Astrophysics Data System (ADS)
Torbin, S. O.; Doubrovski, V. A.; Zabenkov, I. V.; Tsareva, O. E.
2017-03-01
An algorithm for processing photographic images of blood samples in the native state was developed to determine the concentrations of erythrocytes, leukocytes and platelets without separate individual preparation of cell samples. Special "photo templates" were proposed for identifying red blood cells. The effect of "highlighting" of leukocytes, which was found by the authors, was used to increase the accuracy of counting this cell type. Finally, to improve the discrimination of platelets from leukocytes, the areas of their photographic images were used rather than their linear sizes. It is shown that the accuracy of cell counting for native blood samples may be comparable with the accuracy of similar studies on smears. At the same time, the proposed native blood analysis greatly simplifies the sample preparation procedure in comparison to a smear and makes it possible to move from measuring blood cell ratios to determining their concentrations in the sample.
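A sketch of the area-based discrimination mentioned above: after thresholding one photographic field, connected components are labelled and their pixel areas (rather than linear sizes) are used to separate platelets from leukocytes. The intensity threshold and area cut-offs are placeholders, not values from the study.

```python
import numpy as np
from scipy import ndimage

def count_by_area(image, intensity_thresh, platelet_max_area, leukocyte_min_area):
    """Count objects in a grayscale field and split them by pixel area."""
    mask = image > intensity_thresh                  # binarize the field
    labels, n_objects = ndimage.label(mask)          # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))
    platelets = int(np.sum(areas < platelet_max_area))
    leukocytes = int(np.sum(areas >= leukocyte_min_area))
    return n_objects, platelets, leukocytes
```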
Physical characterization of whole and skim dried milk powders.
Pugliese, Alessandro; Cabassi, Giovanni; Chiavaro, Emma; Paciulli, Maria; Carini, Eleonora; Mucchetti, Germano
2017-10-01
The lack of updated knowledge about the physical properties of milk powders prompted us to evaluate selected physical properties (water activity, particle size, density, flowability, solubility and colour) of eleven skim and whole milk powders produced in Europe. These physical properties are crucial both for the management of milk powder during the final steps of the drying process and for their use as food ingredients. In general, except for the values of water activity, the physical properties of skim and whole milk powders are very different. Particle sizes of the spray-dried skim milk powders, measured as volume and surface mean diameters, were significantly lower than those of the whole milk powders, while the roller-dried sample showed the largest particle size. For all the samples the size distribution was quite narrow, with a span value of less than 2. The loose density of skim milk powders was significantly higher than that of whole milk powders (541.36 vs 449.75 kg/m3). Flowability, measured by the Hausner ratio and Carr's index indicators, ranged from passable to poor when evaluated according to pharmaceutical criteria. The insolubility index of the spray-dried skim and whole milk powders, measured as the weight of the sediment (from 0.5 to 34.8 mg), allowed a good discrimination of the samples. Colour analysis underlined the relevant contribution of fat content and particle size, resulting in higher lightness (L*) for skim milk powder than whole milk powder, which, on the other hand, showed higher yellowness (b*) and lower greenness (-a*). In conclusion, a detailed knowledge of the functional properties of milk powders may allow the dairy to tailor the products to the user and help the food processor to make a targeted choice according to the intended use.
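For reference, a minimal sketch of the two flowability indicators cited above, computed from loose (bulk) and tapped densities; the tapped density used in the example is a placeholder, not a value from the paper.

```python
def hausner_ratio(loose_density, tapped_density):
    return tapped_density / loose_density

def carr_index(loose_density, tapped_density):
    return 100.0 * (tapped_density - loose_density) / tapped_density

# e.g. a skim milk powder with loose density 541 kg/m3 and a hypothetical tapped density of 650 kg/m3
hr = hausner_ratio(541.0, 650.0)   # ~1.20
ci = carr_index(541.0, 650.0)      # ~16.8 %
```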
Influence of Microstructural Disorder and Wavefield in Dynamic Fracture
NASA Astrophysics Data System (ADS)
Alizee, D.; Bonamy, D.
2017-12-01
Dynamic fracture and its instabilities have been widely studied, but the influence of finite sample size and the associated 3D aspects are generally neglected. However, a sample of a few centimeters acts as a waveguide for the elastodynamic field emitted by the propagating crack front (from 100 kHz to a few GHz): it excites the sample's free oscillations (or normal modes) and creates a fluctuating landscape of elastic energy. This may be seen as an effective noise, with an amplitude proportional to the frequency of a given mode, which can reach the same order of magnitude as the fracture toughness (in PMMA: 10^3 J.m-2 for f ~ MHz). We designed an experiment to evidence this effect in a homogeneous brittle material (PMMA) and subsequently to characterize the possible coupling between the fracture front and its wavefield. Dynamic cracks are driven by means of a wedge-splitting geometry, which allows us to modulate the velocity of the crack tip over a wide range. The spatial geometry and frequency content of the emitted wavefield are modulated by adjusting the geometry of the sample and the loading conditions. Signatures of the wavefield are sought in the high-frequency fluctuations of the crack speed, measured on both sides of the specimen via a state-of-the-art potential drop method. Fractography and statistical analysis of the post-mortem fracture surfaces are used to characterize the mesoscale/microstructure-scale response of the crack front to the wavefield. Experiments performed in PMMA will finally be compared to others performed on heterogeneous materials with controlled defect sizes (40-500 µm). This study will permit us (i) to shed light on the key role of the elastic wavefield in dynamic fracture and how it is selected by the sample geometry and microstructure, and (ii) to give some leads on how to account for these effects by adapting the paradigm of interface growth models to the case of dynamic fracture.
Faraji Khiavi, F; Amiri, E; Ghobadian, S; Roshankar, R
2015-01-01
Background: Increasing nurses' motivation is among the most important and complex nursing duties. A performance evaluation system can be used as a means to improve the quantity and quality of human resources. Therefore, the current research aimed to evaluate the effect of the final evaluation on job motivation from the perspective of nurses in Ahvaz hospitals according to the Herzberg scheme. Methods: This investigation was conducted in 2012. The research population included nurses in Ahvaz educational hospitals. The sample size was calculated at 120 and sampling was performed based on classification and random sampling. The research instrument was a self-made questionnaire with validity confirmed through content analysis and Cronbach's alpha calculated at 0.94. Data were examined using ANOVA, t-tests, and descriptive statistics. Results: The nurses considered the final evaluation relatively effective for management policy (3.2 ± 1.11) and monitoring (3.15 ± 1.15) among the hygiene items, and for responsibility (3.15 ± 1.15) and progress (3.06 ± 1.24) among the motivational factors. There was a significant association between the scores of nurses' views across age and sex groups (P = 0.01), but no significant association with educational level or marital status. Conclusion: Experienced nurses believed that the evaluation has little effect on job motivation. If the annual assessment covered the various aspects of the job, managers could use it as an efficient tool to motivate nurses. PMID:28316733
Denagamage, Thomas N; Patterson, Paul; Wallner-Pendleton, Eva; Trampel, Darrell; Shariat, Nikki; Dudley, Edward G; Jayarao, Bhushan M; Kariyawasam, Subhashinie
2016-11-01
The Pennsylvania Egg Quality Assurance Program (EQAP) provided the framework for Salmonella Enteritidis (SE) control programs, including the Food and Drug Administration (FDA) mandated Final Egg Rule, for commercial layer facilities throughout the United States. Although flocks with ≥3000 birds must comply with the FDA Final Egg Rule, smaller flocks are exempted from the rule. As a result, eggs produced by small layer flocks may pose a greater public health risk than those from larger flocks. It is also unknown whether the EQAPs developed with large flocks in mind are suitable for small- and medium-sized flocks. Therefore, a study was performed to evaluate the effectiveness of the best management practices included in EQAPs in reducing SE contamination of small- and medium-sized flocks by longitudinal monitoring of their environment and eggs. A total of 59 medium-sized (3000 to 50,000 birds) and small-sized (<3000 birds) flocks from two major layer production states of the United States were enrolled and monitored for SE by culturing different types of environmental samples and shell eggs for two consecutive flock cycles. Isolated SE was characterized by phage typing, pulsed-field gel electrophoresis (PFGE), and clustered regularly interspaced short palindromic repeats-multi-virulence-locus sequence typing (CRISPR-MVLST). Fifty-four Salmonella isolates belonging to 17 serovars, 22 of which were SE, were isolated from multiple sample types. Typing revealed that the SE isolates belonged to three phage types (PTs), three PFGE fingerprint patterns, and three CRISPR-MVLST SE Sequence Types (ESTs). A majority (91%) of the SE isolates belonged to PT8 and the JEGX01.0004 PFGE pattern, the SE types most frequently associated with foodborne illness in the United States. Of the three ESTs observed, 85% of SE isolates were typed as EST4. The proportion of SE-positive hen house environmental samples during flock cycle 2 was significantly lower than during flock cycle 1, demonstrating that current EQAP practices were effective in reducing SE contamination of medium and small layer flocks.
Are large clinical trials in orthopaedic trauma justified?
Sprague, Sheila; Tornetta, Paul; Slobogean, Gerard P; O'Hara, Nathan N; McKay, Paula; Petrisor, Brad; Jeray, Kyle J; Schemitsch, Emil H; Sanders, David; Bhandari, Mohit
2018-04-20
The objective of this analysis is to evaluate the necessity of large clinical trials using FLOW trial data. The FLOW pilot study and definitive trial were factorial trials evaluating the effect of different irrigation solutions and pressures on re-operation. To explore treatment effects over time, we analyzed data from the pilot and definitive trial in increments of 250 patients until the final sample size of 2447 patients was reached. At each increment we calculated the relative risk (RR) and associated 95% confidence interval (CI) for the treatment effect, and compared the results that would have been reported at the smaller enrolments with those seen in the final, adequately powered study. The pilot study analysis of 89 patients and the initial incremental enrolments in the FLOW definitive trial favored low pressure compared to high pressure (RR: 1.50, 95% CI: 0.75-3.04; RR: 1.39, 95% CI: 0.60-3.23, respectively), in contrast to the final enrolment, which found no difference between high and low pressure (RR: 1.04, 95% CI: 0.81-1.33). In the soap versus saline comparison, the FLOW pilot study suggested that the re-operation rate was similar in both the soap and saline groups (RR: 0.98, 95% CI: 0.50-1.92), whereas the FLOW definitive trial found that the re-operation rate was higher in the soap treatment arm (RR: 1.28, 95% CI: 1.04-1.57). Our findings suggest that studies with smaller sample sizes would have led to erroneous conclusions in the management of open fracture wounds. NCT01069315 (FLOW Pilot Study) Date of Registration: February 17, 2010; NCT00788398 (FLOW Definitive Trial) Date of Registration: November 10, 2008.
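A sketch of the incremental analysis described above: at each enrolment cut the relative risk of re-operation and its 95% confidence interval are recomputed from the 2x2 counts accumulated so far. The counts in the example are placeholders, not FLOW data.

```python
import numpy as np

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of arm A versus arm B with a 95% CI (log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = np.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = np.exp(np.log(rr) - z * se_log), np.exp(np.log(rr) + z * se_log)
    return rr, lo, hi

# e.g. recompute at cuts of 250, 500, ... patients and watch the interval narrow
print(relative_risk(30, 125, 22, 125))   # placeholder counts at one interim cut
```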
NASA Astrophysics Data System (ADS)
Shekar, Yamini
This research investigates the nano-scale pore structure of cementitious mortars undergoing delayed ettringite formation (DEF) using small angle x-ray scattering (SAXS). DEF is known to cause expansion and cracking at later ages (around 4000 days) in concrete that has been heat cured at temperatures of 70°C or above. Though DEF normally occurs in heat-cured concrete, mass-cured concrete can also experience DEF. Large crystalline pressures result in smaller pore sizes. The objectives of this research are: (1) to investigate why some samples expand earlier rather than later, (2) to evaluate the effects of curing conditions on pore size distributions at high temperatures, and (3) to assess the evolution of the pore size distributions over time. The most important outcome of the research is that the pore sizes obtained from SAXS were used in the development of a 3-stage model. From the data obtained, the measured pore sizes increase in stage 1 as initial ettringite formation fills up the smallest pores. Once the critical pore size threshold (around 20 nm) is reached, stage 2 begins, in which cracking tends to decrease the pore sizes. Finally, in stage 3, the cracking continues, thereby increasing the pore size.
Scale and Sampling Effects on Floristic Quality
2016-01-01
Floristic Quality Assessment (FQA) is increasingly influential for making land management decisions, for directing conservation policy, and for research. But the basic ecological properties and limitations of its metrics are ill-defined and not well understood, especially those related to sample methods and scale. Nested plot data from a remnant tallgrass prairie sampled annually over a 12-year period were used to investigate FQA properties associated with species detection rates, species misidentification rates, sample year, and sample grain/area. Plot size had no apparent effect on Mean C (an area's average Floristic Quality level), nor did species detection levels above 65% detection. Simulated species misidentifications affected Mean C values only at rates greater than 10% in large plots, when the replaced species were randomly drawn from the broader county-wide species pool. Finally, FQA values were stable over the 12-year study, meaning that there was no evidence that the metrics exhibit year effects. The FQA metric Mean C is demonstrated to be robust to varied sample methodologies related to sample intensity (plot size, species detection rate), as well as sample year. These results will make FQA measures even more appealing for informing land-use decisions, policy, and research for two reasons: 1) the sampling effort needed to generate accurate and consistent site assessments with FQA measures is shown to be far lower than what has previously been assumed, and 2) the stable properties and consistent performance of the metrics with respect to sample methods will allow for a remarkable level of comparability of FQA values from different sites and datasets compared to other commonly used ecological metrics. PMID:27489959
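A minimal sketch of the Mean C calculation behind these results, together with the related Floristic Quality Index; the species-to-coefficient mapping is a placeholder, and species without an assigned coefficient of conservatism are simply skipped, one common convention.

```python
import numpy as np

def floristic_quality(species, c_values):
    """Mean C and FQI from a plot species list and a dict of coefficients of conservatism."""
    c = np.array([c_values[s] for s in species if s in c_values])
    mean_c = c.mean()
    fqi = mean_c * np.sqrt(len(c))       # FQI = Mean C * sqrt(richness of scored species)
    return mean_c, fqi

# placeholder coefficients for a tiny plot inventory
c_values = {"species_a": 5, "species_b": 7, "species_c": 2}
print(floristic_quality(list(c_values), c_values))   # Mean C ~ 4.67, FQI ~ 8.1
```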
Grimplet, Jérôme; Tello, Javier; Laguna, Natalia; Ibáñez, Javier
2017-01-01
Grapevine cluster compactness has a clear impact on fruit quality and health status, as clusters with greater compactness are more susceptible to pests and diseases and ripen more asynchronously. Different parameters related to inflorescence and cluster architecture (length, width, branching, etc.), fruitfulness (number of berries, number of seeds) and berry size (length, width) contribute to the final level of compactness. From a collection of 501 clones of cultivar Garnacha Tinta, two compact and two loose clones with stable differences for cluster compactness-related traits were selected and phenotyped. Key organs and developmental stages were selected for sampling and transcriptomic analyses. Comparison of global gene expression patterns in flowers at the end of bloom allowed identification of potential gene networks with a role in determining the final berry number, berry size and ultimately cluster compactness. A large portion of the differentially expressed genes were found in networks related to cell division (carbohydrates uptake, cell wall metabolism, cell cycle, nucleic acids metabolism, cell division, DNA repair). Their greater expression level in flowers of compact clones indicated that the number of berries and the berry size at ripening appear related to the rate of cell replication in flowers during the early growth stages after pollination. In addition, fluctuations in auxin and gibberellin signaling and transport related gene expression support that they play a central role in fruit set and impact berry number and size. Other hormones, such as ethylene and jasmonate may differentially regulate indirect effects, such as defense mechanisms activation or polyphenols production. This is the first transcriptomic based analysis focused on the discovery of the underlying gene networks involved in grapevine traits of grapevine cluster compactness, berry number and berry size. PMID:28496449
NASA Technical Reports Server (NTRS)
Bond, Thomas H. (Technical Monitor); Anderson, David N.
2004-01-01
This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.
Final Results of Shuttle MMOD Impact Database
NASA Technical Reports Server (NTRS)
Hyde, J. L.; Christiansen, E. L.; Lear, D. M.
2015-01-01
The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.
NASA Astrophysics Data System (ADS)
Wohlschlögel, Markus; Steegmüller, Rainer; Schüßler, Andreas
2014-07-01
Nonmetallic inclusions in Nitinol, such as carbides (TiC) and intermetallic oxides (Ti4Ni2Ox), are known to be triggers for fatigue failure of Nitinol medical devices. These mechanically brittle inclusions are introduced during the melting process. As a result of hot and cold working in the production of Nitinol tubing, inclusions are fractionalized due to the mechanical deformation imposed. While the role of inclusions in Nitinol fatigue performance has been studied extensively in the past, their effect on Nitinol corrosion behavior has been investigated in only a limited number of studies. The focus of the present work was to understand the effect of inclusion size and distribution on the corrosion behavior of medical-device-grade Nitinol tubing made from three different ingot sources at different manufacturing stages: (i) the initial stage (hollow: round bar with a centric hole), (ii) after hot drawing, and (iii) after the final drawing step (final tubing dimensions: outer diameter 0.3 mm, wall thickness 0.1 mm). For one ingot source, two different material qualities were investigated. Potentiodynamic polarization tests were performed on electropolished samples from the above-mentioned stages. Results indicate that inclusion size rather than inclusion quantity affects the susceptibility of electropolished Nitinol to pitting corrosion.
3D printed glass: surface finish and bulk properties as a function of the printing process
NASA Astrophysics Data System (ADS)
Klein, Susanne; Avery, Michael P.; Richardson, Robert; Bartlett, Paul; Frei, Regina; Simske, Steven
2015-03-01
It is impossible to print glass directly from a melt, layer by layer. Glass is not only very sensitive to temperature gradients between different layers but also to the cooling process. To achieve a glass state, the melt has to be cooled rapidly to avoid crystallization of the material and then annealed to remove cooling-induced stress. In 3D printing of glass, the objects are shaped at room temperature and then fired. The material properties of the final objects depend crucially on the frit size of the glass powder used during shaping, the chemical formula of the binder, and the firing procedure. For frit sizes below 250 μm, we appear to find a constant pore volume of less than 5%. Decreasing frit size leads to an increase in the number of pores, which in turn increases opacity. The two different binders, 2-hydroxyethyl cellulose and carboxymethylcellulose sodium salt, generate very different porosities. The porosity of samples with 2-hydroxyethyl cellulose is similar to that of frit-only samples, whereas carboxymethylcellulose sodium salt creates a glass foam. The surface finish is determined by the material the glass comes into contact with during firing.
Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O
2011-01-01
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
Reduction of the capillary water absorption of foamed concrete by using the porous aggregate
NASA Astrophysics Data System (ADS)
Namsone, E.; Sahmenko, G.; Namsone, E.; Korjakins, A.
2017-10-01
The article reports on research into reducing the capillary water absorption of foamed concrete (FC) by using porous aggregates such as granules of expanded glass (EG) and cenospheres (CS). The EG granular aggregate is produced from recycled glass and blowing agents melted down at high temperature, giving the EG granules their unique structure in which air is kept sealed inside the pellet. The use of the porous aggregate in the preparation of the FC samples provides an opportunity to improve some physical and mechanical properties of the FC, classifying it as a high-performance product. In this research the FC samples were produced by adding the EG granules and the CS. The capillary water absorption of the hardened samples was verified, and the pore size distribution was determined by microscopy. This is a very important characteristic, specifically in cold-climate territories where the temperature often falls below zero degrees. It is necessary to prevent the formation of micro-sized pores in the final structure of the material, as this reduces its water absorption capacity. In addition, at below-zero temperatures, water inside these micro-sized pores can enlarge them by exerting stress on their walls during freezing. Studying the kinetics of capillary water absorption can be useful for predicting the durability of the FC.
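Capillary absorption kinetics of this kind are commonly summarized by a sorptivity S fitted to i = S·√t, where i is the cumulative absorption per unit inflow area. The following is a minimal sketch of that fit; the data points are illustrative, not measurements from the foamed concrete study.

    # Hedged sketch: least-squares sorptivity fit i = S*sqrt(t) on illustrative data.
    import numpy as np

    t_min = np.array([5, 15, 30, 60, 120, 240, 480], dtype=float)      # minutes
    i_mm  = np.array([0.9, 1.6, 2.3, 3.2, 4.6, 6.4, 9.1], dtype=float)  # mm absorbed

    sqrt_t = np.sqrt(t_min)
    # least-squares slope through the origin: S = sum(i*sqrt(t)) / sum(t)
    S = np.sum(i_mm * sqrt_t) / np.sum(t_min)
    print(f"sorptivity S = {S:.3f} mm/min^0.5")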
NASA Astrophysics Data System (ADS)
Kumar, P.; Sokolik, I. N.; Nenes, A.
2011-08-01
This study reports laboratory measurements of particle size distributions, cloud condensation nuclei (CCN) activity, and droplet activation kinetics of wet generated aerosols from clays, calcite, quartz, and desert soil samples from Northern Africa, East Asia/China, and Northern America. The dependence of critical supersaturation, sc, on particle dry diameter, Ddry, is used to characterize particle-water interactions and assess the ability of Frenkel-Halsey-Hill adsorption activation theory (FHH-AT) and Köhler theory (KT) to describe the CCN activity of the considered samples. Wet generated regional dust samples produce unimodal size distributions with particle sizes as small as 40 nm, CCN activation consistent with KT, and hygroscopicity similar to that of inorganic salts. Wet generated clays and minerals produce a bimodal size distribution; the CCN activity of the smaller mode is consistent with KT, while the larger mode is less hydrophilic, follows activation by FHH-AT, and displays almost identical CCN activity to dry generated dust. Ion chromatography (IC) analysis performed on regional dust samples indicates a soluble fraction that cannot explain the CCN activity of dry or wet generated dust. A mass balance and hygroscopicity closure suggest that the small amount of ions (from low solubility compounds like calcite) present in the dry dust dissolves in the aqueous suspension during the wet generation process and gives rise to the observed small hygroscopic mode. Overall, these results identify an artifact that may call into question the atmospheric relevance of dust CCN activity studies using the wet generation method. Based on the method of threshold droplet growth analysis, wet generated mineral aerosols display activation kinetics similar to those of ammonium sulfate calibration aerosol. Finally, a unified CCN activity framework that accounts for concurrent effects of solute and adsorption is developed to describe the CCN activity of aged or hygroscopic dusts.
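For the Köhler-theory side of the sc(Ddry) dependence discussed above, a minimal sketch of the kappa-Köhler relation is given below, sc ≈ sqrt(4A³/(27·κ·Ddry³)) with A = 4σMw/(RTρw). The hygroscopicity κ used here is an assumed value, not one reported for these dust samples.

    # Hedged sketch: kappa-Koehler critical supersaturation versus dry diameter.
    import numpy as np

    def critical_supersaturation(d_dry_m, kappa, T=298.15):
        sigma, Mw, R, rho_w = 0.072, 0.018, 8.314, 1000.0   # SI units for water
        A = 4.0 * sigma * Mw / (R * T * rho_w)               # Kelvin parameter (m)
        return 100.0 * np.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))  # percent

    for d_nm in (40, 60, 100, 200):
        sc = critical_supersaturation(d_nm * 1e-9, kappa=0.6)  # kappa assumed
        print(f"Ddry = {d_nm} nm -> sc = {sc:.3f} %")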
NASA Astrophysics Data System (ADS)
Hassen, Harzali; Adel, Megriche; Arbi, Mgaidi
2018-03-01
Ultrasound-assisted co-precipitation has been used to prepare nano-sized Ni0.4Cu0.2Zn0.4Fe2O4 ferrite. Continuous (C-US) and pulsed (P-US) ultrasound modes were used at a constant frequency of 20 kHz, a reaction time of 2 h, and pulse durations of 10 s on and 10 s off. All experiments were conducted at two temperatures, 90 and 100°C. Samples were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), N2 adsorption isotherm analysis at 77 K (BET), transmission electron microscopy (TEM) and vibrating sample magnetometry (VSM). A nanocrystalline single phase with particle size in the range 12-18 nm is obtained in both the continuous and the pulsed ultrasound mode. FT-IR measurements show two absorption bands assigned to the tetrahedral and octahedral vibrations (ν1 and ν2) characteristic of cubic spinel ferrite. The specific surface area (SBET) is in the range of 110-140 m2 g-1, with an average pore size between 5.5 and 6.5 nm; the lowest values are obtained in pulsed mode. Finally, this work shows that the magnetic properties are affected by the ultrasound conditions, without the particle shape being affected. The saturation magnetization (Ms) values obtained for all samples are comparable, and in P-US mode the saturation magnetization increases as temperature increases. Moreover, the P-US mode opens a new avenue for the synthesis of NiCuZn ferrites.
NASA Astrophysics Data System (ADS)
Collier, Jordan; Filipovic, Miroslav; Norris, Ray; Chow, Kate; Huynh, Minh; Banfield, Julie; Tothill, Nick; Sirothia, Sandeep Kumar; Shabala, Stanislav
2014-04-01
This proposal is a continuation of an extensive project (the core of Collier's PhD) to explore the earliest stages of AGN formation, using Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources. Both are widely believed to represent the earliest stages of radio-loud AGN evolution, with GPS sources preceding CSS sources. In this project, we plan to (a) test this hypothesis, (b) place GPS and CSS sources into an evolutionary sequence with a number of other young AGN candidates, and (c) search for evidence of the evolving accretion mode. We will do this using high-resolution radio observations, with a number of other multiwavelength age indicators, of a carefully selected complete faint sample of 80 GPS/CSS sources. Analysis of the C2730 ELAIS-S1 data shows that we have so far met our goals, resolving the jets of 10/49 sources, and measuring accurate spectral indices from 0.843-10 GHz. This particular proposal is to almost triple the sample size by observing an additional 80 GPS/CSS sources in the Chandra Deep Field South (arguably the best-studied field) and allow a turnover frequency - linear size relation to be derived at >10-sigma. Sources found to be unresolved in our final sample will subsequently be observed with VLBI. Comparing those sources resolved with ATCA to the more compact sources resolved with VLBI will give a distribution of source sizes, helping to answer the question of whether all GPS/CSS sources grow to larger sizes.
Effect of Ca substitution on some physical properties of nano-structured and bulk Ni-ferrite samples
NASA Astrophysics Data System (ADS)
Assar, S. T.; Abosheiasha, H. F.
2015-01-01
Nanoparticles of Ni1-xCaxFe2O4 (x=0.0, 0.02, 0.04, 0.06 and 0.10) were prepared by the citrate precursor method. Part of these samples was sintered at 600 °C for 2 h in order to keep the particles within the nano-size range, while the other part was sintered at 1000 °C to let the particles grow to bulk size. The effect of Ca2+ ion substitution in nickel ferrite on some structural, magnetic, electrical and thermal properties was investigated. All samples were characterized using X-ray diffraction (XRD), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and vibrating sample magnetometry (VSM). A two-probe method was used to measure the dc electrical conductivity, whereas the photoacoustic (PA) technique was used to determine the thermal diffusivity of the samples. To interpret the different experimental results for the nano and bulk samples, cation distributions were assumed based on the VSM and XRD data. These suggested cation distributions give logical explanations for other experimental results, such as the observed values of the absorption bands in the FTIR spectra and the dc conductivity results. Finally, the thermal measurements showed that increasing the Ca2+ ion content causes a decrease in the thermal diffusivity of both the nano and bulk samples; this behavior is ascribed to phonon-phonon scattering.
Bandyopadhyay, Kaustav; Uluçay, Orhan; Şakiroğlu, Muhammet; Udvardi, Michael K.; Verdier, Jerome
2016-01-01
Legume seeds are important as a protein and oil source for the human diet. Understanding how their final seed size is determined is crucial to improving crop yield. In this study, we analyzed seed development in three accessions of the model legume Medicago truncatula displaying contrasting seed sizes. By comparing two large-seed accessions to the reference accession A17, we described mechanisms associated with the determination of large seed size and potential factors modulating the final seed size. We observed that early events during embryogenesis had a major impact on final seed size and that a delayed heart-stage embryo development resulted in large seeds. We also observed that the difference in seed growth rate was mainly due to a difference in embryo cell number, implicating a role for the cell division rate. Large-seed accessions could be explained by an extended period of cell division due to a longer embryogenesis phase. According to our observations and recent reports, the auxin (IAA) to abscisic acid (ABA) ratio could be a key determinant of the regulation of cell division at the end of embryogenesis. Overall, our study highlights that the timing of events occurring during early seed development plays a decisive role in final seed size determination. PMID:27618017
Metal wastage design guidelines for bubbling fluidized-bed combustors. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyczkowski, R.W.; Podolski, W.F.; Bouillard, J.X.
These metal wastage design guidelines identify relationships between metal wastage and (1) design parameters (such as tube size, tube spacing and pitch, tube bundle and fluidized-bed height to distributor, and heat exchanger tube material properties) and (2) operating parameters (such as fluidizing velocity, particle size, particle hardness, and angularity). The guidelines are of both a quantitative and qualitative nature. Simplified mechanistic models are described, which account for the essential hydrodynamics and metal wastage processes occurring in bubbling fluidized beds. The empirical correlational approach complements the use of these models in the development of these design guidelines. Data used for model and guideline validation are summarized and referenced. Sample calculations and recommended design procedures are included. The influences of dependent variables on metal wastage, such as solids velocity, bubble size, and in-bed pressure fluctuations, are discussed.
Topological Hall and Spin Hall Effects in Disordered Skyrmionic Textures
NASA Astrophysics Data System (ADS)
Ndiaye, Papa Birame; Akosa, Collins; Manchon, Aurelien; Spintronics Theory Group Team
We carry out a thorough study of the topological Hall and topological spin Hall effects in disordered skyrmionic systems: the dimensionless (spin) Hall angles are evaluated across the energy band structure in the multiprobe Landauer-Büttiker formalism, and their link to the effective magnetic field emerging from the real-space topology of the spin texture is highlighted. We discuss these results for an optimal skyrmion size and for various sample sizes, and find that the adiabatic approximation still holds for large skyrmions as well as for nanoskyrmions only a few atoms in size. Finally, we test the robustness of the topological signals against disorder strength and show that the topological Hall effect is highly sensitive to momentum scattering. This work was supported by the King Abdullah University of Science and Technology (KAUST) through Award No. OSR-CRG URF/1/1693-01 from the Office of Sponsored Research (OSR).
Abbarchi, Marco; Naffouti, Meher; Vial, Benjamin; Benkouider, Abdelmalek; Lermusiaux, Laurent; Favre, Luc; Ronda, Antoine; Bidault, Sébastien; Berbezier, Isabelle; Bonod, Nicolas
2014-11-25
Subwavelength-sized dielectric Mie resonators have recently emerged as a promising photonic platform, as they combine the advantages of dielectric microstructures and metallic nanoparticles supporting surface plasmon polaritons. Here, we report the capabilities of a dewetting-based process, independent of the sample size, to fabricate Si-based resonators over large scales starting from commercial silicon-on-insulator (SOI) substrates. Spontaneous dewetting is shown to allow the production of monocrystalline Mie-resonators that feature two resonant modes in the visible spectrum, as observed in confocal scattering spectroscopy. Homogeneous scattering responses and improved spatial ordering of the Si-based resonators are observed when dewetting is assisted by electron beam lithography. Finally, exploiting different thermal agglomeration regimes, we highlight the versatility of this technique, which, when assisted by focused ion beam nanopatterning, produces monocrystalline nanocrystals with ad hoc size, position, and organization in complex multimers.
Response Variability in Commercial MOSFET SEE Qualification
George, J. S.; Clymer, D. A.; Turflinger, T. L.; ...
2016-12-01
Single-event effects (SEE) evaluation of five different part types of next generation, commercial trench MOSFETs indicates large part-to-part variation in determining a safe operating area (SOA) for drain-source voltage (VDS) following a test campaign that exposed >50 samples per part type to heavy ions. These results suggest that a determination of an SOA using small sample sizes may fail to capture the full extent of the part-to-part variability. An example method is discussed for establishing a safe operating area using a one-sided statistical tolerance limit based on the number of test samples. Finally, burn-in is shown to be a critical factor in reducing part-to-part variation in part response. Implications for radiation qualification requirements are also explored.
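A minimal sketch of the kind of one-sided tolerance bound the abstract alludes to is shown below: for n tested devices with sample mean m and standard deviation s of the failing VDS, a lower limit covering proportion p with confidence conf is m − k·s, with k taken from the noncentral t distribution. The VDS values and coverage/confidence levels here are illustrative assumptions, not measured MOSFET data or the authors' exact procedure.

    # Hedged sketch: one-sided normal tolerance factor and lower SOA bound.
    import numpy as np
    from scipy import stats

    def one_sided_tolerance_factor(n, p=0.99, conf=0.90):
        """k such that mean - k*s covers proportion p with the given confidence."""
        z_p = stats.norm.ppf(p)
        return stats.nct.ppf(conf, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

    vds_fail = np.array([82., 78., 85., 80., 76., 88., 79., 81.])  # assumed failing voltages (V)
    k = one_sided_tolerance_factor(len(vds_fail))
    lower_bound = vds_fail.mean() - k * vds_fail.std(ddof=1)
    print(f"SOA lower tolerance bound = {lower_bound:.1f} V")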
Thallium Bromide Deposited Using Spray Coating
NASA Astrophysics Data System (ADS)
Ferreira, E. S.; Mulato, M.
2012-08-01
Spray coating was used to produce thallium bromide samples on glass substrates. The influence of several fabrication parameters on the final structural properties of the samples was investigated. Substrate position, substrate temperature, solution concentration, carrying gas, and solution flow were varied systematically, the physical deposition mechanism involved in each case being discussed. Total deposition time of about 3.5 h can lead to 62-μm-thick films, comprising completely packed micrometer-sized crystalline grains. X-ray diffraction and scanning electron microscopy were used to characterize the samples. On the basis of the experimental data, the optimum fabrication conditions were identified. The technique offers an alternative method for fast, cheap fabrication of large-area devices for the detection of high-energy radiation, i.e., X-rays and γ-rays, in medical imaging.
Preparation and characterization of cellulose-based foams via microwave curing
Demitri, Christian; Giuri, Antonella; Raucci, Maria Grazia; Giugliano, Daniela; Madaghiele, Marta; Sannino, Alessandro; Ambrosio, Luigi
2014-01-01
In this work, a mixture of a sodium salt of carboxymethylcellulose (CMCNa) and polyethylene glycol diacrylate (PEGDA700) was used to prepare a microporous structure by combining two different procedures. First, physical foaming was induced using Pluronic as a blowing agent, followed by chemical stabilization. This second step was carried out by means of azobis(2-methylpropionamidine) dihydrochloride as the thermoinitiator (TI), and the reaction was activated by heating the sample homogeneously using a microwave generator. Finally, the influence of different CMCNa to PEGDA700 ratios on the final properties of the foams was investigated. The viscosity, water absorption capacity, elastic modulus and porous structure were evaluated for each sample. In addition, a preliminary biological characterization was carried out with the aim of proving the biocompatibility of the resulting material. The foam containing 20% PEGDA700 in the mixture demonstrated higher viscosity and stability before thermo-polymerization; increased water absorption capacity, mechanical resistance and a more uniform microporous structure were also obtained for this sample. In particular, the foam with 3% CMCNa shows a hierarchical structure with open pores of different sizes, a morphology that enhances the properties of the foams. The full set of samples demonstrated an excellent biocompatibility profile, with a good cell proliferation rate over more than 7 days. PMID:24501679
Thompson, Jamie N.; Beauchamp, David A.
2014-01-01
We evaluated freshwater growth and survival from juvenile (ages 0–3) to smolt (ages 1–5) and adult stages in wild steelhead Oncorhynchus mykiss sampled in different precipitation zones of the Skagit River basin, Washington. Our objectives were to determine whether significant size-selective mortality (SSM) in steelhead could be detected between early and later freshwater stages and between each of these freshwater stages and returning adults and, if so, how SSM varied between these life stages and mixed and snow precipitation zones. Scale-based size-at-annulus comparisons indicated that steelhead in the snow zone were significantly larger at annulus 1 than those in the mixed rain–snow zone. Size at annuli 2 and 3 did not differ between precipitation zones, and we found no precipitation zone × life stage interaction effect on size at annulus. Significant freshwater and marine SSM was evident between the juvenile and adult samples at annulus 1 and between each life stage at annuli 2 and 3. Rapid growth between the final freshwater annulus and the smolt migration did not improve survival to adulthood; rather, it appears that survival in the marine environment may be driven by an overall higher growth rate set earlier in life, which results in a larger size at smolt migration. Efforts for recovery of threatened Puget Sound steelhead could benefit by considering that SSM between freshwater and marine life stages can be partially attributed to growth attained in freshwater habitats and by identifying those factors that limit growth during early life stages.
Implications of grain size variation in magnetic field alignment of block copolymer blends
Rokhlenko, Yekaterina; Majewski, Pawel W.; Larson, Steven R.; ...
2017-03-28
Recent experiments have highlighted the intrinsic magnetic anisotropy in coil–coil diblock copolymers, specifically in poly(styrene-block-4-vinylpyridine) (PS-b-P4VP), that enables magnetic field alignment at field strengths of a few tesla. We consider here the alignment response of two low molecular weight (MW) lamellae-forming PS-b-P4VP systems. Cooling across the disorder–order transition temperature (Todt) results in strong alignment for the higher MW sample (5.5K), whereas little alignment is discernible for the lower MW system (3.6K). This disparity under otherwise identical conditions of field strength and cooling rate suggests that different average grain sizes are produced during slow cooling of these materials, with larger grains formed in the higher MW material. Blending the block copolymers results in homogeneous samples which display Todt, d-spacings, and grain sizes that are intermediate between the two neat diblocks. Similarly, the alignment quality displays a smooth variation with the concentration of the higher MW diblock in the blends, and the size of grains likewise interpolates between limits set by the neat diblocks, with a factor of 3.5× difference in the grain size observed in high vs low MW neat diblocks. Finally, these results highlight the importance of grain growth kinetics in dictating the field response in block copolymers and suggest an unconventional route for the manipulation of such kinetics.
Cengiz, Ibrahim Fatih; Oliveira, Joaquim Miguel; Reis, Rui L
2017-08-01
Quantitative assessment of the micro-structure of materials is of key importance in many fields, including tissue engineering, biology, and dentistry. Micro-computed tomography (µ-CT) is an intensively used non-destructive technique. However, acquisition parameters such as pixel size and rotation step may have significant effects on the obtained results. In this study, a set of tissue engineering scaffolds, including examples of natural and synthetic polymers and ceramics, was analyzed. We comprehensively compared the quantitative results of µ-CT characterization using 15 acquisition scenarios that differ in the combination of pixel size and rotation step. The results showed that the acquisition parameters can have a statistically significant effect on the quantified mean porosity, mean pore size, and mean wall thickness of the scaffolds. The effects are also of practical importance, since the differences can be as high as 24% in mean porosity on average, and as large as 19.5 h and 166 GB in characterization time and data storage for a sample of relatively small volume. This study showed in a quantitative manner the effects of such a wide range of acquisition scenarios on the final data, as well as on the characterization time and data storage per sample. A clear picture of the effects of pixel size and rotation step on the results is provided, which can be notably useful for refining the practice of µ-CT characterization of scaffolds and economizing the related resources.
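The sensitivity of a quantity such as porosity to voxel (pixel) size can be illustrated numerically: block-averaging a binarized volume by a factor f mimics scanning at f-times larger voxels. The sketch below uses a synthetic porous volume as a stand-in for real scan data; it is an assumption-laden illustration, not the authors' analysis pipeline.

    # Hedged sketch: porosity computed from a synthetic binary volume at coarser voxel sizes.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    noise = rng.random((120, 120, 120))
    vol = uniform_filter(noise, size=5) > 0.5       # True = pore; spatially correlated blobs

    def porosity_at_factor(v, f):
        """Porosity after block-averaging the binary volume into f-times larger voxels."""
        n = (np.array(v.shape) // f) * f
        v = v[:n[0], :n[1], :n[2]].reshape(n[0]//f, f, n[1]//f, f, n[2]//f, f)
        coarse = v.mean(axis=(1, 3, 5)) > 0.5       # re-binarize the coarse voxels
        return coarse.mean()

    for f in (1, 2, 4, 8):
        print(f"voxel size factor {f}: porosity = {porosity_at_factor(vol, f):.3f}")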
Guarnieri, Adriano; Moreno-Montañés, Javier; Sabater, Alfonso L; Gosende-Chico, Inmaculada; Bonet-Farriol, Elvira
2013-11-01
To analyze the changes in incision sizes after implantation of a toric intraocular lens (IOL) using 2 methods. Department of Ophthalmology, Clínica Universidad de Navarra, Pamplona, Spain. Prospective case series. Coaxial phacoemulsification and IOL implantation through a 2.2 mm clear corneal incision using a cartridge injector were performed. Wound-assisted or cartridge-insertion techniques were used to implant the IOLs. The results were analyzed according to IOL spherical and cylindrical powers. Corneal hysteresis (CH) and the corneal resistance factor (CRF) were measured and evaluated based on the changes in incision size. Incision size increased in 30 (41.7%) of 72 eyes in the wound-assisted group and 71 (98.6%) of 72 eyes in the cartridge-insertion group. The mean incision size after IOL implantation was 2.27 mm ± 0.06 (SD) and 2.37 ± 0.05 mm, respectively (P<.01). The final incision size and IOL spherical power in the wound-assisted technique group (P=.02) and the cartridge-insertion technique group (P=.03) were correlated significantly; IOL toricity was not (P=.19 and P=.28, respectively). The CH and CRF values were not correlated with the final incision size. The final incision size and the changes in incision size after IOL implantation were greater with the cartridge-insertion technique than with the wound-assisted technique. The increase was related to IOL spherical power in both groups but not to IOL toricity. Corneal biomechanical properties were not correlated with the final incision size. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Image simulation for electron energy loss spectroscopy
Oxley, Mark P.; Pennycook, Stephen J.
2007-10-22
In this paper, aberration correction of the probe-forming optics of the scanning transmission electron microscope has allowed the probe-forming aperture to be increased in size, resulting in probes of the order of 1 Å in diameter. The next generation of correctors promises even smaller probes. Improved spectrometer optics also offers the possibility of larger electron energy loss spectrometry detectors. The localization of images based on core-loss electron energy loss spectroscopy is examined as a function of both probe-forming aperture and detector size. The effective ionization is nonlocal in nature, and two common local approximations are compared to full nonlocal calculations. Finally, the effect of the channelling of the electron probe within the sample is also discussed.
NASA Astrophysics Data System (ADS)
Cao, Haitao; Moutalbi, Nahed; Harnois, Christelle; Hu, Rui; Li, Jinshan; Zhou, Lian; Noudem, Jacques G.
2010-01-01
Mono-domain YBa2Cu3O7-x (Y123) bulk superconductors have been processed using the seeded infiltration growth (SIG) technique. The combination of melt infiltration of a liquid source (Ba3Cu5O8) into the Y2BaCuO5 (Y211) preform and the nucleation of the Y123 domain from a SmBa2Cu3O7 crystal seed has been investigated. Different configurations of the SIG process were compared in this study. In addition, the effect of the starting Y211 particle size has been studied. The results reveal that the Y211 particle size and the different configurations strongly influence the properties of the final bulk superconductor sample.
Ultrastructurally-smooth thick partitioning and volume stitching for larger-scale connectomics
Hayworth, Kenneth J.; Xu, C. Shan; Lu, Zhiyuan; Knott, Graham W.; Fetter, Richard D.; Tapia, Juan Carlos; Lichtman, Jeff W.; Hess, Harald F.
2015-01-01
FIB-SEM has become an essential tool for studying neural tissue at resolutions below 10×10×10 nm, producing datasets superior for automatic connectome tracing. We present a technical advance, ultrathick sectioning, which reliably subdivides embedded tissue samples into chunks (20 µm thick) optimally sized and mounted for efficient, parallel FIB-SEM imaging. These chunks are imaged separately and then ‘volume stitched’ back together, producing a final 3D dataset suitable for connectome tracing. PMID:25686390
Model of the final borehole geometry for helical laser drilling
NASA Astrophysics Data System (ADS)
Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas
2018-05-01
A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.
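A simplified, closely related relation is often used as a first estimate of the entrance radius in such drilling models: for a Gaussian beam of waist w0 and peak fluence F0, the radius at which the local fluence drops to the threshold Fth is r = w0·sqrt(0.5·ln(F0/Fth)). The sketch below uses this relation with illustrative pulse parameters; it is a hedged approximation, not the paper's fitted constant-absorbed-fluence surface.

    # Hedged sketch: radius where a Gaussian fluence profile falls to a threshold fluence.
    import numpy as np

    def threshold_radius_um(pulse_energy_uJ, w0_um, f_th_J_cm2):
        """Entrance-hole radius estimate for a single Gaussian pulse."""
        area_cm2 = np.pi * (w0_um * 1e-4) ** 2 / 2.0     # effective Gaussian area
        f0 = pulse_energy_uJ * 1e-6 / area_cm2           # peak fluence (J/cm^2)
        if f0 <= f_th_J_cm2:
            return 0.0                                   # below threshold: no ablation
        return w0_um * np.sqrt(0.5 * np.log(f0 / f_th_J_cm2))

    # illustrative values: 50 uJ pulse, 15 um waist, 0.5 J/cm^2 threshold (assumed)
    print(f"{threshold_radius_um(50.0, 15.0, 0.5):.1f} um")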
Adhapure, N.N.; Dhakephalkar, P.K.; Dhakephalkar, A.P.; Tembhurkar, V.R.; Rajgure, A.V.; Deshmukh, A.M.
2014-01-01
Very recently, bioleaching has been used for removing metals from electronic waste. Most of this research has targeted pulverized PCBs, where precipitate formed during bioleaching contaminates the pulverized PCB sample, making the overall metal recovery process more complicated. In addition, such mixing of the pulverized sample with precipitate also creates problems for the final separation of the non-metallic fraction of the PCB sample. In the present investigation we attempted the use of large pieces of printed circuit boards instead of a pulverized sample for removal of metals. The use of large PCB pieces for bioleaching had been restricted by the chemical coating present on PCBs; this problem was solved by chemical treatment of the PCBs prior to bioleaching. In short:•Large pieces of PCB can be used for bioleaching instead of a pulverized PCB sample.•The metallic portion on PCBs can be made accessible to bacteria by prior chemical treatment of the PCBs.•Complete metal removal was obtained on PCB pieces of size 4 cm × 2.5 cm, with the exception of solder traces. The final metal-free PCBs (non-metallic) can be easily recycled, and in this way the overall recycling process (metallic and non-metallic parts) of PCBs becomes simple. PMID:26150951
Macrophage Migration Inhibitory Factor for the Early Prediction of Infarct Size
Chan, William; White, David A.; Wang, Xin‐Yu; Bai, Ru‐Feng; Liu, Yang; Yu, Hai‐Yi; Zhang, You‐Yi; Fan, Fenling; Schneider, Hans G.; Duffy, Stephen J.; Taylor, Andrew J.; Du, Xiao‐Jun; Gao, Wei; Gao, Xiao‐Ming; Dart, Anthony M.
2013-01-01
Background Early diagnosis and knowledge of infarct size are critical for the management of acute myocardial infarction (MI). We evaluated whether an early elevated plasma level of macrophage migration inhibitory factor (MIF) is useful for these purposes in patients with ST-elevation MI (STEMI). Methods and Results We first studied MIF levels in plasma and the myocardium in mice and determined infarct size. MI for 15 or 60 minutes resulted in a 2.5-fold increase in plasma MIF levels over control values, while MIF content in the ischemic myocardium was reduced by 50%; plasma MIF levels correlated with myocardium-at-risk and infarct size at both time points (P<0.01). In patients with STEMI, we obtained admission plasma samples and measured MIF, conventional troponins (TnI, TnT), high-sensitivity TnI (hsTnI), creatine kinase (CK), CK-MB, and myoglobin. Infarct size was assessed by cardiac magnetic resonance (CMR) imaging. Patients with chronic stable angina and healthy volunteers were studied as controls. Of 374 STEMI patients, 68% had elevated admission MIF levels above the highest value in healthy controls (>41.6 ng/mL), a proportion similar to hsTnI (75%) and TnI (50%), but greater than the other biomarkers studied (20% to 31%, all P<0.05 versus MIF). Only admission MIF levels correlated with CMR-derived infarct size, ventricular volumes and ejection fraction (n=42, r=0.46 to 0.77, all P<0.01) at 3 days and 3 months post-MI. Conclusion Plasma MIF levels are elevated in a high proportion of STEMI patients at the first obtainable sample, and these levels are predictive of final infarct size and the extent of cardiac remodeling. PMID:24096574
Gigault, Julien; El Hadri, Hind; Reynaud, Stéphanie; Deniau, Elise; Grassl, Bruno
2017-11-01
In the last 10 years, asymmetrical flow field-flow fractionation (AF4) has been one of the most promising approaches to characterize colloidal particles. Nevertheless, despite its potential, it is still considered a complex technique to set up, and the theory is difficult to apply to the characterization of complex samples containing submicron particles and nanoparticles. In the present work, we developed and propose a simple analytical strategy to rapidly determine the presence of several submicron populations in an unknown sample with one programmed AF4 method. To illustrate this method, we analyzed polystyrene particles and fullerene aggregates with sizes covering the whole colloidal size distribution. A global and fast AF4 method (method O) allowed us to screen for the presence of particles with sizes ranging from 1 to 800 nm. By examination of the fractionating power Fd, as proposed in the literature, convenient fractionation resolution was obtained for sizes ranging from 10 to 400 nm. The global Fd values, as well as the steric inversion diameter, for the whole colloidal size distribution correspond to the values predicted by model studies. On the basis of this method, and without changing the channel components or mobile phase composition, four isocratic subfraction methods were performed to achieve further high-resolution separation as a function of different size classes: 10-100 nm, 100-200 nm, 200-450 nm, and 450-800 nm in diameter. Finally, all the methods developed were applied to the characterization of nanoplastics, which have received great attention in recent years. Graphical Abstract: Characterization of nanoplastics by asymmetrical flow field-flow fractionation within the colloidal size range.
NASA Astrophysics Data System (ADS)
D'Addabbo, M.; Sulpizio, R.; Guidi, M.; Capitani, G.; Mantecca, P.; Zanchetta, G.
2015-12-01
Leaching experiments were carried out on fresh ash samples from Popocatépetl 2012, Etna 2011, and Etna 2012 eruptions, in order to investigate the release of compounds in both double-deionized and lake (Lake Ohrid, FYR of Macedonia) waters. The experiments were carried out using different grain sizes and variable stirring times (from 30 min to 7 days). Results were discussed in the light of changing pH and release of compounds for the different leachates. In particular, Etna samples induced alkalinization, and Popocatépetl samples induced acidification of the corresponding leachates. The release of different elements does not show correlation with the stirring time, with the measured maximum concentrations reached in the first hours of washing. General inverse correlation with grain size was observed only for Na+, K+, Cl-, Ca2+, Mg2+, SO42-, and Mn2+, while the other analysed elements show a complex, scattering relationship with grain size. Geochemical modelling highlights leachates' saturation only for F and Si, with Popocatépetl samples sometimes showing saturation in Fe. The analysed leachates are classified as undrinkable for humans on the basis of European laws, due to excess in F-, Mn2+, Fe, and SO42- (the latter only for Popocatépetl samples). Finally, the Etna 2012 and Popocatépetl leachates were used for toxicity experiments on living biota (Xenopus laevis). They are mildly toxic, and no significant differences exist between the toxic profiles of the two leachates. In particular, no significant embryo mortality was observed; while even at high dilutions, the leachates produced more than 20 % of malformed larvae.
Melo, C H; Sousa, F C; Batista, R I P T; Sanchez, D J D; Souza-Fabjan, J M G; Freitas, V J F; Melo, L M; Teixeira, D I A
2015-07-31
The present study aimed to compare laparoscopic (LP) and ultrasound-guided (US) biopsy methods to obtain either liver or splenic tissue samples for ectopic gene expression analysis in transgenic goats. Tissue samples were collected from human granulocyte colony stimulating factor (hG-CSF)-transgenic bucks and submitted to real-time PCR for the endogenous genes (Sp1, Baff, and Gapdh) and the transgene (hG-CSF). Both LP and US biopsy methods were successful in obtaining liver and splenic samples that could be analyzed by PCR (i.e., sufficient sample sizes and RNA yield were obtained). Although the number of attempts made to obtain the tissue samples was similar (P > 0.05), LP procedures took considerably longer than the US method (P = 0.03). Finally, transgene transcripts were not detected in spleen or liver samples. Thus, for the phenotypic characterization of a transgenic goat line, investigation of ectopic gene expression can be made successfully by LP or US biopsy, avoiding the traditional approach of euthanasia.
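Relative quantification of a transgene against endogenous reference genes, as in the real-time PCR analysis described above, is commonly done with the 2^(−ΔΔCt) calculation. The sketch below illustrates that arithmetic; the Ct values, tissue names, and calibrator are illustrative assumptions, not the goat biopsy measurements.

    # Hedged sketch: 2^(-ddCt) relative expression from illustrative Ct values.
    ct = {
        "liver":  {"hG_CSF": 35.0, "Gapdh": 18.2},   # target vs reference gene (assumed)
        "spleen": {"hG_CSF": 34.1, "Gapdh": 17.9},
    }
    calibrator = {"hG_CSF": 22.5, "Gapdh": 18.0}      # hypothetical calibrator sample

    def relative_expression(sample_ct, calib_ct):
        d_ct_sample = sample_ct["hG_CSF"] - sample_ct["Gapdh"]
        d_ct_calib = calib_ct["hG_CSF"] - calib_ct["Gapdh"]
        return 2.0 ** -(d_ct_sample - d_ct_calib)

    for tissue, vals in ct.items():
        print(tissue, "relative hG-CSF expression:", relative_expression(vals, calibrator))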
Leonard, Russell L.; Gray, Sharon K.; Alvarez, Carlos J.; ...
2015-05-21
In this paper, a fluorochlorozirconate (FCZ) glass-ceramic containing orthorhombic barium chloride crystals doped with divalent europium was evaluated for use as a storage phosphor in gamma-ray imaging. X-ray diffraction and phosphorimetry of the glass-ceramic sample showed the presence of a significant amount of orthorhombic barium chloride crystals in the glass matrix. Transmission electron microscopy and scanning electron microscopy were used to identify crystal size, structure, and morphology. The size of the orthorhombic barium chloride crystals in the FCZ glass matrix was very large, ~0.5–0.7 μm, which can limit image resolution. The FCZ glass-ceramic sample was exposed to 1 MeV gamma rays to determine its photostimulated emission characteristics at high energies, which were found to be suitable for imaging applications. Test images were made at 2 MeV energies using gap and step wedge phantoms. Gaps as small as 101.6 μm in a 440 stainless steel phantom were imaged using the sample imaging plate. Analysis of an image created using a depleted uranium step wedge phantom showed that emission is proportional to incident energy at the sample and the estimated absorbed dose. Finally, the results showed that the sample imaging plate has potential for gamma-ray-computed radiography and dosimetry applications.
Study of the fragment size distribution in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Chu, Genbai; Shui, Min; Xi, Tao; Zhao, Yongqiang; Gu, Yuqiu
2017-06-01
Characterizing the fragment size distribution produced in a dynamic fragmentation process is very important both for fundamental science, such as predicting material dynamic response, and for a variety of engineering applications. However, only limited data on fragment mass or size have been obtained because of the great challenge of measuring them dynamically. This paper focuses on investigating the fragment size distribution from the dynamic fragmentation of laser shock-loaded metal. Material ejected from a tin sample with a wedge-shaped groove in the free surface is collected with a soft recovery technique. Using fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method, the fragments can be well detected. To characterize their size distributions, a random geometric statistics method based on Poisson mixtures was derived for the dynamic heterogeneous fragmentation problem, which leads to a linear combination of exponential distributions. Finally, we examined the size distribution of laser shock-loaded tin with the derived model and provided comparisons with other state-of-the-art models. The resulting comparisons show that our proposed model provides a more reasonable fit for laser shock-loaded metal.
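A distribution of the form the abstract derives, a linear combination (mixture) of exponentials, can be fitted to fragment sizes by maximum likelihood. The sketch below does this for a two-component mixture on synthetic sizes; the data, component scales, and weights are illustrative assumptions, not the recovered tin fragments.

    # Hedged sketch: ML fit of p(s) = w/l1*exp(-s/l1) + (1-w)/l2*exp(-s/l2) to synthetic sizes.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    sizes = np.concatenate([rng.exponential(8.0, 700),    # fine fragments (um), assumed
                            rng.exponential(45.0, 300)])  # coarse fragments (um), assumed

    def neg_log_lik(params):
        w, l1, l2 = params
        pdf = w / l1 * np.exp(-sizes / l1) + (1 - w) / l2 * np.exp(-sizes / l2)
        return -np.sum(np.log(pdf))

    res = minimize(neg_log_lik, x0=(0.5, 5.0, 50.0),
                   bounds=[(0.01, 0.99), (0.1, 200.0), (0.1, 200.0)])
    print("fitted weight and scales:", np.round(res.x, 2))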
Perspective: Size selected clusters for catalysis and electrochemistry
NASA Astrophysics Data System (ADS)
Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; Vajda, Stefan
2018-03-01
Size-selected clusters containing a handful of atoms may possess novel catalytic properties different from those of nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where the addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization, and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions, and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modeling based on density functional theory sampling of local minima and energy barriers or ab initio molecular dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Finally, we discuss approaches which aim at the scale-up of the production of well-defined clusters for use in real-world applications.
BLISTERING AND EXPLOSIVE DESORPTION OF IRRADIATED AMMONIA-WATER MIXTURES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loeffler, M. J.; Baragiola, R. A., E-mail: mark.loeffler@nasa.gov, E-mail: raul@virginia.edu
2012-01-10
We present laboratory studies on the thermal evolution of a solid ammonia-water mixture after it has been irradiated at 20, 70, and 120 K. In samples irradiated at ≤70 K, we observed fast outbursts that appear to indicate grain ejection and correlate well with the formation of micron-sized scattering centers. The occurrence of this phenomenon at the lower irradiation temperatures indicates that our results may be most relevant for understanding the release of gas and grains by comets and the surfaces of some of the colder icy satellites. We observe outgassing at temperatures below those where ice sublimates, which suggests that comets containing radiolyzed material may have outbursts farther from the Sun than those that are passive. In addition, the estimated size of the grains ejected from our sample is on the order of the size of E-ring particles, suggesting that our results give a plausible mechanism for how micron-sized grains could be formed from an icy surface. Finally, we propose that the presence of the ~4.5 μm N2O absorption band on an icy surface in outer space will serve to provide indirect evidence for radiation-processed ices that originally contained ammonia or nitrogen, which could be particularly useful since nitrogen is such a weak absorber in the infrared and ammonia is rapidly decomposed by radiolysis.
[Methodological design of the National Health and Nutrition Survey 2016].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Cuevas-Nasu, Lucía; Gómez-Humarán, Ignacio Méndez; Gaona-Pineda, Elsa Berenice; Gómez-Acosta, Luz María; Rivera-Dommarco, Juan Ángel; Hernández-Ávila, Mauricio
2017-01-01
The objective is to describe the design methodology of the 2016 Halfway National Health and Nutrition Survey (Ensanut-MC). The Ensanut-MC is a national probabilistic survey whose target population is the inhabitants of private households in Mexico. The sample size was determined to allow inferences for urban and rural areas in four regions. We describe the main design elements: target population, topics of study, sampling procedure, measurement procedure, and logistics organization. The final sample comprised 9 479 completed household interviews and 16 591 individual interviews. The response rate for households was 77.9%, and the response rate for individuals was 91.9%. The probabilistic design of the Ensanut-MC allows valid statistical inferences about parameters of interest for public health and nutrition in Mexico, specifically overweight, obesity, and diabetes mellitus. The updated information also supports the monitoring, updating, and formulation of new policies and priority programs.
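The sample-size arithmetic behind a survey of this kind typically combines a simple-random-sampling formula for a proportion with a design effect and an expected response rate. The sketch below shows that standard calculation; the prevalence, margin, design effect, and response rate are assumed values, not the Ensanut-MC 2016 parameters.

    # Hedged sketch: n = deff * z^2 * p*(1-p) / e^2, inflated for non-response.
    from scipy.stats import norm

    def survey_sample_size(p=0.30, margin=0.05, conf=0.95, deff=1.7, response=0.80):
        z = norm.ppf(1 - (1 - conf) / 2)
        n_srs = z**2 * p * (1 - p) / margin**2       # simple random sampling size
        return int(round(deff * n_srs / response))   # design effect + non-response

    print(survey_sample_size())   # households per analysis domain under these assumptions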
Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.
2017-11-01
The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.
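Laboratory NMR and permeability pairs like those collected here are often used to calibrate an SDR-style relation of the form k = C·φ^m·T2ml^n. The sketch below fits such a relation by linear regression in log space; the sample values, coefficient, and exponents are illustrative assumptions, not results from the Danish site.

    # Hedged sketch: calibrating an SDR-style NMR permeability model on illustrative data.
    import numpy as np

    phi   = np.array([0.32, 0.28, 0.35, 0.30, 0.25])             # NMR porosity (-), assumed
    t2ml  = np.array([0.18, 0.09, 0.25, 0.15, 0.06])             # mean-log T2 (s), assumed
    k_lab = np.array([9e-11, 1.5e-11, 2.5e-10, 5e-11, 4e-12])    # permeability (m^2), assumed

    # linear regression in log space: log k = log C + m*log(phi) + n*log(T2ml)
    A = np.column_stack([np.ones_like(phi), np.log(phi), np.log(t2ml)])
    coef, *_ = np.linalg.lstsq(A, np.log(k_lab), rcond=None)
    logC, m, n = coef
    print(f"C = {np.exp(logC):.3e}, phi exponent = {m:.2f}, T2 exponent = {n:.2f}")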
Kumar, S; Panwar, J; Vyas, A; Sharma, J; Goutham, B; Duraiswamy, P; Kulkarni, S
2011-02-01
The aim of the study was to determine whether the frequency of tooth cleaning varies with social group, family size, bedtime and other personal hygiene habits among school children. The target population comprised schoolchildren aged 8-16 years attending public schools in Udaipur district. A two-stage cluster random sampling procedure was executed to collect a representative sample, giving a final sample size of 852 children. Data were collected by means of structured questionnaires consisting of questions related to oral hygiene habits, a few general hygiene habits, bedtime, family size, family income and dental visiting habits. The results show that 30.5% of the total sample cleaned their teeth twice or more daily and that there was no significant difference between the genders in tooth cleaning frequency. Logistic regression analysis revealed that older children and those having fewer than two siblings were more likely to clean their teeth twice a day than younger children and those with more than two siblings. Furthermore, the frequency of tooth cleaning was significantly lower among children of parents with a low level of education and lower annual income than among children of highly educated parents with higher annual income. In addition, tooth cleaning habits were more regular in children using toothpaste and visiting the dentist regularly. This study observed that tooth cleaning is not an isolated behaviour but part of a multifarious pattern of various social and behavioural factors. © 2009 The Authors. Journal compilation © 2009 Blackwell Munksgaard.
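A logistic regression of the kind reported above can be sketched as follows. The data here are simulated under an assumed model, not the Udaipur survey records, and the covariates are reduced to age and a more-than-two-siblings indicator for brevity.

    # Hedged sketch: logistic regression of twice-daily tooth cleaning on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 852
    age = rng.integers(8, 17, n)
    siblings = rng.integers(0, 5, n)
    many_sibs = (siblings > 2).astype(int)
    logit = -3.0 + 0.2 * age - 0.4 * many_sibs            # assumed true model
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([age, many_sibs]))
    fit = sm.Logit(y, X).fit(disp=0)
    print("odds ratios (age, >2 siblings):", np.exp(fit.params[1:]))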
A Mars Sample Return Sample Handling System
NASA Technical Reports Server (NTRS)
Wilson, David; Stroker, Carol
2013-01-01
We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable for the NASA MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each holding a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping the samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted, heat-activated memory alloy caps [5] if the heating does not affect the sample, or by crimping caps similar to bottle capping. We prefer that the cap sealing surfaces be external to the cup rim to prevent sample dust inside the cups from interfering with sealing, or contamination of the sample by Teflon seal elements (if adopted). Finally, the sample collection rover, or a Fetch rover, selects cups with the best-choice samples and loads them into a sample tray before delivering it to the Earth Return Vehicle (ERV) in the MSR Dragon capsule as described in [1] (Fig 1). This ensures best use of the MSR payload mass allowance. A 3 meter long jointed robot arm is extended from the Dragon capsule's crew hatch, retrieves the sample tray and inserts it into the sample canister payload located on the ERV stage. The robot arm has the capacity to obtain grab samples in the event of a rover failure. The sample canister has a robot arm capture casting to enable capture by crewed or robot spacecraft when it returns to Earth orbit.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Stappen, Jeroen Van; Kock, Tim De; Boever, Wesley De; Boone, Marijn A.; Hoorebeke, Luc Van; Cnudde, Veerle
2016-11-01
The relative permeability behavior of rocks with wide ranges of pore sizes is in many cases still poorly understood and is difficult to model at the pore scale. In this work, we investigate the capillary pressure and relative permeability behavior of three outcrop carbonates and two tight reservoir sandstones with wide, multimodal pore size distributions. To examine how the drainage and imbibition properties of these complex rock types are influenced by the connectivity of macropores to each other and to zones with unresolved small-scale porosity, we apply a previously presented microcomputed-tomography-based multiscale pore network model to these samples. The sensitivity to the properties of the small-scale porosity is studied by performing simulations with different artificial sphere-packing-based networks as a proxy for these pores. Finally, the mixed-wet water-flooding behavior of the samples is investigated, assuming different wettability distributions for the microporosity and macroporosity. While this work is not an attempt to perform predictive modeling, it seeks to qualitatively explain the behavior of the investigated samples and illustrates some of the most recent developments in multiscale pore network modeling.
Limits to Forecasting Precision for Outbreaks of Directly Transmitted Diseases
Drake, John M
2006-01-01
Background Early warning systems for outbreaks of infectious diseases are an important application of the ecological theory of epidemics. A key variable predicted by early warning systems is the final outbreak size. However, for directly transmitted diseases, the stochastic contact process by which outbreaks develop entails fundamental limits to the precision with which the final size can be predicted. Methods and Findings I studied how the expected final outbreak size and the coefficient of variation in the final size of outbreaks scale with control effectiveness and the rate of infectious contacts in the simple stochastic epidemic. As examples, I parameterized this model with data on observed ranges for the basic reproductive ratio (R0) of nine directly transmitted diseases. I also present results from a new model, the simple stochastic epidemic with delayed-onset intervention, in which an initially supercritical outbreak (R0 > 1) is brought under control after a delay. Conclusion The coefficient of variation of final outbreak size in the subcritical case (R0 < 1) will be greater than one for any outbreak in which the removal rate is less than approximately 2.41 times the rate of infectious contacts, implying that for many transmissible diseases precise forecasts of the final outbreak size will be unattainable. In the delayed-onset model, the coefficient of variation (CV) was generally large (CV > 1) and increased with the delay between the start of the epidemic and intervention, and with the average outbreak size. These results suggest that early warning systems for infectious diseases should not focus exclusively on predicting outbreak size but should consider other characteristics of outbreaks such as the timing of disease emergence. PMID:16435887
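The variability the abstract describes can be illustrated by Monte Carlo simulation of a subcritical stochastic epidemic, sketched here as a linear birth-death process with transmission rate beta and removal rate gamma (R0 = beta/gamma < 1). The rates and number of realizations are illustrative assumptions, not the parameterizations used in the paper.

    # Hedged sketch: mean and CV of the final outbreak size in a subcritical birth-death epidemic.
    import numpy as np

    rng = np.random.default_rng(3)

    def final_size(beta, gamma, i0=1):
        """Total number ever infected in one realization of the birth-death epidemic."""
        infected, total = i0, i0
        while infected > 0:
            # the next event is a transmission with probability beta/(beta+gamma)
            if rng.random() < beta / (beta + gamma):
                infected += 1
                total += 1
            else:
                infected -= 1
        return total

    beta, gamma = 0.8, 1.0                      # R0 = 0.8 (subcritical), assumed
    sizes = np.array([final_size(beta, gamma) for _ in range(20000)])
    print(f"mean final size = {sizes.mean():.2f}, CV = {sizes.std(ddof=1) / sizes.mean():.2f}")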
Influence of warm rolling temperature on ferrite recrystallization in low C and IF steels
NASA Astrophysics Data System (ADS)
Barnett, Matthew Robert
Experiments involving single pass laboratory rolling and isothermal salt bath annealing were carried out; three steels were studied: a titanium stabilized interstitial free grade and two low carbon grades, one of which contained a particularly low level of manganese (~0.009 wt.%). The two low carbon grades were produced such that any complication from AlN precipitation was avoided. X-ray, neutron diffraction, optical metallography and mechanical testing measurements were carried out on the samples before and after annealing. The main aim of this work was to further the understanding of the metallurgy of recrystallization after ferrite rolling at temperatures between room temperature and 700 °C. Deformation textures, recrystallization kinetics, final grain sizes and recrystallization textures were quantified for all the samples and experimental conditions. A major conclusion based on these data is that the influence of rolling temperature is far greater in the low carbon samples than in the IF grade. Indeed, the IF results alter only marginally with increasing temperature. In the low carbon grades, however, the rolling texture sharpens, recrystallization slows, the final grain size coarsens, and the recrystallization texture changes when the rolling temperature is increased. This distinct difference between the two steel types is explained in terms of their contrasting deformation behaviors. Solute carbon and nitrogen in the low carbon grades interact with dislocations causing high stored energy levels after low temperature rolling (due to dynamic strain aging) and high strain rate sensitivities during high temperature rolling (due to the solute drag of dislocations in the transition region between DSA and DRC). Nucleation during subsequent recrystallization is strongly influenced by both the stored energy and the strain rate sensitivity. The latter affects the occurrence of the flow localisations that enhance nucleation.
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate water retention parameters of a sample which is initially saturated and subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample, at assumed hydrostatic conditions, at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample estimated at the first and final boundary heads in the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
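As a rough illustration of the retention-parameter estimation step, the sketch below fits the widely used van Genuchten function to hypothetical head/water-content pairs with ordinary least squares; this is a stand-in for (not a reproduction of) the Bayesian multi-objective scheme described above, and all data values are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention curve theta(h) for capillary head h (cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# hypothetical water contents recovered along the probe at hydrostatic heads
heads = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])     # cm
thetas = np.array([0.42, 0.41, 0.38, 0.22, 0.12, 0.08, 0.06])

p0 = (0.05, 0.43, 0.03, 3.0)    # initial guess: theta_r, theta_s, alpha, n
params, _ = curve_fit(van_genuchten, heads, thetas, p0=p0)
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], np.round(params, 3))))
```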
NASA GRC and MSFC Space-Plasma Arc Testing Procedures
NASA Technical Reports Server (NTRS)
Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd
2007-01-01
Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has developed its own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing-testing standardization. Discussed herein are neutral gas conditions, plasma densities and uniformity, vacuum chamber sizes, sample sizes and Debye lengths, biasing samples versus self-generated voltages, floating samples versus grounded samples, test electrical conditions, arc detection, preventing sustained discharges during testing, real samples versus idealized samples, validity of LEO tests for GEO samples, extracting arc threshold information from arc rate versus voltage tests, snapover, current collection, and glows at positive sample bias, Kapton pyrolysis, thresholds for trigger arcs, sustained arcs, dielectric breakdown and Paschen discharge, tether arcing and testing in very dense plasmas (i.e. thruster plumes), arc mitigation strategies, charging mitigation strategies, models, and analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.
Impact of asymmetrical flow field-flow fractionation on protein aggregates stability.
Bria, Carmen R M; Williams, S Kim Ratanathanawongs
2016-09-23
The impact of asymmetrical flow field-flow fractionation (AF4) on protein aggregate species is investigated with the aid of multiangle light scattering (MALS) and dynamic light scattering (DLS). The experimental parameters probed in this study include aggregate stability in different carrier liquids, shear stress (related to sample injection), sample concentration (during AF4 focusing), and sample dilution (during separation). Two anti-streptavidin (anti-SA) IgG1 samples composed of low and high molar mass (M) aggregates are subjected to different AF4 conditions. Aggregates suspended and separated in phosphate buffer are observed to dissociate almost entirely to monomer. However, aggregates in citric acid buffer are partially stable, with dissociation to 25% and 5% monomer for the low and high M samples, respectively. These results demonstrate that different carrier liquids change the aggregate stability and that low M aggregates can behave differently than their larger counterparts. Increasing the duration of the AF4 focusing step showed no significant changes in the percent monomer, percent aggregates, or the average Ms in either sample. Syringe-induced shear related to sample injection resulted in an increase in hydrodynamic diameter (dh) as measured by batch-mode DLS. Finally, calculations showed that dilution during AF4 separation is significantly lower than in size exclusion chromatography, with dilution occurring mainly at the AF4 channel outlet and not during the separation. This has important ramifications when analyzing aggregates that rapidly dissociate (<~2 s) upon dilution, as the size calculated by AF4 theory may be more accurate than that measured by online DLS. Experimentally, the dh values determined by online DLS generally agreed with AF4 theory, except for the better-retained larger aggregates, for which DLS showed smaller sizes. These results highlight the importance of using AF4 retention theory to understand the impacts of dilution on analytes. Copyright © 2016 Elsevier B.V. All rights reserved.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
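The contrast between rescaling a fit statistic and actually drawing a random subsample can be sketched as follows. The example uses a simple 2x2 association rather than a measurement model, and a proportional adjustment chi2 * (n_target / n_full), which is only one possible adjustment function and not necessarily the one used in the study; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical full sample of 21,000 with two weakly related binary items
n_full = 21_000
x = rng.integers(0, 2, n_full)
y = (rng.random(n_full) < 0.5 + 0.02 * x).astype(int)

def chi2_of(a, b):
    """Pearson chi-square for the 2x2 table of two binary variables."""
    table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)]
                      for i in (0, 1)], float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

chi2_full = chi2_of(x, y)
for n_target in (10_000, 5_000, 1_000):
    rescaled = chi2_full * n_target / n_full            # simple proportional adjustment
    idx = rng.choice(n_full, n_target, replace=False)   # actual random subsample
    resampled = chi2_of(x[idx], y[idx])
    print(n_target, round(rescaled, 2), round(resampled, 2))
```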
2011-01-01
This work describes the methodology used to assess a strategy for implementing clinical practice guidelines (CPG) for cardiovascular risk control in a health area of Madrid. Background The results on clinical practice of introducing CPGs have been little studied in Spain. The strategy used to implement a CPG is known to influence its final use. Strategies based on the involvement of opinion leaders and that are easily executed appear to be among the most successful. Aim The main aim of the present work was to compare the effectiveness of two strategies for implementing a CPG designed to reduce cardiovascular risk in the primary healthcare setting, measured in terms of improvements in the recording of calculated cardiovascular risk or specific risk factors in patients' medical records, the control of cardiovascular risk factors, and the incidence of cardiovascular events. Methods This study involved a controlled, blinded community intervention in which the 21 health centres of the Number 2 Health Area of Madrid were randomly assigned by clusters to be involved in either a proposed CPG implementation strategy to reduce cardiovascular risk, or the normal dissemination strategy. The study subjects were patients ≥ 45 years of age whose health cards showed them to belong to the studied health area. The main variable examined was the proportion of patients whose medical histories included the calculation of their cardiovascular risk or that explicitly mentioned the presence of variables necessary for its calculation. The sample size was calculated for a comparison of proportions with alpha = 0.05 and beta = 0.20, and assuming that the intervention would lead to a 15% increase in the measured variables. Corrections were made for the design effect, assigning a sample size to each cluster proportional to the size of the population served by the corresponding health centre, and assuming losses of 20%. This demanded a final sample size of 620 patients. Data were analysed using summary measures for each cluster, both in making estimates and for hypothesis testing. Analysis of the variables was made on an intention-to-treat basis. Trial Registration ClinicalTrials.gov: NCT01270022 PMID:21504570
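A minimal sketch of the kind of calculation described (a normal-approximation sample size for comparing two proportions, inflated for a cluster design effect and 20% attrition) is given below; the baseline proportion and the design effect are hypothetical, not taken from the trial, and the assumed 15% increase is interpreted here as an absolute 15-point difference.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, beta=0.20):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
    p_bar = (p1 + p2) / 2
    return ceil((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p1 - p2) ** 2)

p_control = 0.10               # hypothetical baseline recording proportion
p_intervention = p_control + 0.15
n = n_per_arm(p_control, p_intervention)
deff, attrition = 1.5, 0.20    # hypothetical design effect for clustering; 20% losses
print(ceil(n * deff / (1 - attrition)), "patients per arm after inflation")
```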
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Raad, Markus; de Rond, Tristan; Rübel, Oliver
Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. In this paper, we present the OpenMSI Arrayed Analysis Toolkit (OMAAT), a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based Python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size-dependent activities. These results establish OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.
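The arrayed-sample idea can be illustrated generically as summing ion intensity around the expected grid positions of the spotted samples. The sketch below is not OMAAT's API; the ion image, pixel size, grid origin, and spot radius are all hypothetical stand-ins.

```python
import numpy as np

# hypothetical MSI ion-intensity image (pixels) and acquisition geometry
rng = np.random.default_rng(2)
img = rng.random((400, 500))         # stand-in for a single-ion intensity map
pixel_um = 50.0                      # hypothetical pixel size in micrometres
pitch_px = 450.0 / pixel_um          # 450 um spot spacing from the array layout
origin = (20.0, 20.0)                # hypothetical grid origin in pixel coordinates
rows, cols, radius_px = 16, 24, 3    # 16 x 24 = 384 arrayed samples

yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
spot_sums = np.empty((rows, cols))
for r in range(rows):
    for c in range(cols):
        cy, cx = origin[0] + r * pitch_px, origin[1] + c * pitch_px
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2
        spot_sums[r, c] = img[mask].sum()   # per-spot intensity for each sample

print(spot_sums.shape, spot_sums.max())
```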
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qualheim, B.J.
This report presents the results of the geochemical reconnaissance sampling in the Kingman 1° x 2° quadrangle of the National Topographic Map Series (NTMS). Wet and dry sediment samples were collected throughout the 18,770-km² arid to semiarid area, and water samples at available streams, springs, and wells. Neutron activation analysis of uranium and trace elements and other measurements made in the field and laboratory are presented in tabular hardcopy and microfiche format. The report includes five full-size overlays for use with the Kingman NTMS 1:250,000 quadrangle. Water sampling sites, water-sample uranium concentrations, water-sample conductivity, sediment sampling sites, and sediment-sample total uranium and thorium concentrations are shown on the separate overlays. General geological and structural descriptions of the area are included and known uranium occurrences on this quadrangle are delineated. Results of the reconnaissance are briefly discussed and related to rock types in the final section of the report. The results are suggestive of uranium mineralization in only two areas: the Cerbat Mountains and near some of the western intrusives.
Edris, Amr E
2012-09-01
The objective of the present investigation is to formulate commercial soybean lecithin as nanoparticles in a solvent-free aqueous system for potential supplementary applications. A mechanical method, which involved two major steps, was used for that purpose. First, lecithin submicron particles (~0.5 μm) were prepared by gradual hydration of lecithin powder using mechanical agitation. Then, the size of these particles was further reduced to < 100 nm by using high-pressure microfluidization. The physical stability (appearance, particle size distribution, ζ-potential) and the chemical stability (lipid oxidation) of the dispersions carrying lecithin nanoparticles were assessed every 15 days during the 3-month shelf life period at two different temperatures. Results showed that the final particle size of lecithin in the freshly prepared aqueous dispersion was 79.8 ± 1.0 nm and the amount of peroxide detected was 3.5 ± 0.2 meq/kg lipid. At the end of the storage period, dispersions stored at 4°C exhibited physical and chemical stability as evident from the translucent appearance, the small change in particle size (84.1 ± 1.3 nm), and the small amount of generated peroxides (4.1 ± 0.2 meq/kg lipid). On the other hand, dispersions stored at 25°C were physically stable up to 60 days. Beyond that period, samples became turbid and the particle size increased to 145.0 ± 1.7 nm with a bimodal distribution pattern. This behavior was due to phospholipid (PL) degradation and hydrolysis under acidic conditions, which proceeds faster at a relatively high temperature (25°C) than at 4°C. The outcome of this investigation may help in developing water-based dispersions carrying lecithin nanoparticles for dietary supplementation of PLs.
CALIFA: a diameter-selected sample for an integral field spectroscopy galaxy survey
NASA Astrophysics Data System (ADS)
Walcher, C. J.; Wisotzki, L.; Bekeraité, S.; Husemann, B.; Iglesias-Páramo, J.; Backsmann, N.; Barrera Ballesteros, J.; Catalán-Torrecilla, C.; Cortijo, C.; del Olmo, A.; Garcia Lorenzo, B.; Falcón-Barroso, J.; Jilkova, L.; Kalinova, V.; Mast, D.; Marino, R. A.; Méndez-Abreu, J.; Pasquali, A.; Sánchez, S. F.; Trager, S.; Zibetti, S.; Aguerri, J. A. L.; Alves, J.; Bland-Hawthorn, J.; Boselli, A.; Castillo Morales, A.; Cid Fernandes, R.; Flores, H.; Galbany, L.; Gallazzi, A.; García-Benito, R.; Gil de Paz, A.; González-Delgado, R. M.; Jahnke, K.; Jungwiert, B.; Kehrig, C.; Lyubenova, M.; Márquez Perez, I.; Masegosa, J.; Monreal Ibero, A.; Pérez, E.; Quirrenbach, A.; Rosales-Ortega, F. F.; Roth, M. M.; Sanchez-Blazquez, P.; Spekkens, K.; Tundo, E.; van de Ven, G.; Verheijen, M. A. W.; Vilchez, J. V.; Ziegler, B.
2014-09-01
We describe and discuss the selection procedure and statistical properties of the galaxy sample used by the Calar Alto Legacy Integral Field Area (CALIFA) survey, a public legacy survey of 600 galaxies using integral field spectroscopy. The CALIFA "mother sample" was selected from the Sloan Digital Sky Survey (SDSS) DR7 photometric catalogue to include all galaxies with an r-band isophotal major axis between 45'' and 79.2'' and with a redshift 0.005 < z < 0.03. The mother sample contains 939 objects, 600 of which will be observed in the course of the CALIFA survey. The selection of targets for observations is based solely on visibility and thus keeps the statistical properties of the mother sample. By comparison with a large set of SDSS galaxies, we find that the CALIFA sample is representative of galaxies over a luminosity range of -19 > Mr > -23.1 and over a stellar mass range between 10^9.7 and 10^11.4 M⊙. In particular, within these ranges, the diameter selection does not lead to any significant bias against - or in favour of - intrinsically large or small galaxies. Only below luminosities of Mr = -19 (or stellar masses < 10^9.7 M⊙) is there a prevalence of galaxies with larger isophotal sizes, especially of nearly edge-on late-type galaxies, but such galaxies form < 10% of the full sample. We estimate volume-corrected distribution functions in luminosities and sizes and show that these are statistically fully compatible with estimates from the full SDSS when accounting for large-scale structure. For full characterization of the sample, we also present a number of value-added quantities determined for the galaxies in the CALIFA sample. These include consistent multi-band photometry based on growth curve analyses; stellar masses; distances and quantities derived from these; morphological classifications; and an overview of available multi-wavelength photometric measurements. We also explore different ways of characterizing the environments of CALIFA galaxies, finding that the sample covers environmental conditions from the field to genuine clusters. We finally consider the expected incidence of active galactic nuclei among CALIFA galaxies given the existing pre-CALIFA data, finding that the final observed CALIFA sample will contain approximately 30 Sey2 galaxies. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max Planck Institute for Astronomy and the Instituto de Astrofísica de Andalucía (CSIC). Publicly released data products from CALIFA are made available on the webpage http://www.caha.es/CALIFA
Comparative analysis of the antioxidant properties of Icelandic and Hawaiian lichens.
Hagiwara, Kehau; Wright, Patrick R; Tabandera, Nicole K; Kelman, Dovi; Backofen, Rolf; Ómarsdóttir, Sesselja; Wright, Anthony D
2016-09-01
Antioxidant activity of symbiotic organisms known as lichens is an intriguing field of research because of its strong contribution to their ability to withstand extremes of physical and biological stress (e.g. desiccation, temperature, UV radiation and microbial infection). We present a comparative study on the antioxidant activities of 76 Icelandic and 41 Hawaiian lichen samples assessed employing the DPPH- and FRAP-based antioxidant assays. Utilizing this unprecedented sample size, we show that while highest individual sample activity is present in the Icelandic dataset, the overall antioxidant activity is higher for lichens found in Hawaii. Furthermore, we report that lichens from the genus Peltigera that have been described as strong antioxidant producers in studies on Chinese, Russian and Turkish lichens also show high antioxidant activities in both Icelandic and Hawaiian lichen samples. Finally, we show that opportunistic sampling of lichens in both Iceland and Hawaii will yield high numbers of lichen species that exclusively include green algae as photobiont. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2012-03-13
Pseudorapidity gap distributions in proton-proton collisions at √s = 7 TeV are studied using a minimum bias data sample with an integrated luminosity of 7.1 μb^-1. Cross sections are measured differentially in terms of Δη^F, the larger of the pseudorapidity regions extending to the limits of the ATLAS sensitivity, at η = ±4.9, in which no final state particles are produced above a transverse momentum threshold p_T^cut. The measurements span the region 0 < Δη^F < 8 for 200 MeV < p_T^cut < 800 MeV. For small Δη^F, the data test the reliability of hadronisation models in describing rapidity and transverse momentum fluctuations in final state particle production. The measurements at larger gap sizes are dominated by contributions from the single diffractive dissociation process (pp→Xp), enhanced by double dissociation (pp→XY) where the invariant mass of the lighter of the two dissociation systems satisfies M_Y ≲ 7 GeV. The resulting cross section is dσ/dΔη^F ≈ 1 mb for Δη^F ≳ 3. The large rapidity gap data are used to constrain the value of the Pomeron intercept appropriate to triple Regge models of soft diffraction. Finally, the cross section integrated over all gap sizes is compared with other LHC inelastic cross section measurements.
NASA Astrophysics Data System (ADS)
Ge, Yunfei; Zhang, Yuan; Weaver, Jonathan M. R.; Dobson, Phillip S.
2017-12-01
Scanning thermal microscopy (SThM) is a technique which is often used for the measurement of the thermal conductivity of materials at the nanometre scale. The impact of nano-scale feature size and shape on apparent thermal conductivity, as measured using SThM, has been investigated. To achieve this, our recently developed topography-free samples with 200 and 400 nm wide gold wires (50 nm thick) of length of 400-2500 nm were fabricated and their thermal resistance measured and analysed. This data was used in the development and validation of a rigorous but simple heat transfer model that describes a nanoscopic contact to an object with finite shape and size. This model, in combination with a recently proposed thermal resistance network, was then used to calculate the SThM probe signal obtained by measuring these features. These calculated values closely matched the experimental results obtained from the topography-free sample. By using the model to analyse the dimensional dependence of thermal resistance, we demonstrate that feature size and shape has a significant impact on measured thermal properties that can result in a misinterpretation of material thermal conductivity. In the case of a gold nanowire embedded within a silicon nitride matrix it is found that the apparent thermal conductivity of the wire appears to be depressed by a factor of twenty from the true value. These results clearly demonstrate the importance of knowing both probe-sample thermal interactions and feature dimensions as well as shape when using SThM to quantify material thermal properties. Finally, the new model is used to identify the heat flux sensitivity, as well as the effective contact size of the conventional SThM system used in this study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harb, J.N.
This report describes work performed in the fifteenth quarter of a fundamental study to examine the effect of staged combustion on ash formation and deposition. Efforts this quarter included addition of a new cyclone for improved particle sampling and modification of the existing sampling probe. Particulate samples were collected under a variety of experimental conditions for both coals under investigation. Deposits formed from the Black Thunder coal were also collected. Particle size and composition from the Pittsburgh No. 8 ash samples support previously reported results. In addition, the authors' ability to distinguish char/ash associations has been refined and applied to a variety of ash samples from this coal. The results show a clear difference between the behavior of included and excluded pyrite, and provide insight into the extent of pyrite oxidation. Ash samples from the Black Thunder coal have also been collected and analyzed. Results indicate a significant difference in the particle size of "unclassifiable" particles for ash formed during staged combustion. A difference in composition also appears to be present and is currently under investigation. Finally, deposits were collected under staged conditions for the Black Thunder coal. Specifically, two deposits were formed under similar conditions and allowed to mature under either reducing or oxidizing conditions in natural gas. Differences between the samples due to curing were noted. In addition, both deposits showed skeletal ash structures which resulted from in-situ burnout of the char after deposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabooni, S., E-mail: s.sabooni@ma.iut.ac.ir; Karimzadeh, F.; Enayati, M.H.
In the present study, an ultrafine grained (UFG) AISI 304L stainless steel with the average grain size of 650 nm was successfully welded by both gas tungsten arc welding (GTAW) and friction stir welding (FSW). GTAW was applied without any filler metal. FSW was also performed at a constant rotational speed of 630 rpm and different welding speeds from 20 to 80 mm/min. Microstructural characterization was carried out by High Resolution Scanning Electron Microscopy (HRSEM) with Electron Backscattered Diffraction (EBSD) and Transmission Electron Microscopy (TEM). Nanoindentation, microhardness measurements and tensile tests were also performed to study the mechanical properties of the base metal and weldments. The results showed that the solidification mode in the GTAW welded sample is FA (ferrite-austenite) type with the microstructure consisting of an austenite matrix embedded with lath type and skeletal type ferrite. The nugget zone microstructure in the FSW welded samples consisted of equiaxed dynamically recrystallized austenite grains with some amount of elongated delta ferrite. Sigma phase precipitates were formed in the region ahead of the rotating tool during the heating cycle of FSW, which were finally fragmented into nanometric particles and distributed in the weld nugget. Also there is a high possibility that the existing delta ferrite in the microstructure rapidly transforms into sigma phase particles during the short thermal cycle of FSW. These suggest that high strain and deformation during FSW can promote sigma phase formation. The final austenite grain size in the nugget zone was found to decrease with increasing Zener–Hollomon parameter, which was obtained quantitatively by measuring the peak temperature, calculating the strain rate during FSW and exact examination of hot deformation activation energy by considering the actual grain size before the occurrence of dynamic recrystallization. Mechanical properties observations showed that the welding efficiency of the FSW welded sample is around 70%, which is more than 20% higher than the GTAW welded sample. - Highlights: • Microstructure and mechanical properties of UFG 304L stainless steel were studied during GTAW and FSW. • Sigma phase formation mechanism was studied during FSW of 304L stainless steel. • THERMOCALC analysis was performed to obtain possible formation temperatures for sigma phase. • Nano-mechanical twins were found in the TMAZ region.
Chefs' opinions of restaurant portion sizes.
Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J
2007-08-01
The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S. government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
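One way to see how a fixed-size binomial plan classifies infestation is sketched below, using a negative-binomial mean-proportion relation in place of the models fitted in the study; the aggregation parameter, the tally threshold choice, and the field counts are all hypothetical.

```python
from scipy.stats import nbinom

def prop_infested(mean_density, k, tally=1):
    """P(a leaf carries >= tally mites) under a negative binomial with aggregation k.
    SciPy's nbinom is parameterized as (n=k, p=k/(k+mean))."""
    return nbinom.sf(tally - 1, k, k / (k + mean_density))

k = 0.8                                 # hypothetical aggregation parameter for O. perseae
p_crit = prop_infested(50.0, k)         # critical proportion at the 50-mite action threshold

leaves_sampled, infested = 30, 19       # hypothetical 30-leaf sample, 19 leaves infested
decision = "above" if infested / leaves_sampled > p_crit else "below"
print(f"critical proportion = {p_crit:.2f}; classify density {decision} the threshold")
```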
Hollow metal nanostructures for enhanced plasmonics (Conference Presentation)
NASA Astrophysics Data System (ADS)
Genç, Aziz; Patarroyo, Javier; Sancho-Parramon, Jordi; Duchamp, Martial; Gonzalez, Edgar; Bastus, Neus G.; Houben, Lothar; Dunin-Borkowski, Rafal; Puntes, Victor F.; Arbiol, Jordi
2016-03-01
Complex metal nanoparticles offer a great playground for plasmonic nanoengineering, where it is possible to cover plasmon resonances from ultraviolet to near infrared by modifying the morphologies from solid nanocubes to nanoframes, multiwalled hollow nanoboxes or even nanotubes with hybrid (alternating solid and hollow) structures. We experimentally show that structural modifications, i.e. void size and final morphology, are the dominant determinants of the final plasmonic properties, while compositional variations allow us to achieve fine tuning. EELS mappings of localized surface plasmon resonances (LSPRs) reveal an enhanced plasmon field inside the voids of hollow AuAg nanostructures along with a more homogeneous distribution of the plasmon fields around the nanostructures. With the present methodology and the appropriate samples we are able to compare the effects of hybridization at the nanoscale in hollow nanostructures. Boundary element method (BEM) simulations also reveal the effects of structural nanoengineering on the plasmonic properties of hollow metal nanostructures. The possibility of tuning the LSPR properties of hollow metal nanostructures over a wide range of energies by modifying the void size/shell thickness is shown by BEM simulations, which reveal that void size is the dominant factor for tuning the LSPRs. As a proof of concept for enhanced plasmonic properties, we show effective label-free sensing of bovine serum albumin (BSA) with some of our hollow nanostructures. In addition, the different plasmonic modes observed have also been studied and mapped in 3D.
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. 3.
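For reference, the per-arm sample size needed to detect a given standardized mean difference (SMD) with a two-sample t-test can be approximated as below (normal approximation, two-sided alpha = 0.05, 80% power); under this approximation an average trial of 153 participants (about 76 per arm) is powered only for an SMD of roughly 0.45.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(smd, alpha=0.05, power=0.80):
    """Approximate per-arm sample size to detect a standardized mean difference."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z / smd) ** 2)

for smd in (0.3, 0.5, 0.8):
    print(f"SMD {smd}: {n_per_arm(smd)} per arm")
# SMD 0.3 -> ~175 per arm, 0.5 -> ~63 per arm, 0.8 -> ~25 per arm
```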
Diffusing wave spectroscopy studies of gelling systems
NASA Astrophysics Data System (ADS)
Horne, David S.
1991-06-01
The recognition that the transmission of light through a concentrated, opaque system can be treated as a diffusion process has extended the application of photon correlation techniques to the study of particle size, mobility and interactions in such systems. Solutions of the photon diffusion equation are sensitive to the boundary conditions imposed by the geometry of the scattering apparatus. The apparatus, incorporating a bifurcated fiber optic bundle for light transmission between source, sample and detector, takes advantage of the particularly simple solution for a back-scattering configuration. Its ability to measure particle size using monodisperse polystyrene latices and to respond to concentration dependent particle interactions in a study of casein micelle mobility in skim and concentrated milks is demonstrated. Finally, the changes in dynamic light scattering behavior occurring during colloidal gel formation are described and discussed.
Rodrigues, Nils; Weiskopf, Daniel
2018-01-01
Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
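The core scaling idea (not the authors' two-way sweep layout algorithm) can be sketched as follows: choose a column height that grows roughly logarithmically with the bin count and shrink the per-column dot diameter accordingly. The binning and the specific height rule below are arbitrary illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
data = rng.lognormal(mean=2.0, sigma=0.8, size=2000)
counts, edges = np.histogram(data, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])
base = 0.9 * (edges[1] - edges[0])          # dot diameter used for a count of 1

fig, ax = plt.subplots()
for x, c in zip(centers, counts):
    if c == 0:
        continue
    height = base * (1 + np.log2(c))        # column height grows ~logarithmically
    d = height / c                          # so dots in crowded columns shrink
    for i in range(c):
        ax.add_patch(plt.Circle((x, (i + 0.5) * d), d / 2, fill=False))
ax.set_aspect("equal")
ax.autoscale_view()
ax.set_xlabel("value")
plt.show()
```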
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-11-01
The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.
NASA Astrophysics Data System (ADS)
Ni, W.; Zhang, Z.; Sun, G.
2017-12-01
Several large-scale maps of forest AGB have been released [1] [2] [3]. However, these existing global or regional datasets were only approximations based on combining land cover type and representative values instead of measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies of existing forest biomass stock maps with in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gaps between the size of field sampling plots used to develop (or validate) estimation models and the pixel size of these maps, and from the limited availability of field sampling plots of sufficient size for the verification of these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field samples over plots as large as the resolution of regional maps. Smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density and tree species. What is measured in field sampling are the geometrical characteristics of forest stands, including the DBH, tree heights and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture the geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset that is capable of direct measurement of forest spatial structures may therefore serve as a ladder to bridge the scale gaps between the pixel size of regional forest AGB maps and field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structures [7, 8]. In this study, the forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge. The TanDEM-X InSAR data used in this study and the resulting forest AGB map are shown in Figure 2. The technical details and further analysis will be given in the final report. Acknowledgment: This work was supported in part by the National Basic Research Program of China (Grant No. 2013CB733401, 2013CB733404), and in part by the National Natural Science Foundation of China (Grant Nos. 41471311, 41371357, 41301395).
Size distribution of extracellular vesicles by optical correlation techniques.
Montis, Costanza; Zendrini, Andrea; Valle, Francesco; Busatto, Sara; Paolini, Lucia; Radeghieri, Annalisa; Salvatore, Annalisa; Berti, Debora; Bergese, Paolo
2017-10-01
Understanding the colloidal properties of extracellular vesicles (EVs) is key to advance fundamental knowledge in this field and to develop effective EV-based diagnostics, therapeutics and devices. Determination of size distribution and of colloidal stability of purified EVs resuspended in buffered media is a complex and challenging issue - because of the wide range of EV diameters (from 30 to 2000nm), concentrations of interest and membrane properties, and the possible presence of co-isolated contaminants with similar size and densities, such as protein aggregates and fat globules - which is still waiting to be fully addressed. We report here a fully detailed protocol for accurate and robust determination of the size distribution and stability of EV samples which leverages a dedicated combination of Fluorescence Correlation Spectroscopy (FCS) and Dynamic Light Scattering (DLS). The theoretical background, critical experimental steps and data analysis procedures are thoroughly presented and finally illustrated through the representative case study of EV formulations obtained from culture media of B16 melanoma cells, a murine tumor cell line used as a model for human skin cancers. Copyright © 2017 Elsevier B.V. All rights reserved.
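Both DLS and FCS ultimately report a translational diffusion coefficient, which is converted to a hydrodynamic diameter via the Stokes-Einstein relation; a minimal sketch of that conversion is below, with the temperature, viscosity, and measured coefficient all being assumed values rather than data from the study.

```python
import math

K_B = 1.380649e-23        # J/K, Boltzmann constant
T = 298.15                # K, assumed measurement temperature
ETA = 0.89e-3             # Pa*s, assumed viscosity of a water-like buffer at 25 C

def hydrodynamic_diameter(diffusion_coeff):
    """Stokes-Einstein: d_h = k_B * T / (3 * pi * eta * D), with D in m^2/s."""
    return K_B * T / (3 * math.pi * ETA * diffusion_coeff)

D_measured = 4.0e-12      # m^2/s, hypothetical diffusion coefficient of an EV population
print(f"d_h = {hydrodynamic_diameter(D_measured) * 1e9:.0f} nm")
```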
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and samples size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
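The reasoning behind reading a negative effect-size/sample-size correlation as a signature of publication bias can be illustrated with a small simulation: if every study estimates the same true effect but only significant results are reported, small studies survive only when they happen to overestimate the effect. All parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
true_d = 0.2                        # one common underlying effect for every study
published = []
for _ in range(5000):
    n = int(rng.integers(10, 200))  # per-group sample size varies across studies
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    t, p = ttest_ind(b, a)
    d_obs = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    if p < 0.05:                    # only "significant" studies get published
        published.append((d_obs, n))

d, n = np.array(published).T
print("r(effect size, sample size) among published studies:",
      round(float(np.corrcoef(d, n)[0, 1]), 2))
```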
Kubo, Takuya; Nishimura, Naoki; Furuta, Hayato; Kubota, Kei; Naito, Toyohiro; Otsuka, Koji
2017-11-10
We report novel capillary gel electrophoresis (CGE) with poly(ethylene glycol) (PEG)-based hydrogels for the effective separations of biomolecules containing sugars and DNAs based on a molecular size effect. The gel capillaries were prepared in a fused silica capillary modified with 3-(trimethoxysilyl)propyl methacrylate using a variety of the PEG-based hydrogels. After the fundamental evaluations in CGE regarding the separation based on the molecular size effect depending on the crosslinking density, the optimized capillary provided the efficient separation of a glucose ladder (G1 to G20). In addition, another capillary showed the successful separation of a DNA ladder in the range of 10-1100 base pairs, which is superior to an authentic acrylamide-based gel capillary. For both glucose and DNA ladders, the separation ranges against the molecular size were simply controllable by alteration of the concentration and/or units of ethylene oxide in the PEG-based crosslinker. Finally, we demonstrated the separations of real samples, which included sugars cleaved from monoclonal antibodies (mAbs), and the efficient separations based on the molecular size effect were achieved. Copyright © 2017 Elsevier B.V. All rights reserved.
Shock and Release Behaviour of Silica Based Granular Materials
NASA Astrophysics Data System (ADS)
Braithwaite, Chris; Perry, James; Taylor, Nicholas
2017-06-01
A large number of experiments have been conducted using the Cavendish single stage gas gun to investigate the dynamic properties of sand. The results included successful measurements of release in dry materials, demonstrating that this is markedly different to the loading path. The effect of moisture was examined and shown to be strongest where the material was close to saturation, at which point the microstructure of the exact sample configuration plays a significant role in the response. Finally, the effect of sample morphology was probed, and whilst it was found to be significant at low rates, in the shock regime impedance appears to be more strongly influenced by the presence of moisture or a fraction of small particle size debris.
Predicting discovery rates of genomic features.
Gravel, Simon
2014-06-01
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
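A classical estimator in the same spirit (not the jackknife or linear-programming methods of the paper) is the Good-Toulmin extrapolation from the pilot site-frequency spectrum; it is unbiased for extrapolation factors t ≤ 1 and becomes unstable beyond that, which is one motivation for the more robust estimators described above. The spectrum below is hypothetical and truncated for brevity.

```python
import numpy as np

def good_toulmin_new_variants(sfs, t):
    """Expected number of NEW variants when a pilot of n samples is extended by
    t*n additional samples; sfs[k-1] = number of variants seen exactly k times
    in the pilot. Unbiased for t <= 1; needs smoothing for t > 1."""
    k = np.arange(1, len(sfs) + 1)
    return float(np.sum((-1.0) ** (k + 1) * t ** k * np.asarray(sfs, float)))

sfs = [12000, 5200, 3100, 2100, 1500]     # hypothetical truncated spectrum
observed = sum(sfs)
predicted = observed + good_toulmin_new_variants(sfs, 1.0)
print(f"{observed} variants in the pilot; ~{predicted:.0f} expected after doubling it")
```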
NASA Astrophysics Data System (ADS)
Nelson, Erica June; van Dokkum, Pieter G.; Brammer, Gabriel; Förster Schreiber, Natascha; Franx, Marijn; Fumagalli, Mattia; Patel, Shannon; Rix, Hans-Walter; Skelton, Rosalind E.; Bezanson, Rachel; Da Cunha, Elisabete; Kriek, Mariska; Labbe, Ivo; Lundgren, Britt; Quadri, Ryan; Schmidt, Kasper B.
2012-03-01
We investigate the buildup of galaxies at z ~ 1 using maps of Hα and stellar continuum emission for a sample of 57 galaxies with rest-frame Hα equivalent widths >100 Å in the 3D-HST grism survey. We find that the Hα emission broadly follows the rest-frame R-band light but that it is typically somewhat more extended and clumpy. We quantify the spatial distribution with the half-light radius. The median Hα effective radius re (Hα) is 4.2 ± 0.1 kpc but the sizes span a large range, from compact objects with re (Hα) ~ 1.0 kpc to extended disks with re (Hα) ~ 15 kpc. Comparing Hα sizes to continuum sizes, we find
Computational fluid dynamics (CFD) studies of a miniaturized dissolution system.
Frenning, G; Ahnfelt, E; Sjögren, E; Lennernäs, H
2017-04-15
Dissolution testing is an important tool that has applications ranging from fundamental studies of drug-release mechanisms to quality control of the final product. The rate of release of the drug from the delivery system is known to be affected by hydrodynamics. In this study we used computational fluid dynamics to simulate and investigate the hydrodynamics in a novel miniaturized dissolution method for parenteral formulations. The dissolution method is based on a rotating disc system and uses a rotating sample reservoir which is separated from the remaining dissolution medium by a nylon screen. Sample reservoirs of two sizes were investigated (SR6 and SR8) and the hydrodynamic studies were performed at rotation rates of 100, 200 and 400rpm. The overall fluid flow was similar for all investigated cases, with a lateral upward spiraling motion and central downward motion in the form of a vortex to and through the screen. The simulations indicated that the exchange of dissolution medium between the sample reservoir and the remaining release medium was rapid for typical screens, for which almost complete mixing would be expected to occur within less than one minute at 400rpm. The local hydrodynamic conditions in the sample reservoirs depended on their size; SR8 appeared to be relatively more affected than SR6 by the resistance to liquid flow resulting from the screen. Copyright © 2017 Elsevier B.V. All rights reserved.
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
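The linear trend test referred to here is commonly implemented as the Cochran-Armitage statistic over ordered genotype classes; a minimal sketch with hypothetical case/control counts (not data from the study) is given below.

```python
import numpy as np
from scipy.stats import norm

def linear_trend_test(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test across genotype classes (e.g., aa, Aa, AA)."""
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    x = np.asarray(scores, float)
    n = cases + controls                 # per-genotype totals
    N, R = n.sum(), cases.sum()
    T = np.sum(x * (cases - n * R / N))  # score statistic
    var_T = (R / N) * (1 - R / N) * (np.sum(x ** 2 * n) - np.sum(x * n) ** 2 / N)
    z = T / np.sqrt(var_T)
    return z, 2 * norm.sf(abs(z))

# hypothetical genotype counts for an affected/unaffected dichotomy
z, p = linear_trend_test(cases=[30, 60, 40], controls=[70, 90, 45])
print(f"z = {z:.2f}, p = {p:.3g}")
```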
Pre and post annealed low cost ZnO nanorods on seeded substrate
NASA Astrophysics Data System (ADS)
Nordin, M. N.; Kamil, Wan Maryam Wan Ahmad
2017-05-01
We wish to report the photonic band gap (where light is confined) in low-cost ZnO nanorods created by a two-step chemical bath deposition (CBD) method, where the glass substrates were pre-treated with two different seeding thicknesses of ZnO, 100 nm (sample a) and 150 nm (sample b), using radio frequency magnetron sputtering. The samples were annealed at 600°C for 1 hour in air before and after being immersed into the chemical solution for the CBD process. To observe the presence of a photonic band gap in the samples, a UV-Visible-NIR spectrophotometer was utilized; both sample a and sample b achieved a wide band gap between 240 nm and 380 nm, within the UV range typical for ZnO, however sample b provided better light confinement, which may be attributed to the difference in average nanorod size. Field Emission Scanning Electron Microscopy (FESEM) of the samples revealed better-oriented nanorods uniformly scattered across the surface when substrates were coated with a 100 nm seeding layer, whilst the 150 nm seeded sample showed a poor distribution of nanorods, probably due to defects in the sample. Finally, the crystal structure of the ZnO crystallites was revealed by X-ray diffraction, and both samples were polycrystalline with a hexagonal wurtzite structure that matched JCPDS No. 36-1451. The 100 nm pre-seeded sample was recognized to have a bigger average crystallite size; however, sample b was suggested to have higher crystalline quality. In conclusion, sample b is recognized as a better candidate for future photonic applications due to its more apparent photonic band gap, which may be contributed by the more random distribution of the nanorods observed in FESEM images as well as the higher crystalline quality suggested by XRD measurements.
Microstructural Quantification of Rapidly Solidified Undercooled D2 Tool Steel
NASA Astrophysics Data System (ADS)
Valloton, J.; Herlach, D. M.; Henein, H.; Sediako, D.
2017-10-01
Rapid solidification of D2 tool steel is investigated experimentally using electromagnetic levitation (EML) under terrestrial and reduced gravity conditions and impulse atomization (IA), a drop tube type of apparatus. IA produces powders 300 to 1400 μm in size. This allows the investigation of a large range of cooling rates ( 100 to 10,000 K/s) with a single experiment. On the other hand, EML allows direct measurements of the thermal history, including primary and eutectic nucleation undercoolings, for samples 6 to 7 mm in diameter. The final microstructures at room temperature consist of retained supersaturated austenite surrounded by eutectic of austenite and M7C3 carbides. Rapid solidification effectively suppresses the formation of ferrite in IA, while a small amount of ferrite is detected in EML samples. High primary phase undercoolings and high cooling rates tend to refine the microstructure, which results in a better dispersion of the eutectic carbides. Evaluation of the cell spacing in EML and IA samples shows that the scale of the final microstructure is mainly governed by coarsening. Electron backscattered diffraction (EBSD) analysis of IA samples reveals that IA powders are polycrystalline, regardless of the solidification conditions. EBSD on EML samples reveals strong differences between the microstructure of droplets solidified on the ground and in microgravity conditions. While the former ones are polycrystalline with many different grains, the EML sample solidified in microgravity shows a strong texture with few much larger grains having twinning relationships. This indicates that fluid flow has a strong influence on grain refinement in this system.
Serial reconstruction of order and serial recall in verbal short-term memory.
Quinlan, Philip T; Roodenrys, Steven; Miller, Leonie M
2017-10-01
We carried out a series of experiments on verbal short-term memory for lists of words. In the first experiment, participants were tested via immediate serial recall, and word frequency and list set size were manipulated. With closed lists, the same set of items was repeatedly sampled, and with open lists, no item was presented more than once. In serial recall, effects of word frequency and set size were found. When a serial reconstruction-of-order task was used, in a second experiment, robust effects of word frequency emerged, but set size failed to show an effect. The effects of word frequency in order reconstruction were further examined in two final experiments. The data from these experiments revealed that the effects of word frequency are robust and apparently are not exclusively indicative of output processes. In light of these findings, we propose a multiple-mechanisms account in which word frequency can influence both retrieval and preretrieval processes.
Light-scattering flow cytometry for identification and characterization of blood microparticles
NASA Astrophysics Data System (ADS)
Konokhova, Anastasiya I.; Yurkin, Maxim A.; Moskalensky, Alexander E.; Chernyshev, Andrei V.; Tsvetovskaya, Galina A.; Chikova, Elena D.; Maltsev, Valeri P.
2012-05-01
We describe a novel approach to studying blood microparticles using the scanning flow cytometer, which measures light scattering patterns (LSPs) of individual particles. Starting from platelet-rich plasma, we separated spherical microparticles from non-spherical plasma constituents, such as platelets and cell debris, based on the similarity of their LSP to that of a sphere. This provides a label-free method for identification (detection) of microparticles, including those larger than 1 μm. Next, we rigorously characterized each measured particle, determining its size and refractive index, including the errors of these estimates. Finally, we employed a deconvolution algorithm to determine the size and refractive index distributions of the whole population of microparticles, accounting for the largely different reliability of individual measurements. The developed methods were tested on a blood sample from a healthy donor, resulting in good agreement with literature data. The only limitation of this approach is its size detection limit, currently about 0.5 μm due to the laser wavelength of 0.66 μm used.
Dukic, Maja; Adams, Jonathan D.; Fantner, Georg E.
2015-01-01
Optical beam deflection (OBD) is the most prevalent method for measuring cantilever deflections in atomic force microscopy (AFM), mainly due to its excellent noise performance. In contrast, piezoresistive strain-sensing techniques provide benefits over OBD in readout size and the ability to image in light-sensitive or opaque environments, but traditionally have worse noise performance. Miniaturisation of cantilevers, however, brings much greater benefit to the noise performance of piezoresistive sensing than to OBD. In this paper, we show both theoretically and experimentally that by using small-sized piezoresistive cantilevers, AFM imaging noise equal to or lower than the OBD readout noise is achievable at standard scanning speeds and power dissipation. We demonstrate that with both readouts we achieve a system noise of ≈0.3 Å at 20 kHz measurement bandwidth. Finally, we show that small-sized piezoresistive cantilevers are well suited for piezoresistive nanoscale imaging of biological and solid state samples in air. PMID:26574164
Gas permeability of ice-templated, unidirectional porous ceramics
NASA Astrophysics Data System (ADS)
Seuba, Jordi; Deville, Sylvain; Guizard, Christian; Stevenson, Adam J.
2016-01-01
We investigate the gas flow behavior of unidirectional porous ceramics processed by ice-templating. The pore volume ranged between 54% and 72% and the pore size between 2.9 μm and 19.1 μm. The maximum permeability (?? m?) was measured in samples with the highest total pore volume (72%) and pore size (19.1 μm). However, we demonstrate that it is possible to achieve a similar permeability (?? m?) at 54% pore volume by modification of the pore shape. These results were compared with those reported and measured for isotropic porous materials processed by conventional techniques. In unidirectional porous materials tortuosity (τ) is mainly controlled by pore size, unlike in isotropic porous structures where τ is linked to pore volume. Furthermore, we assessed the applicability of the Ergun and capillary models in the prediction of permeability and found that the capillary model accurately describes the gas flow behavior of unidirectional porous materials. Finally, we combined the permeability data obtained here with strength data for these materials to establish links between the strength and permeability of ice-templated materials.
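For orientation, the sketch below evaluates textbook forms of the two permeability estimates compared in the abstract, a capillary (Hagen-Poiseuille bundle) model and an Ergun/Blake-Kozeny relation; the coefficients and exact definitions used in the paper may differ, and the input values are only illustrative.

```python
# Hedged sketch: conventional capillary-bundle and Ergun-type permeability
# estimates (coefficients 32 and 150 are the textbook ones, not necessarily
# those used in the paper). Inputs: pore volume fraction, pore/particle
# diameter in metres, tortuosity.
def capillary_permeability(porosity, pore_diameter_m, tortuosity):
    return porosity * pore_diameter_m ** 2 / (32.0 * tortuosity)

def ergun_permeability(porosity, particle_diameter_m):
    return porosity ** 3 * particle_diameter_m ** 2 / (150.0 * (1.0 - porosity) ** 2)

print(capillary_permeability(0.72, 19.1e-6, 1.2))  # ~7e-12 m^2 for these illustrative inputs
print(ergun_permeability(0.72, 19.1e-6))
```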
NASA Astrophysics Data System (ADS)
Chen, Shuyi; Lu, Huigong; Wu, Yi-nan; Gu, Yifan; Li, Fengting; Morlay, Catherine
2016-09-01
Alumina-hercynite nano-spinel powders were prepared via one-step pyrolysis of an iron-acetylacetone-doped Al-based metal-organic framework (MOF), i.e., MIL-53(Al). The organic ferric source, iron acetylacetone, was incorporated in situ into the framework of MIL-53(Al) during the solvothermal synthesis process. Under high-temperature pyrolysis, alumina derived from the MIL-53(Al) matrix and ferric oxides originating from the decomposition of the organic ferric precursor in the framework were thermally converted into hercynite (FeAl2O4). The prepared samples were characterized using transmission electron microscopy, X-ray diffraction, N2 sorption, thermogravimetry, Raman spectroscopy and X-ray photoelectron spectroscopy. The final products were identified as being composed of alumina, hercynite and trace amounts of carbon, depending on the pyrolysis temperature. The experimental results showed that the hercynite phase can be obtained and stabilized at low temperatures, between 900 and 1100 °C, under an inert atmosphere. The final products were composed of nano-sized particles with an average individual crystal size below 100 nm and specific surface areas of 18-49 m2 g-1.
NASA Astrophysics Data System (ADS)
De la Calle, Inmaculada; Menta, Mathieu; Séby, Fabienne
2016-11-01
Due to the increasing use of nanoparticles (NPs) in consumer products, it has become necessary to develop different strategies for their detection, identification, characterization and quantification in a wide variety of samples. Since the analysis of NPs in consumer products and environmental samples is particularly troublesome, a detailed description of the challenges and limitations is given here. This review mainly focuses on the sample preparation procedures applied with the most commonly used techniques for metallic and metal oxide NP characterization in consumer products, together with the most outstanding publications on biological and environmental samples (from 2006 to 2015). We summarize the procedures applied for total metal content, extraction/separation and/or preconcentration of NPs from the matrix, separation of metallic NPs from their ions or from larger particles, and NP size fractionation. Sample preparation procedures specifically for microscopy are also described. Selected applications in cosmetics, food, other consumer products, biological tissues and environmental samples are presented. The advantages and drawbacks of these procedures are considered. Moreover, selected simplified schemes for NP sample preparation, as well as the techniques usually applied, are included. Finally, promising directions for further investigation are discussed.
Silica dust exposure: Effect of filter size to compliance determination
NASA Astrophysics Data System (ADS)
Amran, Suhaily; Latif, Mohd Talib; Khan, Md Firoz; Leman, Abdul Mutalib; Goh, Eric; Jaafar, Shoffian Amin
2016-11-01
Monitoring of respirable dust was performed using an integrated sampling system consisting of a sampling pump fitted with filter media and a separating device such as a cyclone or special cassette. Depending on the method selected, the filter is either a 25 mm or a 37 mm polyvinyl chloride (PVC) filter. The aim of this study was to compare the performance of the two filter types during personal respirable dust sampling for silica dust under field conditions. The comparison focused on the final compliance judgment based on both datasets. Eight-hour parallel sampling of personal respirable dust exposure was performed on 30 crusher operators at six quarries. Each crusher operator wore a parallel set of integrated sampling trains containing either a 25 mm or a 37 mm PVC filter. Each set consisted of a standard-flow SKC sampler attached to an SKC GS3 cyclone and a two-piece cassette loaded with a 5.0 µm PVC filter. Samples were analyzed by the gravimetric technique. Personal respirable dust exposures from the two filter types showed a significant positive correlation (p < 0.05) with a moderate relationship (r² = 0.6431). Personal exposure based on the 25 mm PVC filter indicated 0.1% non-compliance overall, while the 37 mm PVC filter indicated a similar finding at 0.4%. Both datasets showed similar arithmetic means (AM) and geometric means (GM). Overall, we conclude that personal respirable dust exposure based on either the 25 mm or the 37 mm PVC filter gives a similar compliance determination. Both filters are reliable for respirable dust monitoring of silica-related exposures.
Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T
2016-11-14
Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permutated gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
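A minimal sketch of the bootstrap re-sampling idea behind such a stability assessment; discover_modules is a hypothetical stand-in for any gene network inference algorithm, and the best-match Jaccard score is an illustrative similarity measure, not necessarily the criterion used by SABRE.

```python
# Hedged sketch: score each module found on the full data by its best Jaccard
# overlap with modules re-discovered on bootstrap re-samples of the subjects.
import numpy as np

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def module_stability(expr, discover_modules, n_boot=100, seed=0):
    # expr: subjects x genes matrix; discover_modules returns a list of gene-id sets
    rng = np.random.default_rng(seed)
    reference = discover_modules(expr)
    scores = np.zeros((n_boot, len(reference)))
    for b in range(n_boot):
        idx = rng.choice(expr.shape[0], size=expr.shape[0], replace=True)
        boot_modules = discover_modules(expr[idx, :])
        for i, mod in enumerate(reference):
            scores[b, i] = max((jaccard(mod, m) for m in boot_modules), default=0.0)
    return scores.mean(axis=0)  # mean best-match similarity per reference module
```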
Bioaerosols study in central Taiwan during summer season.
Wang, Chun-Chin; Fang, Guor-Cheng; Lee, LienYao
2007-04-01
Suspended particles, of which bioaerosols are one type, are one of the main causes of poor air quality in Taiwan. Bioaerosols include allergens such as fungi, bacteria, actinomycetes, arthropods and protozoa, as well as microbial products such as mycotoxins, endotoxins and glucans. When allergens and microbial products are suspended in the air, local air quality is severely affected. In addition, when the particle size is small enough to pass through the respiratory tract and enter the human body, the health of the local population is also threatened. Therefore, this study aimed to determine the concentrations and types of bacteria during the summer period at four sampling sites in Taichung City, central Taiwan. The results indicated that the total average bacterial concentrations, using R2A medium incubated for 48 h, were 7.3 x 10(2) and 1.2 x 10(3) cfu/m3 at the Chung-Ming elementary school sampling site during the daytime and night-time periods of the summer season. In addition, the total average bacterial concentrations using R2A medium incubated for 48 h were 2.2 x 10(3) and 2.5 x 10(3) cfu/m3 at the Taichung refuse incineration plant sampling site during the daytime and night-time periods. At the Rice Field sampling site, the total average bacterial concentrations using R2A medium incubated for 48 h were 3.4 x 10(3) and 3.5 x 10(3) cfu/m3 for the daytime and night-time periods. Finally, the total average bacterial concentrations using R2A medium incubated for 48 h were 1.6 x 10(3) and 1.9 x 10(3) cfu/m3 at the Central Taiwan Science Park sampling site during the daytime and night-time periods of the summer season. Moreover, the average bacterial concentration increased with incubation time in the growth medium for particle sizes of 0.65-1.1, 1.1-2.1, 2.1-3.3, 3.3-4.7 and 4.7-7.0 microm. The total average bacterial concentration showed no significant difference between the day and night sampling periods at any sampling site when expressed in terms of order of magnitude. The highest average bacterial concentration was found in the particle size range of 0.53-0.71 mm (average bioaerosol size in the range of 2.1-4.7 microm) at each sampling site. In addition, more than 20 kinds of bacteria were found at each sampling site, with rod, coccus and filamentous shapes.
A thermal desorption mass spectrometer for freshly nucleated secondary aerosol particles
NASA Astrophysics Data System (ADS)
Held, A.; Gonser, S. G.
2012-04-01
Secondary aerosol formation in the atmosphere is observed in a large variety of locations worldwide, introducing new particles to the atmosphere which can grow to sizes relevant for health and climate effects of aerosols. The chemical reactions leading to atmospheric secondary aerosol formation are not yet fully understood. At the same time, analyzing the chemical composition of freshly nucleated particles is still a challenging task. We are currently finishing the development of a field portable aerosol mass spectrometer for nucleation particles with diameters smaller than 30 nm. This instrument consists of a custom-built aerosol sizing and collection unit coupled to a time-of-flight mass spectrometer (TOF-MS). The aerosol sizing and collection unit is composed of three major parts: (1) a unipolar corona aerosol charger, (2) a radial differential mobility analyzer (rDMA) for aerosol size separation, and (3) an electrostatic precipitator for aerosol collection. After collection, the aerosol sample is thermally desorbed, and the resulting gas sample is transferred to the TOF-MS for chemical analysis. The unipolar charger is based on corona discharge from carbon fibres (e.g. Han et al., 2008). This design allows efficient charging at voltages below 2 kV, thus eliminating the potential for ozone production which would interfere with the collected aerosol. With the current configuration the extrinsic charging efficiency for 20 nm particles is 32 %. The compact radial DMA similar to the design of Zhang et al. (1995) is optimized for a diameter range from 1 nm to 100 nm. Preliminary tests show that monodisperse aerosol samples (geometric standard deviation of 1.09) at 10 nm, 20 nm, and 30 nm can easily be separated from the ambient polydisperse aerosol population. Finally, the size-segregated aerosol sample is collected on a high-voltage biased metal filament. The collected sample is protected from contamination using a He sheath counterflow. Resistive heating of the filament allows temperature-controlled desorption of compounds of different volatility. We will present preliminary characterization experiments of the aerosol sizing and collection unit coupled to the mass spectrometer. Funding by the German Research Foundation (DFG) under grant DFG HE5214/3-1 is gratefully acknowledged. Han, B., Kim, H.J., Kim, Y.J., and Sioutas, C. (2008) Unipolar charging of ultrafine particles using carbon fiber ionizers. Aerosol Sci. Technol, 42, 793-800. Zhang, S.-H., Akutsu, Y., Russell, L.M., Flagan, R.C., and Seinfeld, J.H. (1995) Radial Differential Mobility Analyzer. Aerosol Sci. Technol, 23, 357-372.
Dating Studies of Elephant Tusks Using Accelerator Mass Spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sideras-Haddad, E; Brown, T A
A new method for determining the year of birth, the year of death, and hence the age at death, of post-bomb and recently deceased elephants has been developed. The technique is based on Accelerator Mass Spectrometry radiocarbon analyses of small-sized samples extracted from along the length of a ge-line of an elephant tusk. The measured radiocarbon concentrations in the samples from a tusk can be compared to the ¹⁴C atmospheric bomb-pulse curve to derive the growth years of the initial and final samples from the tusk. Initial data from the application of this method to two tusks will be presented. Potentially, the method may play a significant role in wildlife management practices of African national parks. Additionally, the method may contribute to the underpinnings of efforts to define new international trade regulations, which could, in effect, decrease poaching and the killing of very young animals.
Vázquez-Martínez, Guadalupe; Rodriguez, Mario H; Hernández-Hernández, Fidel; Ibarra, Jorge E
2004-04-01
An efficient strategy, based on a combination of procedures, was developed to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis. Samples were initially cultured in solid ASN-10 medium, and a crude separation of major contaminants from P. animalis filaments was achieved by washing in a series of centrifugations and resuspensions in liquid medium. Then, manageable filament fragments were obtained by probe sonication. Fragmentation was followed by forceful washing, using vacuum-driven filtration through an 8-microm pore size membrane and an excess of water. Washed fragments were cultured and treated with a sequential exposure to four different antibiotics. Finally, axenic cultures were obtained from serial dilutions of treated fragments. Monitoring under microscope examination and by inoculation in Luria-Bertani (LB) agar plates indicated either axenicity or the degree of contamination throughout the strategy.
Non-Gaussian diffusion in static disordered media
NASA Astrophysics Data System (ADS)
Luo, Liang; Yi, Ming
2018-04-01
Non-Gaussian diffusion is commonly considered a result of fluctuating diffusivity, which is correlated in time or in space or both. In this work, we investigate non-Gaussian diffusion in static disordered media via a quenched trap model, where the diffusivity is spatially correlated. Several unique effects due to quenched disorder are reported. We analytically estimate the diffusion coefficient D_dis and its fluctuation over samples of finite size. We show a mechanism of population splitting in the non-Gaussian diffusion. It results in a sharp peak in the distribution of displacement P(x, t) around x = 0, which has frequently been observed in experiments. We examine the fidelity of the coarse-grained diffusion map, which is reconstructed from particle trajectories. Finally, we propose a procedure to estimate the correlation length in static disordered environments, where the information stored in the sample-to-sample fluctuation is utilized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pombet, Denis; Desnoyers, Yvon; Charters, Grant
2013-07-01
The TruPro® process makes it possible to collect a significant number of samples for characterizing radiological materials. This innovative, alternative technique is tested for the ANDRA quality-control inspection of cemented packages. It proves to be quicker and more prolific than the current methodology. Using classical statistics and geostatistics approaches, the physical and radiological characteristics of two hulls containing wastes (sludges or concentrates) immobilized in a hydraulic binder are assessed in this paper. The waste homogeneity is also evaluated against the ANDRA criterion. Sensitivity to sample size (support effect), the presence of extreme values, the acceptable deviation rate and the minimum number of data are discussed. The final objectives are to check the homogeneity of the two characterized radwaste packages and to validate and reinforce this alternative characterization methodology. (authors)
Freeway travel speed calculation model based on ETC transaction data.
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
Real-time traffic flow conditions on freeways are becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on the freeway, which provides a new way to estimate freeway travel speed. First, the paper analyzes the structure of ETC transaction data and presents the data preprocessing procedure. Then, a dual-level travel speed calculation model is established for different levels of sample size. In order to ensure a sufficient sample size, ETC data for enter-leave toll plaza pairs that span more than one road segment are used to calculate the travel speed of every road segment. A reduction coefficient α and a reliable weight θ for the sample vehicle speeds are introduced in the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated that the average relative error was about 6.5%, which means that the freeway travel speed can be estimated accurately by the proposed model. The proposed model helps to improve freeway operation monitoring and management, as well as providing useful information for freeway travelers.
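A toy sketch of the per-segment speed estimate that underlies such a model, assuming each ETC record provides an entry time, exit time and plaza-to-plaza distance; the reduction coefficient is interpreted here as a simple multiplicative discount and the reliable-weight scheme of the paper is replaced by a plain average, so this is only the skeleton of the approach.

```python
# Hedged sketch: average vehicle speed over a toll-plaza pair from ETC records.
from dataclasses import dataclass

@dataclass
class Transaction:
    enter_time_s: float   # entry timestamp, seconds
    leave_time_s: float   # exit timestamp, seconds
    distance_km: float    # plaza-to-plaza distance

def segment_speed_kmh(transactions, alpha=0.95):
    # alpha: illustrative reduction coefficient discounting time lost at plazas
    speeds = [t.distance_km / ((t.leave_time_s - t.enter_time_s) / 3600.0) * alpha
              for t in transactions if t.leave_time_s > t.enter_time_s]
    return sum(speeds) / len(speeds) if speeds else float("nan")

records = [Transaction(0.0, 1800.0, 45.0), Transaction(60.0, 1920.0, 45.0)]
print(round(segment_speed_kmh(records), 1))
```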
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
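A bare-bones illustration of the approximate Bayesian computation ingredient in such a method: rejection sampling of a single Brownian-motion rate on a toy star phylogeny using a summary statistic. MECCA's actual hybrid likelihood/ABC-MCMC machinery for incompletely sampled trees and diversification rates is far richer; the prior, summary statistic and tolerance below are assumptions made only for the sketch.

```python
# Hedged sketch: ABC rejection for a trait-evolution rate sigma^2 on a star tree,
# where tip variance ~ sigma^2 * tree depth under Brownian motion.
import numpy as np

def abc_rejection(observed_traits, tree_depth, n_draws=20000, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    obs_var = observed_traits.var()
    kept = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 5.0)                     # prior on sigma^2
        sim = rng.normal(0.0, np.sqrt(rate * tree_depth), size=observed_traits.size)
        if abs(sim.var() - obs_var) / obs_var < tol:     # keep draws matching the summary
            kept.append(rate)
    return np.array(kept)

traits = np.random.default_rng(1).normal(0.0, np.sqrt(1.0 * 2.0), 50)  # true rate 1, depth 2
posterior = abc_rejection(traits, tree_depth=2.0)
print(posterior.mean(), posterior.size)
```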
Evaluation of Microvascularity by CD34 Expression in Esophagus and Oral Squamous Cell Carcinoma.
Shahsavari, Fatemeh; Farhadi, Sareh; Sadri, Donia; Sedehi, Marzieh
2015-06-01
The present study was designed to evaluate microvascularity by CD34 expression in esophageal and oral squamous cell carcinoma (SCC). The study used 40 paraffin-embedded samples, including 20 oral SCC and 20 esophageal SCC samples, and immunohistochemical staining was performed with a CD34 monoclonal antibody. Fisher's exact test was used to compare the frequency of expression between the two study groups. In the oral SCC samples there was a significant correlation of age and tumor size with CD34 expression (p < 0.05), and no significant correlation with sex or tumor differentiation level (grading) (p > 0.05). In the esophageal SCC samples there was no significant correlation of age, sex, tumor size or tumor differentiation level (grading) with CD34 expression (p > 0.05). There was no significant difference in CD34 expression frequency between oral and esophageal SCC (p = 0.583). Finally, CD34 expression was rated 'high' in the majority of esophageal and oral SCC cases. It seems that angiogenic or non-angiogenic factors other than CD34 may play a more important role in explaining the different clinical behavior of SCC at these two locations. Such factors should be considered along with CD34 expression when interpreting the different clinical behavior of SCC at these locations.
NASA Astrophysics Data System (ADS)
Godino, Neus; Jorde, Felix; Lawlor, Daryl; Jaeger, Magnus; Duschl, Claus
2015-08-01
Microalgae are a promising source of bioactive ingredients for the food, pharmaceutical and cosmetic industries. Every microalgae research group or production facility faces one major problem: the potential contamination of the algal cells with bacteria. Prior to storing microalgae in strain collections or cultivating them in bioreactors, it is necessary to carry out laborious purification procedures to separate the microalgae from the undesired bacterial cells. In this work, we present a disposable microfluidic cartridge for the high-throughput purification of microalgae samples based on inertial microfluidics. Some of the most relevant microalgae strains are larger than the relatively small, few-micron bacterial cells, making them distinguishable by size. The inertial microfluidic cartridge was fabricated with inexpensive materials, like pressure sensitive adhesive (PSA) and thin plastic layers, which were patterned using a simple cutting plotter. In spite of fabrication restrictions and the intrinsic difficulties of biological samples, the separation of microalgae from bacteria reached values in excess of 99%, previously only achieved using conventional high-end, high-cost lithography methods. Moreover, because the separation is simple and high-throughput, serial purification steps can be concatenated to exponentially decrease the absolute amount of bacteria in the final purified sample.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
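As a concrete illustration of the quantity tabulated in such surveys, the sketch below computes the achieved (post hoc) power of a two-sample t test for a given sample effect size d and per-group n; the numbers are generic and not taken from the surveyed articles.

```python
# Hedged sketch: post hoc power of a two-sided, two-sample t test.
from scipy.stats import nct, t

def posthoc_power(d, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    crit = t.ppf(1 - alpha / 2, df)
    ncp = d * (n_per_group / 2) ** 0.5        # noncentrality for equal group sizes
    return nct.sf(crit, df, ncp) + nct.cdf(-crit, df, ncp)

print(round(posthoc_power(d=0.5, n_per_group=30), 2))  # ~0.47 for a medium effect
```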
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostaed, A., E-mail: alimostaed@yahoo.com; Saghafian, H.; Mostaed, E.
2013-02-15
The effects of reinforcing particle type (SiC and TiC) on the morphology and precipitation hardening behavior of Al–4.5%Cu based nanocomposites synthesized via mechanical milling were investigated in the current work. In order to study the microstructure and morphology of the mechanically milled powder, X-ray diffraction, scanning electron microscopy and high resolution transmission electron microscopy were utilized. Results revealed that at the early stages of mechanical milling, when the reinforcing particles are polycrystalline, the alloying process is enhanced more when TiC particles are used as reinforcement; at the final stages of mechanical milling, when the reinforcing particles are single crystals, the alloying process is enhanced more when SiC particles are used. Transmission electron microscopy results demonstrated that Al–4.5 wt.%Cu based nanocomposite powders were synthesized and confirmed that the mutual diffusion of aluminum and copper occurs through the interfacial (200) plane. The hardness results showed that introducing 4 vol.% of reinforcing particles (SiC or TiC) not only considerably decreases the porosity of the bulk composite samples, but also approximately doubles the hardness of the Al–4.5 wt.%Cu alloy (53.4 HB). Finally, apart from the (localized) TEM and scanning electron microscopy observations, the decline in hardness of the TiC- and SiC-containing samples after 1.5 and 2 h of aging at 473 K, respectively, proves that the SiC particles are smaller than the TiC ones. - Highlights: ► HRTEM results show mutual diffusion of Al and Cu occurs through the (200) planes. ► TiC particles enhance the alloying process more than the SiC ones at the early stages of MM. ► SiC particles enhance the alloying process more than the TiC ones at the final stages of MM.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
NASA Astrophysics Data System (ADS)
Koster, N. B.; Molkenboer, F. T.; van Veldhoven, E.; Oostrom, S.
2011-04-01
We report on our findings on EUVL reticle contamination removal, inspection and repair. We show that carbon contamination can be removed without damage to the reticle by our plasma process. Organic particles, simulated by PSL spheres, can also be removed from both the surface of the absorber and the bottom of the trenches. The particles shrink in size during the plasma treatment until they vanish. The necessary cleaning time for PSL spheres was determined on Ru-coated samples, and the final experiment was performed on our dummy reticle. Finally, we show that the Helium Ion Microscope in combination with a Gas Injection System is capable of depositing additional lines and squares on the reticle with sufficient resolution for pattern repair.
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
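For comparison, the sketch below implements one common formulation of the pre-existing mean effective sample size, assuming a Brownian-motion correlation matrix V scaled to unit tree depth and mESS = 1'V^{-1}1; this is offered only as an assumed form of the earlier concept, and the regression effective sample size proposed in the paper is a different quantity.

```python
# Hedged sketch: a mean-ESS-style quantity, 1' V^{-1} 1, for a phylogenetic
# correlation matrix V; equals n for a star tree and shrinks with correlation.
import numpy as np

def mean_ess(V):
    ones = np.ones(V.shape[0])
    return ones @ np.linalg.solve(V, ones)

star = np.eye(4)                                  # 4 independent tips
clade = np.array([[1.0, 0.8, 0.0, 0.0],
                  [0.8, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.8],
                  [0.0, 0.0, 0.8, 1.0]])          # two pairs of close relatives
print(mean_ess(star), round(mean_ess(clade), 2))  # 4.0 vs ~2.22
```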
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Keane; Xiao-Chun Shi; Tong-man Ong
The project staff partnered with Costas Sioutas from the University of Southern California to apply the VACES (Versatile Aerosol Concentration Enhancement System) to a diesel engine test facility at West Virginia University Department of Mechanical Engineering and later the NIOSH Lake Lynn Mine facility. The VACES system was able to allow diesel exhaust particulate matter (DPM) to grow to sufficient particle size to be efficiently collected with the SKC Biosampler impinger device, directly into a suspension of simulated pulmonary surfactant. At the WVU-MAE facility, the concentration of the aerosol was too high to allow efficient use of the VACES concentration enhancement, although aerosol collection was successful. Collection at the LLL was excellent with the diluted exhaust stream. In excess of 50 samples were collected at the LLL facility, along with matching filter samples, at multiple engine speed and load conditions. Replicate samples were combined and concentration increased using a centrifugal concentrator. Bioassays were negative for all tested samples, but this is believed to be due to insufficient concentration in the final assay suspensions.
NASA Astrophysics Data System (ADS)
Allen, Gregory Harold
Chemical speciation and source apportionment of size fractionated atmospheric aerosols were investigated using laser desorption time-of-flight mass spectrometry (LD TOF-MS) and source apportionment was carried out using carbon-14 accelerator mass spectrometry (14C AMS). Sample collection was carried out using the Davis Rotating-drum Unit for Monitoring impact analyzer in Davis, Colfax, and Yosemite, CA. Ambient atmospheric aerosols collected during the winters of 2010/11 and 2011/12 showed a significant difference in the types of compounds found in the small- and large-sized particles. The difference was due to the increased number of oxidized carbon species that were found in the small particle size ranges but not in the large particle size ranges. Overall, the ambient atmospheric aerosols collected during the winter in Davis, CA had an average fraction modern of F14C = 0.753 +/- 0.006, indicating that the majority of the size fractionated particles originated from biogenic sources. Samples collected during the King Fire in Colfax, CA were used to determine the contribution of biomass burning (wildfire) aerosols. Factor analysis was used to reduce the ions found in the LD TOF-MS analysis of the King Fire samples. The final factor analysis generated a total of four factors that explained an overall 83% of the variance in the data set. Two of the factors correlated heavily with increased smoke events during the sample period. The increased smoke events produced a large number of highly oxidized organic aerosols (OOA2) and aromatic compounds that are indicative of biomass burning organic aerosols (WBOA). The signal intensities of the factors generated in the King Fire data were investigated in samples collected in Yosemite and Davis, CA, to examine the impact of biomass burning on ambient atmospheric aerosols. In both comparison sample collections the OOA2 and WBOA factors increased during biomass burning events located near the sampling sites. The correlation between the OOA2 and WBOA factors and smoke levels indicates that these factors can be used to identify the influence of biomass burning on ambient aerosols. The effectiveness of using the ChemWiki instead of a traditional textbook was investigated during the spring quarter of 2014. Student performance was measured using common midterms, a final, and pre/post content exams. We also employed surveys, the Colorado Learning Attitudes about Science Survey (CLASS) for Chemistry, and a weekly time-on-task survey to quantify students' attitudes and study habits. The effectiveness of the ChemWiki compared to a traditional textbook was examined using multiple linear regression analysis with a standard non-inferiority testing framework. Results show that the performance of students in the section who were assigned readings from the ChemWiki was non-inferior to the performance of students in the section who were assigned readings from the traditional textbook, indicating that the ChemWiki does not substantially differ from the standard textbook in terms of student learning outcomes. The results from the surveys also suggest that the two classes were similar in their beliefs about chemistry and overall average time spent studying. These results indicate that the ChemWiki is a viable cost-saving alternative to traditional textbooks. The impact of using active learning techniques in a large lecture general chemistry class was investigated by assessing student performance and attitudes during the fall 2014 and winter 2015 quarters.
One instructor applied active learning strategies while the remaining instructors employed more traditional lecture styles. Student performance, learning, learning environments, and attitudes were measured using standardized pre/post exams, common final exams, classroom observations, and the CLASS chemistry instrument in large lecture general chemistry courses. Classroom observation data showed that the active learning class was the most student-centered; of the other classes, two instructors were transitional in their teaching style and the remaining two primarily employed traditional lecture techniques. The active learning class had the highest student performance, but the difference was only statistically significant when compared to the two traditional lecture classes. Overall, our data showed a trend of student performance increasing as the instructional style became more student-centered. Student attitudes did not seem to correlate with any specific instructional style, and students in the active learning class had attitudes similar to those of students in the other classes. The active learning class was successful in increasing the average time students spent studying outside of class, a statistically significant difference of about 1.5 to 3.0 hrs/week.
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each of the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software using a method developed in our laboratory. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of the counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples (examinations) need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
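The statistics behind the customized sample size can be sketched as follows, assuming the relative error of a mean estimated from n cells behaves like z·CV/√n for a cell-level coefficient of variation CV; the Cells Analyzer software may use a different exact formulation, and CV = 0.30 below is illustrative.

```python
# Hedged sketch: relative error of a mean from n cells, and the n needed to
# reach a target relative error at 95% reliability.
from math import ceil, sqrt

def relative_error(cv, n, z=1.96):
    return z * cv / sqrt(n)

def customized_sample_size(cv, re_target=0.05, z=1.96):
    return ceil((z * cv / re_target) ** 2)

print(round(relative_error(0.30, 97), 3))   # ~0.06 for ~100 cells at CV 0.30
print(customized_sample_size(0.30))         # ~139 cells to reach RE <= 0.05
```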
Accounting for twin births in sample size calculations for randomised trials.
Yelland, Lisa N; Sullivan, Thomas R; Collins, Carmel T; Price, David J; McPhee, Andrew J; Lee, Katherine J
2018-05-04
Including twins in randomised trials leads to non-independence or clustering in the data. Clustering has important implications for sample size calculations, yet few trials take this into account. Estimates of the intracluster correlation coefficient (ICC), or the correlation between outcomes of twins, are needed to assist with sample size planning. Our aims were to provide ICC estimates for infant outcomes, describe the information that must be specified in order to account for clustering due to twins in sample size calculations, and develop a simple tool for performing sample size calculations for trials including twins. ICCs were estimated for infant outcomes collected in four randomised trials that included twins. The information required to account for clustering due to twins in sample size calculations is described. A tool that calculates the sample size based on this information was developed in Microsoft Excel and in R as a Shiny web app. ICC estimates ranged between -0.12, indicating a weak negative relationship, and 0.98, indicating a strong positive relationship between outcomes of twins. Example calculations illustrate how the ICC estimates and sample size calculator can be used to determine the target sample size for trials including twins. Clustering among outcomes measured on twins should be taken into account in sample size calculations to obtain the desired power. Our ICC estimates and sample size calculator will be useful for designing future trials that include twins. Publication of additional ICCs is needed to further assist with sample size planning for future trials. © 2018 John Wiley & Sons Ltd.
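A first-order sketch of the adjustment involved, assuming the clustering from twins can be absorbed into a design effect of 1 + q·ICC, where q is the expected proportion of infants who are twins; the published calculator handles additional details (for example, whether twins are randomised to the same or different arms), so treat this as an approximation only.

```python
# Hedged sketch: inflate an individually randomised two-arm sample size by an
# approximate design effect for a mix of singletons and twin pairs.
from math import ceil
from scipy.stats import norm

def unadjusted_n_per_arm(delta, sd, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def twin_adjusted_n_per_arm(delta, sd, icc, prop_twins, alpha=0.05, power=0.8):
    deff = 1 + prop_twins * icc
    return ceil(unadjusted_n_per_arm(delta, sd, alpha, power) * deff)

print(twin_adjusted_n_per_arm(delta=0.5, sd=1.0, icc=0.7, prop_twins=0.2))
# about 63 infants per arm unadjusted, 72 after the twin adjustment
```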
Standen, PJ; Threapleton, K; Richardson, A; Connell, L; Brown, DJ; Battersby, S; Platts, F; Burton, A
2016-01-01
Objective: To assess the feasibility of conducting a randomised controlled trial of a home-based virtual reality system for rehabilitation of the arm following stroke. Design: Two group feasibility randomised controlled trial of intervention versus usual care. Setting: Patients’ homes. Participants: Patients aged 18 or over, with residual arm dysfunction following stroke and no longer receiving any other intensive rehabilitation. Interventions: Eight weeks’ use of a low cost home-based virtual reality system employing infra-red capture to translate the position of the hand into game play or usual care. Main measures: The primary objective was to collect information on the feasibility of a trial, including recruitment, collection of outcome measures and staff support required. Patients were assessed at three time points using the Wolf Motor Function Test, Nine-Hole Peg Test, Motor Activity Log and Nottingham Extended Activities of Daily Living. Results: Over 15 months only 47 people were referred to the team. Twenty seven were randomised and 18 (67%) of those completed final outcome measures. Sample size calculation based on data from the Wolf Motor Function Test indicated a requirement for 38 per group. There was a significantly greater change from baseline in the intervention group on midpoint Wolf Grip strength and two subscales of the final Motor Activity Log. Training in the use of the equipment took a median of 230 minutes per patient. Conclusions: To achieve the required sample size, a definitive home-based trial would require additional strategies to boost recruitment rates and adequate resources for patient support. PMID:27029939
Standen, P J; Threapleton, K; Richardson, A; Connell, L; Brown, D J; Battersby, S; Platts, F; Burton, A
2017-03-01
To assess the feasibility of conducting a randomised controlled trial of a home-based virtual reality system for rehabilitation of the arm following stroke. Two group feasibility randomised controlled trial of intervention versus usual care. Patients' homes. Patients aged 18 or over, with residual arm dysfunction following stroke and no longer receiving any other intensive rehabilitation. Eight weeks' use of a low cost home-based virtual reality system employing infra-red capture to translate the position of the hand into game play or usual care. The primary objective was to collect information on the feasibility of a trial, including recruitment, collection of outcome measures and staff support required. Patients were assessed at three time points using the Wolf Motor Function Test, Nine-Hole Peg Test, Motor Activity Log and Nottingham Extended Activities of Daily Living. Over 15 months only 47 people were referred to the team. Twenty seven were randomised and 18 (67%) of those completed final outcome measures. Sample size calculation based on data from the Wolf Motor Function Test indicated a requirement for 38 per group. There was a significantly greater change from baseline in the intervention group on midpoint Wolf Grip strength and two subscales of the final Motor Activity Log. Training in the use of the equipment took a median of 230 minutes per patient. To achieve the required sample size, a definitive home-based trial would require additional strategies to boost recruitment rates and adequate resources for patient support.
The Influence of Alumina Properties on its Dissolution in Smelting Electrolyte
NASA Astrophysics Data System (ADS)
Bagshaw, A. N.; Welch, B. J.
The dissolution of a wide range of commercially produced aluminas in modified cryolite bath was studied on a laboratory scale. Most of the aluminas were products of conventional refineries and smelter dry scrubbing systems; a few were produced in laboratory and pilot calciners, enabling greater flexibility in the calcination process and the final properties. The mode of alumina feeding and the size of addition approximated to the point feeder situation. Alpha-alumina content, B.E.T. surface area and median particle size had little impact on dissolution behaviour. The volatiles content, expressed as L.O.I., the morphology of the original hydrate and the mode of calcination had the most influence. Discrete intermediate oxide phases were identified in all samples; delta-alumina content impacted most on dissolution. The flow properties of an alumina affected its overall dissolution.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
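For the one-sided normal-mean case, the calibration mentioned above can be sketched numerically: the UMPBT rejects when the z statistic exceeds √(2·ln γ), so a one-sided significance level maps to an evidence threshold γ = exp(z²/2). The mapping below is illustrative of that relationship only.

```python
# Hedged sketch: Bayes-factor evidence threshold gamma matched to a one-sided
# significance level alpha under the normal-mean UMPBT rejection rule z > sqrt(2 ln gamma).
from math import exp
from scipy.stats import norm

def gamma_for_alpha(alpha):
    z = norm.ppf(1 - alpha)      # one-sided rejection threshold
    return exp(z ** 2 / 2)       # matched Bayes-factor threshold

for alpha in (0.05, 0.005):
    print(alpha, round(gamma_for_alpha(alpha), 1))  # ~3.9 and ~27.6
```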
Final Report of the Haystack Orbital Debris Data Review Panel
NASA Technical Reports Server (NTRS)
Barton, David K.; Brillinger, David; McDaniel, Patrick; Pollock, Kenneth H.; El-Shaarawi, A. H.; Tuley, Michael T.
1998-01-01
The Haystack Orbital Debris Data Review Panel was established in December 1996 to consider the adequacy of the data on orbital debris gathered over the past several years with the Haystack radar, and the accuracy of the methods used to estimate the flux vs. size relationship for this debris. The four specific issues addressed by the Panel were: (1) the number of observations relative to the estimated population of interest; (2) the inherent ambiguity between the measured radar cross section (RCS) and the inferred physical size of the object; (3) the inherent aspect-angle limitation in viewing each object and its relationship to object geometry; and (4) the adequacy of the sample data set to characterize the debris population's potential geometry. Further discussion and interpretation of these issues, and identification of the detailed questions contributing to them, are provided in this report.
Role of microstructure on twin nucleation and growth in HCP titanium: A statistical study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arul Kumar, M.; Wroński, M.; McCabe, Rodney James
In this study, a detailed statistical analysis is performed using Electron Back Scatter Diffraction (EBSD) to establish the effect of microstructure on twin nucleation and growth in deformed commercial purity hexagonal close packed (HCP) titanium. Rolled titanium samples are compressed along rolling, transverse and normal directions to establish statistical correlations for {10–12}, {11–21}, and {11–22} twins. A recently developed automated EBSD-twinning analysis software is employed for the statistical analysis. Finally, the analysis provides the following key findings: (I) grain size and strain dependence is different for twin nucleation and growth; (II) twinning statistics can be generalized for the HCP metals magnesium, zirconium and titanium; and (III) complex microstructure, where grain shape and size distribution is heterogeneous, requires multi-point statistical correlations.
Role of microstructure on twin nucleation and growth in HCP titanium: A statistical study
Arul Kumar, M.; Wroński, M.; McCabe, Rodney James; ...
2018-02-01
In this study, a detailed statistical analysis is performed using Electron Back Scatter Diffraction (EBSD) to establish the effect of microstructure on twin nucleation and growth in deformed commercial purity hexagonal close packed (HCP) titanium. Rolled titanium samples are compressed along rolling, transverse and normal directions to establish statistical correlations for {10–12}, {11–21}, and {11–22} twins. A recently developed automated EBSD-twinning analysis software is employed for the statistical analysis. Finally, the analysis provides the following key findings: (I) grain size and strain dependence is different for twin nucleation and growth; (II) twinning statistics can be generalized for the HCP metals magnesium, zirconium and titanium; and (III) complex microstructure, where grain shape and size distribution is heterogeneous, requires multi-point statistical correlations.
Cross Cultural Indicators of Independent Learning in Young Children: A Jordanian Case.
Almeqdad, Qais; Al-Hamouri, Firas; Zghoul, Rafe'a A; Al-Rousan, Ayoub; Whitebread, David
2016-06-10
This study explores the level of Independent Learning (IL) in a sample of Jordanian preschoolers. The behaviors of sixty preschool children aged 5-6 years were observed and rated by their teachers against an Arabic version of the Children's Independent Learning Development (CHILD 3-5) observational instrument, to examine independent learning among young children according to gender, engagement level, parental education and family size. The results showed that preschoolers may display some IL behaviors, particularly those related to the pro-social and cognitive areas. They also indicated that children from highly educated home environments demonstrated more IL behaviors than those from less educated environments. Finally, children from larger families showed fewer IL behaviors than those from smaller ones. Results and implications are discussed.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis with a longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most commonly encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method, but the distribution of the product method is recommended for practical use because of its lower computational load compared with the bootstrapping method. An R package has been developed for sample size determination by the product method in longitudinal mediation study design.
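A stripped-down sketch of the simulation-based power calculation described above, using a single-level mediation model and Sobel's test only; the multilevel longitudinal structure, within-subject correlation and the other two tests are not reproduced here, and all parameter values are illustrative.

```python
# Hedged sketch: empirical power of Sobel's test for an indirect effect a*b,
# estimated by simulating x -> m -> y data and refitting OLS each time.
import numpy as np
from scipy.stats import norm

def slope_and_se(X, y):
    # OLS of y on [1, X]; return the coefficient and SE of the first predictor
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sigma2 = resid @ resid / (len(y) - X1.shape[1])
    cov = sigma2 * np.linalg.inv(X1.T @ X1)
    return beta[1], np.sqrt(cov[1, 1])

def mediation_power(n, a=0.3, b=0.3, sims=1000, alpha=0.05, seed=42):
    rng = np.random.default_rng(seed)
    zcrit = norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, se_a = slope_and_se(x[:, None], m)
        b_hat, se_b = slope_and_se(np.column_stack([m, x]), y)
        sobel_z = (a_hat * b_hat) / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        hits += abs(sobel_z) > zcrit
    return hits / sims

print(mediation_power(n=100))  # empirical power at n = 100 for these illustrative effects
```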
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three different pots of very different size serves to demonstrate that it is the absolute sample size that matters the most in determining the accuracy of the findings of the poll, not the relative sample size, i.e. the size of the sample in relation to its population.
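The point can be checked numerically: under a hedged assumption of simple random sampling for a proportion, the margin of error is governed by the absolute sample size n, with the finite population correction contributing almost nothing once the population N is large.

```python
# Illustration (with assumed numbers): the margin of error of a proportion
# depends on the absolute sample size n, and only negligibly on the
# population size N once N is large (finite population correction).
import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:                      # finite population correction
        se *= math.sqrt((N - n) / (N - 1))
    return z * se

for N in (10_000, 1_000_000, 100_000_000):   # "pots" of very different size
    print(N, round(margin_of_error(1000, N), 4))
# n = 1000 gives roughly a 3% margin of error regardless of N
```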
A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies
Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.
2018-01-01
Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists of using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and for pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch-length ECD, instead of one consensus tree, yields a considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
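As a hedged illustration of the two-sample form of the test, the sketch below applies scipy's ks_2samp to two invented branch-length samples standing in for the clock-constrained and unconstrained ensembles.

```python
# A minimal sketch of the two-sample KS comparison described above, using
# hypothetical branch-length samples pooled from clock-constrained and
# unconstrained tree ensembles (stand-ins for real Bayesian MCMC output).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clock_branches = rng.exponential(scale=0.05, size=5000)          # assumed data
unconstrained_branches = rng.gamma(shape=0.8, scale=0.07, size=5000)

stat, p_value = stats.ks_2samp(clock_branches, unconstrained_branches)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
# A small p-value suggests the clock-constrained and unconstrained branch-length
# distributions differ, i.e. evidence against the clock model.
```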
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency of reporting sample size calculations and the sample sizes used were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact factors (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study. Another study reported consideration of sample size without a calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
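A minimal sketch of the recommended procedure, assuming a two-arm comparison of means: compute an upper confidence limit (UCL) of the SD from a pilot sample via the chi-square distribution and feed it into the standard sample size formula; the pilot SD, pilot size and detectable difference below are invented.

```python
# Hedged sketch: UCL of the SD from a pilot sample, then the usual two-sample
# formula; the 60%/80% UCL levels follow the recommendation above.
import math
from scipy import stats

def sd_ucl(s, n, level=0.60):
    """One-sided upper confidence limit for sigma at the given confidence level."""
    chi2_lower = stats.chi2.ppf(1 - level, df=n - 1)
    return s * math.sqrt((n - 1) / chi2_lower)

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

s_pilot, n_pilot, delta = 40.0, 20, 22.0      # assumed pilot SD, pilot size, effect
for level in (0.60, 0.80):
    ucl = sd_ucl(s_pilot, n_pilot, level)
    print(level, round(ucl, 1), n_per_group(ucl, delta))
```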
Reiki Therapy for Symptom Management in Children Receiving Palliative Care: A Pilot Study.
Thrane, Susan E; Maurer, Scott H; Ren, Dianxu; Danford, Cynthia A; Cohen, Susan M
2017-05-01
Pain may be reported in one-half to three-fourths of children with cancer and other terminal conditions, and anxiety in about one-third of them. Pharmacologic methods do not always give satisfactory symptom relief. Complementary therapies such as Reiki may help children manage symptoms. This pre-post mixed-methods single-group pilot study examined feasibility, acceptability, and the outcomes of pain, anxiety, and relaxation using Reiki therapy with children receiving palliative care. A convenience sample of children ages 7 to 16 and their parents were recruited from a palliative care service. Two 24-minute Reiki sessions were completed at the children's home. Paired t tests or Wilcoxon signed-rank tests were calculated to compare change from pre to post for outcome variables. Significance was set at P < .10. Cohen d effect sizes were calculated. The final sample included 8 verbal and 8 nonverbal children, 16 mothers, and 1 nurse. All mean scores for outcome variables decreased from pre- to posttreatment for both sessions. Significant decreases were observed for pain for treatment 1 in nonverbal children (P = .063) and for respiratory rate for treatment 2 in verbal children (P = .009). Cohen d effect sizes were medium to large for most outcome measures. Decreased mean scores for outcome measures indicate that Reiki therapy did decrease pain, anxiety, and heart and respiratory rates, but the small sample size deterred statistical significance. This preliminary work suggests that complementary methods of treatment such as Reiki may be beneficial to support traditional methods to manage pain and anxiety in children receiving palliative care.
Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim
2012-01-01
Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A subset was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates using the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that highly correlated with the time-weighted average cassette results (R(2) = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variance of the calculated diesel particulate densities by particle diameter size supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.
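A hedged sketch of the adjustment-factor step described above, fitting a simple linear calibration between (invented) spectrometer readings and cassette-based averages.

```python
# Illustrative sketch (invented numbers): fit a linear adjustment factor relating
# real-time spectrometer readings to the average of the cassette-based
# time-weighted results, as described above.
import numpy as np

grimm = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.47])      # assumed mg/m^3
cassette = np.array([0.10, 0.16, 0.22, 0.27, 0.35, 0.41])   # assumed mg/m^3

slope, intercept = np.polyfit(grimm, cassette, 1)
predicted = slope * grimm + intercept
ss_res = np.sum((cassette - predicted) ** 2)
ss_tot = np.sum((cassette - cassette.mean()) ** 2)
print(f"adjusted = {slope:.2f} * grimm + {intercept:.3f}, R^2 = {1 - ss_res/ss_tot:.2f}")
```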
The final size of a SARS epidemic model without quarantine
NASA Astrophysics Data System (ADS)
Hsu, Sze-Bi; Roeger, Lih-Ing W.
2007-09-01
In this article, we present the continuing work on a SARS model without quarantine by Hsu and Hsieh [Sze-Bi Hsu, Ying-Hen Hsieh, Modeling intervention measures and severity-dependent public response during severe acute respiratory syndrome outbreak, SIAM J. Appl. Math. 66 (2006) 627-647]. An "acting basic reproductive number" ψ is used to predict the final size of the susceptible population. We find the relation among the final susceptible population size S_∞, the initial susceptible population S_0, and ψ. If ψ > 1, the disease will prevail and the final size of the susceptible population, S_∞, becomes zero; therefore, everyone in the population will be infected eventually. If ψ < 1, the disease dies out, and then S_∞ > 0, which means part of the population will never be infected. Also, when S_∞ > 0, S_∞ is increasing with respect to the initial susceptible population S_0, and decreasing with respect to the acting basic reproductive number ψ.
Investigating the effect of Cd-Mn co-doped nano-sized BiFeO3 on its physical properties
NASA Astrophysics Data System (ADS)
Ishaq, B.; Murtaza, G.; Sharif, S.; Azhar Khan, M.; Akhtar, Naeem; Will, I. G.; Saleem, Murtaza; Ramay, Shahid M.
This work deals with the investigation of the effects of Cd and Mn doping on the structural, magnetic, electronic and dielectric properties of Bi0.75Cd0.25Fe1-xMnxO3 multiferroic samples, prepared with a fixed Cd ratio and a varying Mn ratio of x = 0.0, 0.05, 0.10 and 0.15. The Cd-Mn doped samples were synthesized chemically using a microemulsion method. All the samples were finally sintered at 700 °C for 2 h to obtain the single-phase perovskite structure of BiFeO3. The synthesized samples were characterized by different techniques, such as X-ray diffractometry (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), an LCR meter, and magnetic measurements using VSM. XRD results confirm that BFO has a perovskite structure with crystallite sizes in the range of 24-54 nm. XRD results also reveal structural distortion due to doping of Cd at the A-site and Mn at the B-site of BFO. SEM results show that, as the substitution of Cd-Mn in BFO increases, the grain size decreases down to 30 nm. FTIR spectra showed prominent absorption bands at 555 cm-1 and 445 cm-1 corresponding to the stretching vibrations of the metal-ion complexes at site A and site B, respectively. The variation of the dielectric constant (ɛ‧) and loss tangent (tan δ) at room temperature in the range of 1 MHz to 3 GHz has been investigated. The results reveal that with Cd-Mn co-doping a slight decrease in dielectric constant is observed. Magnetic properties of the pure and Cd-Mn doped BFO samples have been studied at 300 K. The results reveal that undoped BiFeO3 exhibits weak ferromagnetic ordering due to the canting of its spins. The increase in magnetization and decrease in coercivity indicate that the material can be used in high-density recording media and memory devices.
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
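For orientation, a simple non-spatial calculation reproduces the flavor of these guidelines: the number of samples needed to estimate a mean to within ±10% with 90% confidence grows with the squared coefficient of variation; the CV values below are assumed, and spatial correlation is ignored.

```python
# Simple (non-spatial) illustration of the guideline above: samples needed to
# estimate a mean to within +/-10% with 90% confidence, given the coefficient
# of variation (CV); the CV values are assumed.
import math
from scipy import stats

def n_required(cv, rel_error=0.10, confidence=0.90):
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

for label, cv in [("lower-variability property (e.g. soil temperature)", 0.25),
                  ("higher-variability property (e.g. soil water content)", 0.80)]:
    print(label, n_required(cv))
```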
NASA Astrophysics Data System (ADS)
Lin, Y.-C.; Tsai, C.-J.; Wu, Y.-C.; Zhang, R.; Chi, K.-H.; Huang, Y.-T.; Lin, S.-H.; Hsu, S.-C.
2014-05-01
Traffic emissions are a significant source of airborne particulate matter (PM) in ambient environments. These emissions contain a high abundance of toxic metals and thus pose adverse effects on human health. Size-fractionated aerosol samples were collected from May to September 2013 using a micro-orifice uniform deposit impactor (MOUDI). Sample collection was conducted simultaneously at the inlet and outlet of Hsuehshan Tunnel in northern Taiwan, the second longest freeway tunnel (12.9 km) in Asia, in order to characterize the chemical constituents, size distributions, and fingerprinting ratios, as well as the emission factors of particulate metals emitted by vehicle fleets. A total of 36 metals in size-resolved aerosols were determined through inductively coupled plasma mass spectrometry. Three major groups of airborne PM metals, namely tailpipe emissions (Zn, Pb, and V), wear debris (Cu, Cd, Fe, Ga, Mn, Mo, Sb, and Sn), and resuspended dust (Ca, Mg, K, and Rb), were categorized on the basis of enrichment factor, correlation matrix, and principal component analyses. Size distributions of wear-originated metals resembled the pattern of crustal elements, which were dominated by super-micron particles (PM1-10). By contrast, tailpipe exhaust elements such as Zn, Pb, and V were distributed mainly in submicron particles. Using Cu as a tracer of wear abrasion, several inter-metal ratios, including Fe/Cu (14), Ba/Cu (1.05), Sb/Cu (0.16), Sn/Cu (0.10), and Ga/Cu (0.03), served as fingerprints for wear debris. The emission factor of PM10 mass was estimated to be 7.7 mg vkm-1, and the metal emissions were predominantly in super-micron particles (PM1-10). Finally, factors that possibly affect particulate metal emissions inside Hsuehshan Tunnel are discussed.
Novel tretinoin formulations: a drug-in-cyclodextrin-in-liposome approach.
Ascenso, Andreia; Cruz, Mariana; Euletério, Carla; Carvalho, Filomena A; Santos, Nuno C; Marques, Helena C; Simões, Sandra
2013-09-01
The aims of this experimental work were the incorporation and full characterization of the systems Tretinoin-in-dimethyl-beta-cyclodextrin-in-ultradeformable vesicles (Tretinoin-CyD-UDV) and Tretinoin-in-ultradeformable vesicles (Tretinoin-UDV). The Tretinoin-CyD complex was prepared by kneading, and the UDV by adding soybean phosphatidylcholine (SPC) to Tween® 80 followed by an appropriate volume of sodium phosphate buffer solution to make a 10%-20% lipid suspension. The resulting suspension was brought to the final mean vesicle size of approximately 150 nm by sequential filtration. The physicochemical characterization was based on: the evaluation of mean particle size and polydispersity index (PI), measured by photon correlation spectroscopy (PCS) and atomic force microscopy (AFM) topographic imaging; and the zeta potential (ζ-potential) and SPC concentration, determined by Laser-Doppler anemometry and an enzymatic-colorimetric test, respectively. The quantification of the incorporated Tretinoin and its chemical stability (during preparation and storage) was assayed by HPLC at 342 nm. It was possible to obtain the system Tretinoin-CyD-UDV. The mean vesicle size was the most stable parameter over the time course of the experiments. AFM showed that Tretinoin-CyD-UDV samples were very heterogeneous in size, having three distinct subpopulations, while Tretinoin-UDV samples had only one homogeneous size population. The ζ-potential measurements showed that the vesicle surface charge was low, as expected, presenting negative values. The incorporation efficiency was high, and no significant differences between Tretinoin-CyD-UDV and Tretinoin-UDV were observed. However, only the Tretinoin-UDV formulation with 20% lipid concentration remained chemically stable during the evaluation period. According to our results, Tretinoin-UDV with 20% lipid concentration seems to be a better approach than Tretinoin-CyD-UDV, given its higher chemical stability.
Bright, Philip; Hambly, Karen
2017-12-21
E-health software tools have been deployed in managing knee conditions. Reporting of patient and practitioner satisfaction in studies of e-health usage has not been widely explored. The objective of this review was to identify studies describing patient and practitioner satisfaction with software use concerning knee pain. A computerized search was undertaken: four electronic databases were searched from January 2007 until January 2017. Key words were decision dashboard, clinical decision, Web-based resource, evidence support, and knee. Full texts were scanned for effect size reporting and satisfaction scales from participants and practitioners. Binary regression was run, with impact factor and sample size as predictors and indicators for satisfaction and effect size reporting as dependent variables. Seventy-seven articles were retrieved; 37 studies were included in the final analysis. Ten studies reported patient satisfaction ratings (27.8%); a single study reported both patient and practitioner satisfaction (2.8%). Randomized controlled trials were the most common design (35%) and knee osteoarthritis the most prevalent condition (38%). Electronic patient-reported outcome measures and Web-based training were the most common interventions. No significant dependency was found within the regression models (p > 0.05). The proportion of studies reporting patient satisfaction was low, and practitioner satisfaction was poorly represented. There may be implications for the suitability of administering e-health; a medium for capturing further meta-evidence needs to be established and used as best practice for future studies. This is the first review of its kind to address patient and practitioner satisfaction with knee e-health.
A comparative analysis of adult body size and its correlates in acanthocephalan parasites.
Poulin, Robert; Wise, Megan; Moore, Janice
2003-07-30
Adult acanthocephalan body sizes vary interspecifically over more than two orders of magnitude; yet, despite its importance for our understanding of the coevolutionary links between hosts and parasites, this variation remains unexplained. Here, we used a comparative analysis to investigate how final adult sizes and relative increments in size following establishment in the definitive host are influenced by three potential determinants of acanthocephalan sizes: initial (cystacanth) size at infection, host body mass, and the thermal regime experienced during growth, i.e. whether the definitive host is an ectotherm or an endotherm. Relative growth from the cystacanth stage to the adult stage ranged from twofold to more than 10,000-fold across acanthocephalan species, averaging just over 100-fold. However, this relative increment in size did not correlate with host mass, and did not differ between acanthocephalan species using ectothermic hosts and those growing in endothermic hosts. In contrast, final acanthocephalan adult sizes correlated positively with host mass, and after correction for host mass, final adult sizes were higher in species parasitising endotherms than in those found in ectotherms. The relationship between host mass and acanthocephalan adult size practically disappears, however, once phylogenetic influences are taken into account. Positive relationships between adult acanthocephalan size, cystacanth size and egg size indicate that a given relative size is a feature of an acanthocephalan species at all stages of its life cycle. These relationships also suggest that adult size is to some extent determined by cystacanth size, and that the characteristics of the definitive host are not the sole determinants of parasite life history traits.
Muselík, Jan; Franc, Aleš; Doležel, Petr; Goněc, Roman; Krondlová, Anna; Lukášová, Ivana
2014-09-01
The article describes the development and production of tablets using direct compression of powder mixtures. The aim was to describe the impact of filler particle size and the time of lubricant addition during mixing on content uniformity according to the Good Manufacturing Practice (GMP) process validation requirements. Processes are regulated by complex directives, forcing producers to validate, using sophisticated methods, the content uniformity of intermediates as well as final products. Cutting down production time and material, shortening analyses, and fast and reliable statistical evaluation of results can reduce the final price without affecting product quality. The manufacturing process of directly compressed tablets containing the low-dose active pharmaceutical ingredient (API) warfarin, with content uniformity passing validation criteria, is used as a model example. Statistical methods have proved that the manufacturing process is reproducible. Methods suitable for elucidating various properties of the final blend, e.g., measurement of electrostatic charge by Faraday pail and evaluation of the mutual influences of the researched variables by partial least squares (PLS) regression, were used. Using these methods, it was proved that the filler with the larger particle size increased the content uniformity of both the blends and the ensuing tablets. Addition of the lubricant, magnesium stearate, during the blending process improved the content uniformity of blends containing the filler with larger particles. This seems to be caused by reduced sampling error due to the suppression of electrostatic charge.
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
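The sketch below illustrates how fixed-precision sample sizes follow from the two mean-variance models named above; the Taylor and Iwao coefficients are placeholders rather than the fitted values from this study.

```python
# Hedged sketch of fixed-precision (D = SE/mean) sample size curves from the two
# mean-variance models discussed above; coefficients are illustrative only.
import math

def n_taylor(mean, a, b, D=0.25):
    """Taylor's power law s^2 = a*m^b gives n = a * m^(b-2) / D^2."""
    return math.ceil(a * mean ** (b - 2) / D ** 2)

def n_iwao(mean, alpha, beta, D=0.25):
    """Iwao's patchiness regression m* = alpha + beta*m gives
    n = ((alpha + 1)/m + beta - 1) / D^2."""
    return math.ceil(((alpha + 1) / mean + beta - 1) / D ** 2)

for m in (0.02, 0.1, 0.5, 2.0):   # mean ticks per 10 m^2 quadrat (assumed)
    print(m, n_taylor(m, a=1.5, b=1.3), n_iwao(m, alpha=0.2, beta=1.4))
```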
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study's projected scientific and/or practical value to its total cost. By showing that a study's projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
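A minimal numerical sketch of the second rule, assuming a linear total cost F + c·n (for which the analytic optimum is n = F/c); the cost figures are invented.

```python
# Sketch of the second rule described above: pick n minimizing total cost divided
# by sqrt(n); with a linear cost F + c*n this minimum falls at n = F/c, which the
# grid search recovers.
import numpy as np

F, c = 200_000.0, 400.0                    # assumed fixed and per-subject costs
n = np.arange(10, 5001)
total_cost = F + c * n
objective = total_cost / np.sqrt(n)

best = n[np.argmin(objective)]
print("grid-search optimum:", best, " analytic F/c:", F / c)
```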
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistic tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in R language and can be installed from Bioconductor website. A user friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way for power and sample size estimation for an RNAseq experiment. It is also equipped with several unique features, including estimation for interested genes or pathway, power curve visualization, and parameter optimization.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
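For reference, a hedged sketch of the asymptotic unconditional McNemar sample size formula (often attributed to Connor) with illustrative discordant-pair proportions.

```python
# Hedged sketch of the asymptotic unconditional McNemar sample size formula for
# paired binary data; p10 and p01 below are illustrative, not from the paper.
import math
from scipy import stats

def n_mcnemar(p10, p01, alpha=0.05, power=0.80):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    pd = p10 + p01                     # total discordant proportion
    diff = p10 - p01
    num = (z_a * math.sqrt(pd) + z_b * math.sqrt(pd - diff ** 2)) ** 2
    return math.ceil(num / diff ** 2)  # number of pairs

print(n_mcnemar(p10=0.25, p01=0.15))
```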
Corrosion Behavior of Additive Manufactured Ti-6Al-4V Alloy in NaCl Solution
NASA Astrophysics Data System (ADS)
Yang, Jingjing; Yang, Huihui; Yu, Hanchen; Wang, Zemin; Zeng, Xiaoyan
2017-07-01
The microstructures, potentiodynamic curves, and electrochemical impedance spectra are characterized for Ti-6Al-4V samples produced by selective laser melting (SLM), SLM followed by heat treatment (HT), wire and arc additive manufacturing (WAAM), and traditional rolling, to investigate their corrosion behaviors. Results show that the processing technology plays a significant role in controlling the microstructures, which in turn directly determine the corrosion resistance. The order of corrosion resistance of these samples is SLM < WAAM < rolling < SLM+HT. Among the microstructural factors influencing corrosion resistance, the type of constituent phase is the main one, followed by grain size, and then morphology. Finally, the application potential of additive manufactured Ti-6Al-4V alloy is verified with respect to corrosion resistance.
The propagation of light through fibre reinforced composites
NASA Astrophysics Data System (ADS)
Sargent, J. P.; Upstill, C.
1986-06-01
Features of a generalized technique for detecting and measuring submicron gaps between the fiber and the matrix in low fiber-volume fraction composite materials are outlined. Sample microphotographs are provided to illustrate visual evidence of the presence of water and air pockets at the fiber-matrix interface, and the differences in refractive index between composite material components and impurities such as oils. The imagery was obtained using a laser to illuminate glass fiber reinforced epoxy samples. Attention is given to the geometric optics, evanescent wave optics and polarization effects associated with interfacial gaps. Finally, the scattering of light by the gaps and the corresponding size of the gaps are described statistically in terms of Rayleigh's theory, noting that only estimates are possible for the scattering due to limitations of available computing power.
Study of Evaporation Rate of Water in Hydrophobic Confinement using Forward Flux Sampling
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Debenedetti, Pablo G.
2012-02-01
Drying of hydrophobic cavities is of interest in understanding biological self assembly, protein stability and opening and closing of ion channels. Liquid-to-vapor transition of water in confinement is associated with large kinetic barriers which preclude its study using conventional simulation techniques. Using forward flux sampling to study the kinetics of the transition between two hydrophobic surfaces, we show that a) the free energy barriers to evaporation scale linearly with the distance between the two surfaces, d; b) the evaporation rates increase as the lateral size of the surfaces, L increases, and c) the transition state to evaporation for sufficiently large L is a cylindrical vapor cavity connecting the two hydrophobic surfaces. Finally, we decouple the effects of confinement geometry and surface chemistry on the evaporation rates.
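A toy sketch of the forward flux sampling idea in one dimension (overdamped Langevin particle in a double well) shows how the rate is assembled from a flux through the first interface and a product of interface-to-interface probabilities; the potential, interfaces and parameters are illustrative and unrelated to the confined-water system studied here.

```python
# Toy forward flux sampling (FFS) sketch: 1D overdamped Langevin dynamics in a
# double well U(x) = (x^2 - 1)^2, basin A near x = -1, basin B near x = +1.
import numpy as np

rng = np.random.default_rng(2)
dt, beta = 1e-3, 3.0                       # time step, inverse temperature
force = lambda x: -4.0 * x * (x**2 - 1.0)  # -dU/dx
noise = np.sqrt(2.0 * dt / beta)

def step(x):
    return x + force(x) * dt + noise * rng.normal()

lam_A = -0.8                               # basin A boundary (failure threshold)
interfaces = [-0.6, -0.2, 0.2, 0.6, 1.0]   # lambda_0 ... lambda_n

# Stage 0: flux of first crossings of lambda_0 coming out of basin A.
x, t, in_A, crossings, configs = -1.0, 0.0, True, 0, []
for _ in range(400_000):
    x = step(x); t += dt
    if x < lam_A:
        in_A = True
    if in_A and x >= interfaces[0]:
        crossings += 1; configs.append(x); in_A = False
flux = crossings / t

# Stages 1..n: conditional probabilities of reaching the next interface.
rate = flux
for nxt in interfaces[1:]:
    successes, new_configs, trials = 0, [], 300
    for _ in range(trials):
        x = rng.choice(configs)
        while lam_A < x < nxt:             # run until success or return to A
            x = step(x)
        if x >= nxt:
            successes += 1; new_configs.append(x)
    rate *= successes / trials
    configs = new_configs
    if not configs:
        break

print(f"flux through lambda_0 = {flux:.3g}, estimated A->B rate = {rate:.3g}")
```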
Potential emerging treatment in vitiligo using Er:YAG in combination with 5FU and clobetasol.
Mokhtari, Fatemeh; Bostakian, Anis; Shahmoradi, Zabihollah; Jafari-Koshki, Tohid; Iraji, Fariba; Faghihi, Gita; Hosseini, Sayed Mohsen; Bafandeh, Behzad
2018-04-01
Vitiligo is a pigmentary disorder of the skin affecting at least 1% of the world population, of all races and both sexes. Its importance is mainly due to subsequent social and psychological problems rather than clinical complications. Various treatment choices are available for vitiligo; however, laser-based courses have been shown to give more acceptable results. The aim of this trial was to evaluate the efficacy of Er:YAG laser as a supplement to topical 5FU and clobetasol in vitiligo patients. Two comparable vitiligo patches from each of 38 eligible patients were randomized to receive topical 5FU and clobetasol in the control group and additional Er:YAG laser in the intervention group. Major outcomes of interest were the size of the patch and the pigmentation score at randomization and 2 and 4 months after therapy. The final sample included 18 (47%) male patients, with a mean age of 35.66 ± 8.04 years. The performance of the Er:YAG group was superior at all sites. Reduction in the size of patches was greater in the Er:YAG group (p-value = .004). This group also showed higher pigmentation scores over the trial period than the control group (p-value < .001). Greater reduction in size and greater increase in pigmentation score were seen in the Er:YAG group, especially for short periods after therapy, and repeating laser sessions may help improve final outcomes. Er:YAG could help in reducing the complications of long-term topical treatments, achieving faster response, and improving patient adherence. © 2017 Wiley Periodicals, Inc.
Analysis on the grinding quality of palm oil fibers by using combined grinding equipment
NASA Astrophysics Data System (ADS)
Gan, H. L.; Gan, L. M.; Law, H. C.
2015-12-01
Malaysia is the second largest palm oil producer worldwide after Indonesia, indicating the abundance of palm oil wastes within the country. The plantation area is expected to increase to at least 5.2 million ha by 2020, and waste generation would be 50-70 times that of the plantation. However, the efficiency associated with the bulk density of these wastes is reduced; this is one of the main reasons for initiating this size reduction/grinding research. With appropriate parameters, grinding helps to enhance the inter-particle bindings, subsequently increasing the quality of final products. This paper focuses on the grinding quality of palm oil wastes, assessed using the Scanning Electron Microscope (SEM). The samples were first ground to powder at varying grinding speeds, and randomly chosen particles were then measured to obtain the size range. The grinding speed was varied from 15 Hz to 40 Hz. From the data obtained, it was found that particle fineness increased with increasing grinding speed. In general, the size ranged from 45 μm to about 600 μm, with the finest recorded at a speed of 40 Hz. It was also found that the binding was not encouraging at very low speeds. Therefore, the optimum grinding speed for oil palm residues lies in the range of 25 Hz to 30 Hz. However, there are still limitations to be overcome if the clarity of the images is to be enhanced.
Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review.
McKeown, Andrew; Gewandter, Jennifer S; McDermott, Michael P; Pawlowski, Joseph R; Poli, Joseph J; Rothstein, Daniel; Farrar, John T; Gilron, Ian; Katz, Nathaniel P; Lin, Allison H; Rappaport, Bob A; Rowbotham, Michael C; Turk, Dennis C; Dworkin, Robert H; Smith, Shannon M
2015-03-01
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. Copyright © 2015 American Pain Society. All rights reserved.
Correlational effect size benchmarks.
Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A
2015-03-01
Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-20
...The U.S. Small Business Administration (SBA) is increasing small business size standards for 36 industries in North American Industry Classification System (NAICS) Sector 52, Finance and Insurance, and for two industries in NAICS Sector 55, Management of Companies and Enterprises. In addition, SBA is changing the basis for measuring size from assets to annual receipts for one industry in NAICS Sector 52, namely, NAICS 522293, International Trade Financing. Finally, SBA is deleting NAICS 525930, Real Estate Investment Trusts, from its table of size standards. The U.S. Office of Management and Budget (OMB) included the financial activities formerly included in NAICS 525930 in NAICS 531110, NAICS 531120, NAICS 531130, NAICS 531190, and NAICS 525990. As part of its ongoing comprehensive size standards review, SBA evaluated all receipts based and assets based size standards in NAICS Sectors 52 and 55 to determine whether they should be retained or revised. SBA did not review the 1,500-employee size standard for NAICS 524126, Direct Property and Casualty Insurance Carriers, which it will review in the near future with other employee based size standards. This final rule is one of a series of final rules that will review size standards of industries grouped by NAICS Sectors.
Barry, Adam E; Szucs, Leigh E; Reyes, Jovanni V; Ji, Qian; Wilson, Kelly L; Thompson, Bruce
2016-10-01
Given the American Psychological Association's strong recommendation to always report effect sizes in research, scholars have a responsibility to provide complete information regarding their findings. The purposes of this study were to (a) determine the frequencies with which different effect sizes were reported in published, peer-reviewed articles in health education, promotion, and behavior journals and (b) discuss implications for reporting effect size in social science research. Across a 4-year time period (2010-2013), 1,950 peer-reviewed published articles were examined from the following six health education and behavior journals: American Journal of Health Behavior, American Journal of Health Promotion, Health Education & Behavior, Health Education Research, Journal of American College Health, and Journal of School Health. Quantitative features from eligible manuscripts were documented using Qualtrics online survey software. Of the 1,245 articles in the final sample that reported quantitative data analyses, approximately 47.9% (n = 597) reported an effect size. While 16 unique types of effect size were reported across all included journals, many of the effect sizes were reported with little frequency across most journals. Overall, odds ratio/adjusted odds ratio (n = 340, 50.1%), Pearson r/r(2) (n = 162, 23.8%), and eta squared/partial eta squared (n = 46, 7.2%) accounted for the most frequently used effect sizes. Quality research practice requires both testing statistical significance and reporting effect size. However, our study shows that a substantial portion of the published literature in health education and behavior lacks consistent reporting of effect size. © 2016 Society for Public Health Education.
Global Particle Size Distributions: Measurements during the Atmospheric Tomography (ATom) Project
NASA Astrophysics Data System (ADS)
Brock, C. A.; Williamson, C.; Kupc, A.; Froyd, K. D.; Richardson, M.; Weinzierl, B.; Dollner, M.; Schuh, H.; Erdesz, F.
2016-12-01
The Atmospheric Tomography (ATom) project is a three-year NASA-sponsored program to map the spatial and temporal distribution of greenhouse gases, reactive species, and aerosol particles from the Arctic to the Antarctic. In situ measurements are being made on the NASA DC-8 research aircraft, which will make four global circumnavigations of the Earth over the mid-Pacific and mid-Atlantic Oceans while continuously profiling between 0.2 and 13 km altitude. In situ microphysical measurements will provide an unique and unprecedented dataset of aerosol particle size distributions between 0.004 and 50 µm diameter. This unbiased, representative dataset allows investigation of new particle formation in the remote troposphere, placing strong observational constraints on the chemical and physical mechanisms that govern particle formation and growth to cloud-active sizes. Particles from 0.004 to 0.055 µm are measured with 10 condensation particle counters. Particles with diameters from 0.06 to 1.0 µm are measured with one-second resolution using two ultra-high sensitivity aerosol size spectrometers (UHSASes). A laser aerosol spectrometer (LAS) measures particle size distributions between 0.12 and 10 µm in diameter. Finally, a cloud, aerosol and precipitation spectrometer (CAPS) underwing optical spectrometer probe sizes ambient particles with diameters from 0.5 to 50 µm and images and sizes precipitation-sized particles. Additional particle instruments on the payload include a high-resolution time-of-flight aerosol mass spectrometer and a single particle laser-ablation aerosol mass spectrometer. The instruments are calibrated in the laboratory and on the aircraft. Calibrations are checked in flight by introducing four sizes of polystyrene latex (PSL) microspheres into the sampling inlet. The CAPS probe is calibrated using PSL and glass microspheres that are aspirated into the sample volume. Comparisons between the instruments and checks with the calibration aerosol indicate flight performance within uncertainties expected from laboratory calibrations. Analysis of data from the first ATom circuit in August 2016 shows high concentrations of newly formed particles in the tropical middle and upper troposphere and Arctic lower troposphere.
Aslan, O; Hamill, R M; Sweeney, T; Reardon, W; Mullen, A M
2009-01-01
It is essential to isolate high-quality DNA from muscle tissue for PCR-based applications in traceability of animal origin. We wished to examine the impact of cooking meat to a range of core temperatures on the quality and quantity of subsequently isolated genomic (specifically, nuclear) DNA. Triplicate steak samples were cooked in a water bath (100 degrees C) until their final internal temperature was 75, 80, 85, 90, 95, or 100 degrees C, and DNA was extracted. Deoxyribonucleic acid quantity was significantly reduced in cooked meat samples compared with raw (6.5 vs. 56.6 ng/microL; P < 0.001), but there was no relationship with cooking temperature. Quality (A(260)/A(280), i.e., absorbance at 260 and 280 nm) was also affected by cooking (P < 0.001). For all 3 genes, large PCR amplicons (product size >800 bp) were observed only when using DNA from raw meat and steak cooked to lower core temperatures. Small amplicons (<200 bp) were present for all core temperatures. Cooking meat to high temperatures thus resulted in a reduced overall yield and probable fragmentation of DNA to sizes less than 800 bp. Although nuclear DNA is preferable to mitochondrial DNA for food authentication, it is less abundant, and results suggest that analyses should be designed to use small amplicon sizes for meat cooked to high core temperatures.
Preparation of polyamide nanocapsules of Aloe vera L. delivery with in vivo studies.
Esmaeili, Akbar; Ebrahimzadeh, Maryam
2015-04-01
Aloe vera is the oldest medicinal plant ever known and the most widely applied medicinal plant worldwide. The purpose of this study was to prepare polyamide nanocapsules containing A. vera L. by an emulsion diffusion technique, with in vivo studies. Diethylenetriamine (DETA) was used as the encapsulating polymer, with acetone, ethyl acetate and dimethyl sulfoxide (DMSO) as the organic solvents and Tween and gelatin in water as the stabilizers. Sebacoyl chloride (SC) monomer, A. vera L. extract, and olive oil were mixed with the acetone, and then water containing the DETA monomer was added to the solution using a magnetic stirrer. Finally, the acetone was removed under vacuum, and nanocapsules were obtained using a freeze drier. This study showed that the size of the nanocapsules depends on a variety of factors, such as the ratio of polymer to oil, the concentration of polymers, and the plant extract. The first sample was prepared without surfactant, and the nanocapsule size in this sample was 115 nm; by adding surfactant, the nanocapsule size was reduced to 96 nm. Nanocapsules containing A. vera were administered to rats and the effects were compared with a normal control group; the results showed that the effect was greater in the A. vera group. The nanocapsules were characterized by scanning electron microscopy (SEM), a zeta potential sizer (ZPS), and Fourier-transform infrared spectroscopy (FT-IR).
Sample Preparation for Electron Probe Microanalysis—Pushing the Limits
Geller, Joseph D.; Engle, Paul D.
2002-01-01
There are two fundamental considerations in preparing samples for electron probe microanalysis (EPMA). The first one may seem obvious, but we often find it is overlooked. That is, the sample analyzed should be representative of the population from which it comes. The second is a direct result of the assumptions in the calculations used to convert x-ray intensity ratios, between the sample and standard, to concentrations. Samples originate from a wide range of sources. During their journey to being excited under the electron beam for the production of x rays there are many possibilities for sample alteration. Handling can contaminate samples by adding extraneous matter. In preparation, the various abrasives used in sizing the sample by sawing, grinding and polishing can embed themselves. The most accurate composition of a contaminated sample is, at best, not representative of the original sample; it is misleading. Our laboratory performs EPMA analysis on customer submitted samples and prepares over 250 different calibration standards including pure elements, compounds, alloys, glasses and minerals. This large variety of samples does not lend itself to mass production techniques, including automatic polishing. Our manual preparation techniques are designed individually for each sample. The use of automated preparation equipment does not lend itself to this environment, and is not included in this manuscript. The final step in quantitative electron probe microanalysis is the conversion of x-ray intensities ratios, known as the “k-ratios,” to composition (in mass fraction or atomic percent) and/or film thickness. Of the many assumptions made in the ZAF (where these letters stand for atomic number, absorption and fluorescence) corrections the localized geometry between the sample and electron beam, or takeoff angle, must be accurately known. Small angular errors can lead to significant errors in the final results. The sample preparation technique then becomes very important, and, under certain conditions, may even be the limiting factor in the analytical uncertainty budget. This paper considers preparing samples to get known geometries. It will not address the analysis of samples with irregular, unprepared surfaces or unknown geometries. PMID:27446757
NASA Astrophysics Data System (ADS)
Sivakumar, S.; Soundhirarajan, P.; Venkatesan, A.; Khatiwada, Chandra Prasad
2015-02-01
In the present study, we report the synthesis and characterization of pure and Co-doped BaSO4 nanoparticles with various doping concentrations, prepared by a chemical precipitation technique. X-ray diffraction (XRD) analysis showed that the synthesized products have an orthorhombic structure and are highly crystalline in nature. The average grain size of the samples was determined using the Debye-Scherrer equation. The existence of functional groups and the band areas of the samples were confirmed by Fourier transform infrared (FTIR) spectroscopy. The direct and indirect band gap energies of the pure and doped samples were obtained using UV-VIS-DRS. The surface micrographs, morphological distribution and elemental compositions of the synthesized products were assessed by scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS). Thermogravimetric and differential thermal analysis (TG-DTA) techniques were used to analyze the thermal behaviour of the pure and Co-doped samples. Finally, antibacterial tests showed that the pure and doped samples were active against both Gram-positive and Gram-negative bacteria, through effects on transporter, dehydrogenase and periplasmic enzymatic activities.
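As a worked illustration of the crystallite-size estimate, the Debye-Scherrer relation D = Kλ/(β cos θ) can be evaluated directly; the peak width and position below are assumed, not taken from the paper.

```python
# Worked example of the Debye-Scherrer estimate referenced above; the peak FWHM
# and 2-theta position are assumed values (Cu K-alpha radiation, K ~ 0.9).
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    beta = math.radians(fwhm_deg)                 # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

print(round(scherrer_size_nm(fwhm_deg=0.35, two_theta_deg=28.8), 1), "nm")
```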
Hermannsdörfer, Justus; de Jonge, Niels
2017-02-05
Samples fully embedded in liquid can be studied at a nanoscale spatial resolution with Scanning Transmission Electron Microscopy (STEM) using a microfluidic chamber assembled in the specimen holder for Transmission Electron Microscopy (TEM) and STEM. The microfluidic system consists of two silicon microchips supporting thin Silicon Nitride (SiN) membrane windows. This article describes the basic steps of sample loading and data acquisition. Most important of all is to ensure that the liquid compartment is correctly assembled, thus providing a thin liquid layer and a vacuum seal. This protocol also includes a number of tests necessary to perform during sample loading in order to ensure correct assembly. Once the sample is loaded in the electron microscope, the liquid thickness needs to be measured. Incorrect assembly may result in a too-thick liquid, while a too-thin liquid may indicate the absence of liquid, such as when a bubble is formed. Finally, the protocol explains how images are taken and how dynamic processes can be studied. A sample containing AuNPs is imaged both in pure water and in saline.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software based on the desired type I error, power and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
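The kind of calculation that underlies such tables can be sketched with the widely used normal-approximation (Buderer-type) formula; the sensitivity, specificity, prevalence and precision below are illustrative assumptions, and the paper's own tables were produced with PASS rather than with this code.

```python
import math
from scipy.stats import norm

def n_for_diagnostic_accuracy(se, sp, prevalence, precision=0.05, alpha=0.05):
    """Total sample size so that sensitivity and specificity are each estimated
    to within +/- precision (normal approximation, Buderer-type formula)."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * se * (1 - se) / precision**2        # cases needed
    n_nondiseased = z**2 * sp * (1 - sp) / precision**2     # non-cases needed
    total_for_se = n_diseased / prevalence
    total_for_sp = n_nondiseased / (1 - prevalence)
    return math.ceil(max(total_for_se, total_for_sp))

# Example: expected Se = 0.90, Sp = 0.85, disease prevalence 20%, +/-5% precision
print(n_for_diagnostic_accuracy(se=0.90, sp=0.85, prevalence=0.20))
```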
Rodriguez-Lazaro, David; Gonzalez-García, Patricia; Delibato, Elisabetta; De Medici, Dario; García-Gimeno, Rosa Maria; Valero, Antonio; Hernandez, Marta
2014-08-01
The microbiological standard for detection of Salmonella relies on several cultural steps and requires more than 5 days for final confirmation; as a consequence, there is a need for an alternative rapid methodology for its detection. The aim of this study was to compare different detection strategies based on real-time PCR for rapid and sensitive detection in a wide range of food products: raw pork and poultry meat, ready-to-eat lettuce salad and raw sheep milk cured cheese. Three main parameters were evaluated to reduce the time and cost for final results: the initial sample size (25 and 50 g), the incubation times (6, 10 and 18 h) and the bacterial DNA extraction (simple boiling of the culture after washing the bacterial pellet, the use of the Chelex resin, and a commercial silica column). The results obtained demonstrate that a combination of an incubation in buffered peptone water for 18 h of a 25 g sample coupled to DNA extraction by boiling and a real-time PCR assay detected down to 2-4 Salmonella spp. CFU per sample in less than 21 h in different types of food products. This RTi-PCR-based method is fully compatible with the ISO standard, providing results more rapidly and cost-effectively. The results were confirmed in a large number of naturally contaminated food samples with at least the same analytical performance as the reference method. Copyright © 2014 Elsevier B.V. All rights reserved.
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
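Replicating a reported calculation of this kind amounts to plugging the assumed values into a standard formula and comparing with the values actually observed; the sketch below does this for a continuous primary outcome analysed with a two-sided, two-sample comparison (the numbers are hypothetical, not taken from the reviewed trials).

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-arm, parallel-group
    superiority comparison of means (two-sided test)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# Hypothetical example: the protocol assumed a 10-point difference with SD 20,
# but the trial observed a 6-point difference with SD 24.
planned = n_per_group(delta=10, sd=20)
needed_for_observed = n_per_group(delta=6, sd=24)
print(f"planned n/group ~ {planned:.0f}; needed for the observed effect ~ {needed_for_observed:.0f}")
```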
The interrupted power law and the size of shadow banking.
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range, that is "interrupted" by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an "interrupted" Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate, which we propose as a shadow banking index, compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.
Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction
Friedman, Matt
2009-01-01
Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates—fishes—remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims. PMID:19276106
77 FR 53769 - Receipts-Based, Small Business Size Standard; Confirmation of Effective Date
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
...-Based, Small Business Size Standard; Confirmation of Effective Date AGENCY: Nuclear Regulatory Commission. ACTION: Direct final rule; confirmation of effective date. SUMMARY: The U.S. Nuclear Regulatory Commission (NRC) is confirming the effective date of August 22, 2012, for the direct final rule that appeared...
Chen, Feng; Suzuki, Yasuhiro; Nagai, Nobuo; Peeters, Ronald; Marchal, Guy; Ni, Yicheng
2005-01-30
The purpose of the present animal experiment was to determine whether source images from dynamic susceptibility contrast-enhanced perfusion-weighted imaging (DSC-PWI) at a 1.5T MR scanner, performed early after photochemically induced thrombosis (PIT) of the middle cerebral artery (MCA), can predict final cerebral infarct size in a rat stroke model. Fifteen rats were subjected to PIT of the proximal MCA. T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and contrast-enhanced PWI were obtained at 1 h and 24 h after MCA occlusion. The relative lesion size (RLS) was defined as lesion volume/brain volume x 100%, measured on the MR images, and compared with the final RLS on the gold-standard triphenyl tetrazolium chloride (TTC) staining at 24 h. One hour after MCA occlusion, the RLS with DSC-PWI was 24.9 +/- 6.3%, which was significantly larger than 17.6 +/- 4.8% with DWI (P < 0.01). At 24 h, the final RLS on TTC was 24.3 +/- 4.8%, which was comparable to 25.1 +/- 3.5%, 24.6 +/- 3.6% and 27.9 +/- 6.8% with T2WI, DWI and DSC-PWI, respectively (P > 0.05). The fact that, at 1 h after MCA occlusion, only the perfusion deficit displayed by DSC-PWI was similar to the final infarct size on TTC (P > 0.05) suggests that early source images from DSC-PWI at a 1.5T MR scanner can noninvasively predict the final infarct size in rat models of stroke.
Fabrication of Natural Uranium UO2 Disks (Phase II): Texas A&M Work for Others Summary Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerczak, Tyler J.; Baldwin, Charles A.; Schmidlin, Joshua E.
The steps to fabricate natural UO2 disks for an irradiation campaign led by Texas A&M University are outlined. The process was initiated with stoichiometry adjustment of the parent U3O8 powder. The next stage of sample preparation involved exploratory pellet pressing and sintering to achieve the desired natural UO2 pellet densities. Ideal densities were achieved through the use of a bimodal powder size blend. The steps involved with disk fabrication are also presented, describing the coring and thinning process executed to achieve the final dimensionality.
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
NASA Astrophysics Data System (ADS)
Guitar, María Agustina; Suárez, Sebastián; Prat, Orlando; Duarte Guigou, Martín; Gari, Valentina; Pereira, Gastón; Mücklich, Frank
2018-05-01
This work evaluates the effect of a destabilization treatment combined with a subcritical diffusion (SCD) step and a subsequent quenching (Q) step on the precipitation of secondary carbides and their influence on the wear properties of HCCI (16%Cr). The destabilization of the austenite at high temperature leads to a final microstructure composed of eutectic and secondary carbides of M7C3 nature embedded in a martensitic matrix. An improved wear resistance was observed in the SCD + Q samples in comparison with the Q-only samples, which was attributed to the size of the secondary carbides.
Collins, Gary S; Boutron, Isabelle; Yu, Ly-Mee; Cook, Jonathan; Shanyinde, Milensu; Wharton, Rose; Shamseer, Larissa; Altman, Douglas G
2014-01-01
Objective To investigate the effectiveness of open peer review as a mechanism to improve the reporting of randomised trials published in biomedical journals. Design Retrospective before and after study. Setting BioMed Central series medical journals. Sample 93 primary reports of randomised trials published in BMC-series medical journals in 2012. Main outcome measures Changes to the reporting of methodological aspects of randomised trials in manuscripts after peer review, based on the CONSORT checklist, corresponding peer reviewer reports, the type of changes requested, and the extent to which authors adhered to these requests. Results Of the 93 trial reports, 38% (n=35) did not describe the method of random sequence generation, 54% (n=50) concealment of allocation sequence, 50% (n=46) whether the study was blinded, 34% (n=32) the sample size calculation, 35% (n=33) specification of primary and secondary outcomes, 55% (n=51) results for the primary outcome, and 90% (n=84) details of the trial protocol. The number of changes between manuscript versions was relatively small; most involved adding new information or altering existing information. Most changes requested by peer reviewers had a positive impact on the reporting of the final manuscript—for example, adding or clarifying randomisation and blinding (n=27), sample size (n=15), primary and secondary outcomes (n=16), results for primary or secondary outcomes (n=14), and toning down conclusions to reflect the results (n=27). Some changes requested by peer reviewers, however, had a negative impact, such as adding additional unplanned analyses (n=15). Conclusion Peer reviewers fail to detect important deficiencies in reporting of the methods and results of randomised trials. The number of these changes requested by peer reviewers was relatively small. Although most had a positive impact, some were inappropriate and could have a negative impact on reporting in the final publication. PMID:24986891
The stability of self-organized 1-nonanethiol-capped gold nanoparticle monolayer
NASA Astrophysics Data System (ADS)
Jiang, Peng; Xie, Si-shen; Yao, Jian-nian; Pang, Shi-jin; Gao, Hong-jun
2001-08-01
1-Nonanethiol-protected gold nanoparticles with a size of about 2 nm have been prepared by a wet chemical method by choosing a suitable ratio of Au:S (2.5:1). Size-selective precipitation of the nanoparticles has been used to narrow their size distribution, which facilitates the formation of an ordered nanoparticle close-packed structure. A Fourier transform infrared investigation provides evidence of the encapsulation of the Au nanoparticles by 1-nonanethiol, while an ultraviolet-visible spectrum shows a broad absorption around 520 nm, corresponding to the surface plasmon band of Au nanoparticles. X-ray photoelectron spectroscopy of the samples demonstrates the metallic state of the gold (Au0) and the existence of sulfur (S). The data from x-ray powder diffraction measurements confirm that the gold nanoparticles have the same face-centred cubic crystalline structure as the bulk gold phase. Finally, transmission electron microscopy (TEM) characterization indicates that the size of the monodisperse colloidal gold nanoparticles is about 2 nm and that they can self-organize to form a two-dimensional hexagonal close-packed structure after evaporation of a concentrated drop of nanoparticle-toluene solution on a carbon-coated TEM copper grid.
Thyroid Nodule Size at Ultrasound as a Predictor of Malignancy and Final Pathologic Size.
Cavallo, Allison; Johnson, Daniel N; White, Michael G; Siddiqui, Saaduddin; Antic, Tatjana; Mathew, Melvy; Grogan, Raymon H; Angelos, Peter; Kaplan, Edwin L; Cipriani, Nicole A
2017-05-01
Thyroid-related mortality has remained constant despite the increasing incidence of thyroid carcinoma. Most thyroid nodules are benign; therefore, ultrasound and fine needle aspiration (FNA) are integral in cancer screening. We hypothesize that increased nodule size at ultrasound does not predict malignancy and correlation between nodule size at ultrasound and pathologic exam is good. Resected thyroids with preoperative ultrasounds were identified. Nodule size at ultrasound, FNA diagnosis by Bethesda category, size at pathologic examination, and final histologic diagnosis were recorded. Nodule characteristics at ultrasound and FNA diagnoses were correlated with gross characteristics and histologic diagnoses. Nodules for which correlation could not be established were excluded. Of 1003 nodules from 659 patients, 26% were malignant. Nodules <2 cm had the highest malignancy rate (∼30%). Risk was similar (∼20%) for nodules ≥2 cm. Of the 548 subject to FNA, 38% were malignant. Decreasing malignancy rates were observed with increasing size (57% for nodules <1 cm to 20% for nodules >6 cm). At ultrasound size cutoffs of 2, 3, 4, and 5 cm, smaller nodules had higher malignancy rates than larger nodules. Of the 455 not subject to FNA, 11% were malignant. Ultrasound size alone is a poor predictor of malignancy, but a relatively good predictor of final pathologic size (R² = 0.748), with less correlation at larger sizes. In nodules subject to FNA, false negative diagnoses were highest (6-8%) in nodules 3-6 cm, mostly due to encapsulated follicular variant of papillary carcinoma. Thyroid nodule size is inversely related to malignancy risk, as larger nodules have lower malignancy rates. However, the relationship of size to malignancy varies by FNA status. All nodules (regardless of FNA status) demonstrate a risk trough at ≥2 cm. Nodules subject to FNA show step-wise decline in malignancy rates by size, demonstrating that size alone should not be considered as an independent risk factor. Size at ultrasound shows relatively good correlation with final pathologic size. False negative rates are low in this series. Lesions with the appropriate constellation of clinical and radiographic findings should undergo FNA regardless of size. Both size and FNA diagnosis should influence the clinical decision-making process.
Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L
2011-01-01
Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
NASA Astrophysics Data System (ADS)
Courel, Maykel; Sanchez, T. G.; Mathews, N. R.; Mathew, X.
2018-03-01
In this work, the processing of Cu2ZnGeS4 (CZGS) thin films by a thermal evaporation technique starting from CuS, GeS and ZnS precursors, followed by post-deposition thermal processing, is discussed. Batches of films with GeS layers of varying thicknesses are deposited in order to study the influence of the Ge concentration on the structural, morphological, optical and electrical properties of the CZGS films. The formation of the CZGS compound with a tetragonal phase and a kesterite structure is confirmed for all samples using XRD and Raman studies. An improvement in crystallite size for Ge-poor films is also observed in the XRD analysis, which is in good agreement with the grain size observed in the cross-sectional SEM images. Furthermore, it is found that the band gap of the CZGS films can be tailored in the range of 2.0-2.23 eV by varying the Ge concentration. A comprehensive electrical characterization is also performed, which demonstrates that slightly Ge-poor samples exhibit the lowest grain-boundary defect densities and the highest photosensitivity and mobility values. A study of the work function of CZGS samples with different Ge concentrations is also presented. Finally, a theoretical evaluation is presented, considering, under ideal conditions, the possible impact of these films on device performance. Based on the characterization results, it is concluded that Ge-poor CZGS samples deposited by thermal evaporation present better physical properties for device applications.
Chen, Qixuan; Li, Jingguang
2014-05-01
Many recent studies have examined the association between number acuity, which is the ability to rapidly and non-symbolically estimate the quantity of items appearing in a scene, and symbolic math performance. However, various contradictory results have been reported. To comprehensively evaluate the association between number acuity and symbolic math performance, we conduct a meta-analysis to synthesize the results observed in previous studies. First, a meta-analysis of cross-sectional studies (36 samples, N = 4705) revealed a significant positive correlation between these skills (r = 0.20, 95% CI = [0.14, 0.26]); the association remained after considering other potential moderators (e.g., whether general cognitive abilities were controlled). Moreover, a meta-analysis of longitudinal studies revealed 1) that number acuity may prospectively predict later math performance (r = 0.24, 95% CI = [0.11, 0.37]; 6 samples) and 2) that number acuity is retrospectively correlated to early math performance as well (r = 0.17, 95% CI = [0.07, 0.26]; 5 samples). In summary, these pieces of evidence demonstrate a moderate but statistically significant association between number acuity and math performance. Based on the estimated effect sizes, power analyses were conducted, which suggested that many previous studies were underpowered due to small sample sizes. This may account for the disparity between findings in the literature, at least in part. Finally, the theoretical and practical implications of our meta-analytic findings are presented, and future research questions are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.
Designing Case-Control Studies: Decisions About the Controls
Hodge, Susan E.; Subaran, Ryan L.; Weissman, Myrna M.; Fyer, Abby J.
2014-01-01
The authors quantified, first, the effect of misclassified controls (i.e., individuals who are affected with the disease under study but who are classified as controls) on the ability of a case-control study to detect an association between a disease and a genetic marker, and second, the effect of leaving misclassified controls in the study, as opposed to removing them (thus decreasing sample size). The authors developed an informativeness measure of a study’s ability to identify real differences between cases and controls. They then examined this measure’s behavior when there are no misclassified controls, when there are misclassified controls, and when there were misclassified controls but they have been removed from the study. The results show that if, for example, 10% of controls are misclassified, the study’s informativeness is reduced to approximately 81% of what it would have been in a sample with no misclassified controls, whereas if these misclassified controls are removed from the study, the informativeness is only reduced to about 90%, despite the reduced sample size. If 25% are misclassified, those figures become approximately 56% and 75%, respectively. Thus, leaving the misclassified controls in the control sample is worse than removing them altogether. Finally, the authors illustrate how insufficient power is not necessarily circumvented by having an unlimited number of controls. The formulas provided by the authors enable investigators to make rational decisions about removing misclassified controls or leaving them in. PMID:22854929
The University of Texas Institute for Geophysics Marine Geology and Geophysics Field Course
NASA Astrophysics Data System (ADS)
Davis, M. B.; Gulick, S. P.; Allison, M. A.; Goff, J. A.; Duncan, D. D.; Saustrup, S.
2011-12-01
The University of Texas Institute for Geophysics, part of the Jackson School of Geosciences, annually offers an intensive three-week marine geology and geophysics field course during the spring-summer intersession. Now in year five, the course provides hands-on instruction and training for graduate and upper-level undergraduate students in data acquisition, processing, interpretation, and visualization. Techniques covered include high-resolution seismic reflection, CHIRP sub-bottom profiling, multibeam bathymetry, sidescan sonar, several types of sediment coring, grab sampling, and the sedimentology of resulting seabed samples (e.g., core description, grain size analysis, x-radiography, etc.). Students seek to understand coastal and sedimentary processes of the Gulf Coast and continental shelf through application of these techniques in an exploratory mode. Students participate in an initial three days of classroom instruction designed to communicate geological context of the field area (which changes each year) along with theoretical and technical background on each field method. The class then travels to the Gulf Coast for a week of at-sea field work. In the field, students rotate between two small research vessels: one vessel, the 22' aluminum-hulled R/V Lake Itasca, owned and operated by UTIG, is used principally for multibeam bathymetry, sidescan sonar, and sediment sampling; the other, NOAA's R/V Manta or the R/V Acadiana, operated by the Louisiana Universities Marine Consortium, is used primarily for high-resolution seismic reflection, CHIRP sub-bottom profiling, multibeam bathymetry, gravity coring, and vibracoring. While at sea, students assist with survey design, learn instrumentation set up, acquisition parameters, data quality control, and safe instrument deployment and retrieval. In teams of three, students work in onshore field labs preparing sediment samples for particle size analysis and initial data processing. During the course's final week, teams return to the classroom where they integrate, interpret, and visualize data in a final project using industry-standard software such as Focus, Landmark, Caris, and Fledermaus. The course concludes with a series of professional-level final presentations and discussions in which students examine geologic history and/or sedimentary processes represented by the Gulf Coast continental shelf. With course completion, students report a greater understanding of marine geology and geophysics via the course's intensive, hands-on, team approach and low instructor to student ratio. This course satisfies field experience requirements for some degree programs and thus provides a unique alternative to land-based field courses.
NASA Astrophysics Data System (ADS)
Degrendele, C.; Okonski, K.; Melymuk, L.; Landlová, L.; Kukučka, P.; Audy, O.; Kohoutek, J.; Čupr, P.; Klánová, J.
2016-02-01
This study presents a comparison of seasonal variation, gas-particle partitioning, and particle-phase size distribution of organochlorine pesticides (OCPs) and current-use pesticides (CUPs) in air. Two years (2012/2013) of weekly air samples were collected at a background site in the Czech Republic using a high-volume air sampler. To study the particle-phase size distribution, air samples were also collected at an urban and a rural site in the area of Brno, Czech Republic, using a cascade impactor separating atmospheric particulates into six size fractions. Major differences were found in the atmospheric distribution of OCPs and CUPs. The atmospheric concentrations of CUPs were driven by agricultural activities while secondary sources such as volatilization from surfaces governed the atmospheric concentrations of OCPs. Moreover, clear differences were observed in gas-particle partitioning; CUP partitioning was influenced by adsorption onto mineral surfaces while OCPs mainly partitioned to aerosols through absorption. A predictive method for estimating the gas-particle partitioning has been derived and is proposed for polar and non-polar pesticides. Finally, while OCPs and the majority of CUPs were largely found on fine particles, four CUPs (carbendazim, isoproturon, prochloraz, and terbuthylazine) had higher concentrations on coarse particles (> 3.0 µm), which may be related to the pesticide application technique. This finding is particularly important and should be further investigated given that large particles result in lower risks from inhalation (regardless of the toxicity of the pesticide) and lower potential for long-range atmospheric transport.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, representing 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. Increasing the sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
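A toy version of such a simulation is easy to set up (the effect size, normal distribution and number of replicates below are assumptions, not the authors' settings): two groups are drawn with and without a true shift, and the empirical Type I and Type II error of the two-sample t-test at p = 5% is tabulated across small sample sizes.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_rep, alpha, effect = 5000, 0.05, 1.0      # assumed effect of 1 SD

print(" n   Type I   Type II")
for n in (3, 5, 6, 9, 12):
    type1 = type2 = 0
    for _ in range(n_rep):
        control = rng.normal(0.0, 1.0, n)
        null_group = rng.normal(0.0, 1.0, n)        # exposure with no real effect
        effect_group = rng.normal(effect, 1.0, n)   # exposure with a true shift
        if ttest_ind(control, null_group).pvalue < alpha:
            type1 += 1                              # false positive
        if ttest_ind(control, effect_group).pvalue >= alpha:
            type2 += 1                              # missed effect
    print(f"{n:2d}   {type1 / n_rep:6.3f}   {type2 / n_rep:6.3f}")
```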
Image superresolution of cytology images using wavelet based patch search
NASA Astrophysics Data System (ADS)
Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo
2015-01-01
Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high- and low-frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high-resolution images into wavelets and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the original transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
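The bandwidth-saving step described here, transmitting only the low-frequency wavelet band and reconstructing an image of the original size, can be sketched with PyWavelets; the patch-search refinement and the final coefficient correction are omitted, and the random image below merely stands in for a cytology slide.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((256, 256))               # stand-in for a high-resolution image

# Sender side: 2-level wavelet decomposition, keep only the approximation band.
coeffs = pywt.wavedec2(image, wavelet="db2", level=2)
approx_only = [coeffs[0]] + [tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]]

# Receiver side: reconstruct at the original size from the transmitted
# low-frequency coefficients alone (details would later be filled in by patch search).
reconstruction = pywt.waverec2(approx_only, wavelet="db2")[:image.shape[0], :image.shape[1]]

kept = coeffs[0].size
print(f"coefficients transmitted: {kept} of {image.size} ({100 * kept / image.size:.1f}%)")
print(f"low-frequency reconstruction RMSE: {np.sqrt(np.mean((image - reconstruction) ** 2)):.4f}")
```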
Evaluation of Elevation, Slope and Stream Network Quality of SPOT Dems
NASA Astrophysics Data System (ADS)
El Hage, M.; Simonetto, E.; Faour, G.; Polidori, L.
2012-07-01
Digital elevation models are considered the most useful data for dealing with geomorphology. The quality of these models is an important issue for users. This quality concerns position and shape. Vertical accuracy is the most assessed in many studies and shape quality is often neglected. However, both of them have an impact on the quality of the final results for a particular application. For instance, the elevation accuracy is required for orthorectification and the shape quality for geomorphology and hydrology. In this study, we deal with photogrammetric DEMs and show the importance of the quality assessment of both elevation and shape. For this purpose, we produce several SPOT HRV DEMs with the same dataset but with different template size, that is one of the production parameters from optical images. Then, we evaluate both elevation and shape quality. The shape quality is assessed with in situ measurements and analysis of slopes as an elementary shape and stream networks as a complex shape. We use the fractal dimension and sinuosity to evaluate the stream network shape. The results show that the elevation accuracy as well as the slope accuracy are affected by the template size. Indeed, an improvement of 1 m in the elevation accuracy and of 5 degrees in the slope accuracy has been obtained while changing this parameter. The elevation RMSE ranges from 7.6 to 8.6 m, which is smaller than the pixel size (10 m). For slope, the RMSE depends on the sampling distance. With a distance of 10 m, the minimum slope RMSE is 11.4 degrees. The stream networks extracted from these DEMs present a higher fractal dimension than the reference river. Moreover, the fractal dimension of the extracted networks has a negligible change according to the template size. Finally, the sinuosity of the stream networks is slightly affected by the change of the template size.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤ 250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method highlighted the prior expectation that, when SRR is not known, sample sizes are increased relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
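The logic of the first calculator can be illustrated under a reversible catalytic model evaluated at a single representative age (the paper itself works with the full age distribution, so this is only a simplified sketch, and the SCR, SRR and age values below are arbitrary): a Wald confidence interval for SP is computed for a given sample size and then mapped to an interval for SCR by numerically inverting the model.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def seroprevalence(scr, srr, age):
    """Reversible catalytic model: expected seroprevalence at a given age."""
    k = scr + srr
    return scr / k * (1.0 - np.exp(-k * age))

def scr_from_sp(sp, srr, age):
    """Numerically invert the catalytic model for the SCR (lambda)."""
    return brentq(lambda lam: seroprevalence(lam, srr, age) - sp, 1e-8, 10.0)

def scr_interval(scr, srr, age, n, alpha=0.05):
    """Approximate CI for the SCR obtained by transforming a Wald CI for SP."""
    sp = seroprevalence(scr, srr, age)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(sp * (1 - sp) / n)
    lo, hi = max(sp - half, 1e-6), min(sp + half, 1 - 1e-6)
    return scr_from_sp(lo, srr, age), scr_from_sp(hi, srr, age)

# Illustrative values only: SCR = 0.05/yr, SRR = 0.01/yr, representative age 20 yr
for n in (100, 250, 500, 1000):
    lo, hi = scr_interval(scr=0.05, srr=0.01, age=20, n=n)
    print(f"n = {n:4d}: approximate 95% CI for SCR = ({lo:.4f}, {hi:.4f})")
```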
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Unlike a common RGB image, a hyperspectral image has many bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms state-of-the-art SRC.
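The SRC step itself is straightforward to sketch: a test spectrum is assigned to the class whose training spectra reconstruct it best under a sparsity constraint. The example below uses orthogonal matching pursuit as the sparse solver and synthetic two-class spectra, since the medical hyperspectral data are not available; the super-pixel segmentation and multi-scale fusion stages are not shown.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(D, labels, x, n_nonzero=10):
    """Sparse representation-based classification of one test sample.
    D: (n_bands, n_train) dictionary of training spectra (as columns);
    labels: (n_train,) class of each column; x: (n_bands,) test spectrum."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)
    alpha = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        a_c = np.where(labels == c, alpha, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ a_c)
    return min(residuals, key=residuals.get)      # class with the smallest residual

# Tiny synthetic demo: two classes with different mean spectra plus noise
rng = np.random.default_rng(0)
n_bands, n_per_class = 60, 40
mean0, mean1 = np.linspace(0, 1, n_bands), np.linspace(1, 0, n_bands)
class0 = mean0[:, None] + rng.normal(0.0, 0.3, (n_bands, n_per_class))
class1 = mean1[:, None] + rng.normal(0.0, 0.3, (n_bands, n_per_class))
D = np.hstack([class0, class1])
labels = np.array([0] * n_per_class + [1] * n_per_class)
test = mean0 + rng.normal(0.0, 0.3, n_bands)
print("predicted class:", src_predict(D, labels, test))
```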
A novel property of gold nanoparticles: Free radical generation under microwave irradiation.
Paudel, Nava Raj; Shvydka, Diana; Parsai, E Ishmael
2016-04-01
Gold nanoparticles (GNPs) are known to be effective mediators in microwave hyperthermia. Interaction with an electromagnetic field, large surface to volume ratio, and size quantization of nanoparticles (NPs) can lead to increased cell killing beyond pure heating effects. The purpose of this study is to explore the possibility of free radical generation by GNPs in aqueous media when they are exposed to a microwave field. A number of samples with 500 mM 5,5-dimethyl-1-pyrroline N-oxide (DMPO) in 20 ppm GNP colloidal suspensions were scanned with an electron paramagnetic resonance (EPR)/electron spin resonance spectrometer to generate and detect free radicals. A fixed (9.68 GHz) frequency microwave from the spectrometer has served for both generation and detection of radicals. EPR spectra obtained as first derivatives of intensity with the spectrometer were double integrated to get the free radical signal intensities. Power dependence of radical intensity was studied by applying various levels of microwave power (12.5, 49.7, and 125 mW) while keeping all other scan parameters the same. Free radical signal intensities from initial and final scans, acquired at the same power levels, were compared. Hydroxyl radical (OH⋅) signal was found to be generated due to the exposure of GNP-DMPO colloidal samples to a microwave field. Intensity of OH⋅ signal thus generated at 12.5 mW microwave power for 2.8 min was close to the intensity of OH⋅ signal obtained from a water-DMPO sample exposed to 1.5 Gy ionizing radiation dose. For repeated scans, higher OH⋅ intensities were observed in the final scan for higher power levels applied between the initial and the final scans. Final intensities were higher also for a shorter time interval between the initial and the final scans. Our results observed for the first time demonstrate that GNPs generate OH⋅ radicals in aqueous media when they are exposed to a microwave field. If OH⋅ radicals can be generated close to deoxyribonucleic acid of cells by proper localization of NPs, NP-aided microwave hyperthermia can yield cell killing via both elevated temperature and free radical generation.
The Association between Penis Size and Sexual Health among Men Who Have Sex with Men
Grov, Christian; Parsons, Jeffrey T.; Bimbi, David S.
2010-01-01
Larger penis size has been equated with a symbol of power, stamina, masculinity, and social status. Yet, there has been little research among men who have sex with men assessing the association between penis size and social-sexual health. Survey data from a diverse sample of 1,065 men who have sex with men were used to explore the association between perceived penis size and a variety of psychosocial outcomes. Seven percent of men felt their penis was “below average,” 53.9% “average,” and 35.5% “above average.” Penis size was positively related to satisfaction with size and inversely related to lying about penis size (all p < .01). Size was unrelated to condom use, frequency of sex partners, HIV status, or recent diagnoses of HBV, HCV, gonorrhea/Chlamydia/urinary tract infections, and syphilis. Men with above average penises were more likely to report HPV and HSV-2 (Fisher’s exact p ≤ .05). Men with below average penises were significantly more likely to identify as “bottoms” (anal receptive) and men with above average penises were significantly more likely to identify as tops (anal insertive). Finally, men with below average penises fared significantly worse than other men on three measures of psychosocial adjustment. Though most men felt their penis size was average, many fell outside this “norm.” The disproportionate number of viral skin-to-skin STIs (HSV-2 and HPV) suggests size may play a role in condom slippage/breakage. Further, size played a significant role in sexual positioning and psychosocial adjustment. These data highlight the need to better understand the real individual-level consequences of living in a penis-centered society. PMID:19139986
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
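The arithmetic behind the multiplier estimate and its precision can be sketched as follows; the delta-method variance with an assumed design effect mimics the respondent-driven sampling context, and the counts used in the example are illustrative rather than figures from the Harare study.

```python
from math import ceil, sqrt
from scipy.stats import norm

def multiplier_estimate(M, p_hat, n, design_effect=2.0, alpha=0.05):
    """Multiplier-method size estimate N = M / P with a delta-method CI.
    M: unique objects or service contacts distributed; p_hat: proportion of the
    survey reporting receipt; n: survey sample size."""
    z = norm.ppf(1 - alpha / 2)
    N_hat = M / p_hat
    se_p = sqrt(design_effect * p_hat * (1 - p_hat) / n)
    se_N = M * se_p / p_hat**2                      # delta method for M / P
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

def n_for_relative_precision(p_hat, rel_precision=0.25, design_effect=2.0, alpha=0.05):
    """Survey size needed so the CI half-width is rel_precision * N_hat."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(z**2 * design_effect * (1 - p_hat) / (rel_precision**2 * p_hat))

# Illustrative: 800 unique objects distributed; 20% of a 400-person survey report one
print(multiplier_estimate(M=800, p_hat=0.20, n=400))
print(n_for_relative_precision(p_hat=0.20))
```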
Decision tree methods: applications for classification and prediction.
Song, Yan-Yan; Lu, Ying
2015-04-25
Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, the study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
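A minimal scikit-learn sketch of the training/validation split described above: the training set grows the tree and the validation set selects the tree size, here via CART-style cost-complexity pruning (one of the algorithms the paper covers); the synthetic data are only for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Grow the tree on the training data, then score every pruning level
# (candidate tree size) on the validation data and keep the best one.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
best_alpha, best_score = 0.0, -np.inf
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = tree.score(X_val, y_val)
    if score > best_score:
        best_alpha, best_score = alpha, score

final_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=best_alpha).fit(X_train, y_train)
print(f"chosen ccp_alpha = {best_alpha:.5f}, leaves = {final_tree.get_n_leaves()}, "
      f"validation accuracy = {best_score:.3f}")
```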
Petroselli, Andrea; Giannotti, Maurizio; Marras, Tatiana; Allegrini, Elena
2017-06-03
In dry regions, water resources have become increasingly limited, and the use of alternative sources is considered one of the main strategies in sustainable water management. A highly viable alternative to commonly used water resources is treated municipal wastewater, which could strongly benefit from advanced and low-cost techniques for depuration, such as the integrated system of phytodepuration (ISP). The current manuscript investigates four Italian case studies with different sizes and characteristics. The raw wastewaters and final effluents were sampled on a monthly basis over a period of up to five years, allowing the quantification of the ISP performances. The results obtained show that the investigated plants are characterized by an average efficiency value of approximately 83% for chemical oxygen demand removal, 84% for biochemical oxygen demand, 89% for total nitrogen, 91% for total phosphorus, and 85% for total suspended solids. Moreover, for three of the case studies, the ISP final effluent is suitable for irrigation, and in the fourth case study, the final effluent can be released in surface water.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
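The authors define relative efficiency through noncentrality parameters; as a point of comparison rather than their exact method, the sketch below uses the more common design-effect adjustment for variable cluster sizes, in which the coefficient of variation of cluster size inflates the usual cluster-trial design effect (the effect size, ICC and cluster-size figures are illustrative).

```python
from math import ceil
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for an individually randomized two-arm trial."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def cluster_trial_size(delta, sd, icc, mean_m, cv_m=0.0, alpha=0.05, power=0.80):
    """Per-arm totals after inflating by a design effect that penalizes variable
    cluster sizes through their coefficient of variation (cv_m)."""
    deff = 1 + ((cv_m**2 + 1) * mean_m - 1) * icc
    n = n_individual(delta, sd, alpha, power) * deff
    return ceil(n), ceil(n / mean_m)        # (individuals per arm, clusters per arm)

# Equal versus unequal cluster sizes (mean 20, CV 0.6), ICC = 0.05
print("equal:  ", cluster_trial_size(delta=0.4, sd=1.0, icc=0.05, mean_m=20, cv_m=0.0))
print("unequal:", cluster_trial_size(delta=0.4, sd=1.0, icc=0.05, mean_m=20, cv_m=0.6))
```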
Jung, Minsoo
2015-01-01
When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methodology because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of earlier methods such as snowball sampling is respondent-driven sampling (RDS), developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent, a characteristic that allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this study, the dependence of the RDS chain-referral sample on the initial seeds tends to diminish as the sample gets bigger and stabilizes as the waves progress. This shows that the final sample can be effectively independent of the initial seeds if a sufficient sample size is secured, even when the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it should be applied to a wider range of cases domestically as well.
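A toy version of the kind of simulation described above can be run on a synthetic social network: referral chains are started from deliberately different convenience seeds and the trait composition of the accumulated sample is tracked wave by wave. The network model (Watts-Strogatz), the 30% trait prevalence, the coupon count, and the wave limit are illustrative assumptions, not parameters from the study.

```python
import random
import networkx as nx

def rds_chain(graph, trait, seeds, coupons=3, waves=6):
    """Simulate chain-referral recruitment: each recruit passes up to
    `coupons` coupons to randomly chosen, not-yet-recruited neighbours.
    Returns the trait proportion among all recruits after each wave."""
    recruited = set(seeds)
    frontier = list(seeds)
    proportions = []
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            nbrs = [n for n in graph.neighbors(person) if n not in recruited]
            random.shuffle(nbrs)
            for n in nbrs[:coupons]:
                recruited.add(n)
                next_frontier.append(n)
        frontier = next_frontier
        proportions.append(sum(trait[i] for i in recruited) / len(recruited))
        if not frontier:
            break
    return proportions

if __name__ == "__main__":
    random.seed(1)
    g = nx.watts_strogatz_graph(n=5000, k=10, p=0.1)    # toy social network
    trait = {i: int(random.random() < 0.3) for i in g}   # 30% carry the trait
    # two deliberately different convenience-sample seed sets
    seeds_without = [i for i in g if trait[i] == 0][:5]
    seeds_with = [i for i in g if trait[i] == 1][:5]
    print("seeds without trait:", [round(p, 3) for p in rds_chain(g, trait, seeds_without)])
    print("seeds with trait:   ", [round(p, 3) for p in rds_chain(g, trait, seeds_with)])
```

If the two chains converge to similar trait proportions as waves accumulate, that mirrors the seed-independence property claimed in the abstract.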
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes and so on). Optimization methods have also been widely applied in sheet metal forming. Therefore, proper design methods to reduce time and costs have to be developed, mostly based on computer-aided procedures. At the same time, the existence of variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping processes. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate the direction in which additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of applying the proposed method to multi-response robust design.
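The metamodel-plus-adaptive-sampling loop can be sketched with an off-the-shelf Gaussian-process (Kriging) implementation. Since the LS-DYNA stamping model is not available here, a cheap analytic function stands in for the simulator, and the enrichment criterion is plain maximum prediction variance rather than the improved criterion mentioned in the abstract; the parameter names and ranges are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def forming_response(x):
    """Cheap analytic stand-in for the stamping simulator:
    x[:, 0] ~ normalized blank-holder force, x[:, 1] ~ friction coefficient."""
    return np.sin(3 * x[:, 0]) + 0.5 * (x[:, 1] - 0.4) ** 2 + 0.1 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))              # small initial design of experiments
y = forming_response(X)
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
candidates = rng.uniform(0, 1, size=(2000, 2))  # candidate pool for enrichment

for _ in range(12):                             # adaptive enrichment loop
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]          # add the most uncertain point
    X = np.vstack([X, x_new])
    y = np.append(y, forming_response(x_new[None, :]))

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
test = rng.uniform(0, 1, size=(500, 2))
err = np.max(np.abs(gp.predict(test) - forming_response(test)))
print(f"final design size: {len(X)}, max abs metamodel error: {err:.3f}")
```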
A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming
Chen, Chun-Chi; Lin, Shih-Hao
2013-01-01
This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm2 in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ± 0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403
NASA Astrophysics Data System (ADS)
Rodríguez-Valentino, Camilo; Landaeta, Mauricio F.; Castillo-Hidalgo, Gissella; Bustos, Claudia A.; Plaza, Guido; Ojeda, F. Patricio
2015-09-01
The interannual variation (2010-2013) of larval abundance, growth and hatching patterns of the Chilean sand stargazer Sindoscopus australis (Pisces: Dactyloscopidae) was investigated through otolith microstructure analysis from samples collected nearshore (<500 m from shore) during austral late winter-early spring off El Quisco bay, central Chile. In the studied period, the abundance of larval stages in the plankton samples varied from 2.2 to 259.3 ind. 1000 m-3; larval abundance was similar between 2010 and 2011, and between 2012 and 2013, but increased significantly from 2011 to 2012. The estimated growth rates more than doubled, from 0.09 to 0.21 mm day-1, between 2011 and 2013. Additionally, otolith size (radius, perimeter and area), related to body length of larvae, decreased significantly from 2010 to 2012, but increased significantly in 2013. Although the mean values of microincrement widths of sagitta otoliths were similar between 2010 and 2011 (around 0.6-0.7 μm), the interindividual variability increased in 2011 and 2013, suggesting large environmental variability experienced by larvae during these years. Finally, the hatching pattern of S. australis changed significantly from a semi-lunar to a lunar cycle after 2012.
Nonresponse patterns in the Federal Waterfowl Hunter Questionnaire Survey
Pendleton, G.W.
1992-01-01
I analyzed data from the 1984 and 1986 Federal Waterfowl Hunter Questionnaire Survey (WHQS) to estimate the rate of return of name and address contact cards, to evaluate the efficiency of the Survey's stratification scheme, and to investigate potential sources of bias due to nonresponse at the contact card and questionnaire stages of the Survey. Median response at the contact card stage was 0.200 in 1984 and 0.208 in 1986, but was lower than 0.100 for many sample post offices. Large portions of the intended sample contributed little to the final estimates in the Survey. Differences in response characteristics between post office size strata were detected, but size strata were confounded with contact card return rates; differences among geographic zones within states were more pronounced. Large biases in harvest and hunter activity due to nonresponse were not found; however, consistent smaller magnitude biases were found. Bias in estimates of the proportion of active hunters was the most pronounced effect of nonresponse. All of the sources of bias detected would produce overestimates of harvest and activity. Redesigning the WHQS, including use of a complete list of waterfowl hunters and resampling nonrespondents, would be needed to reduce nonresponse bias.
Porous silicon nanocrystals in a silica aerogel matrix.
Amonkosolpan, Jamaree; Wolverson, Daniel; Goller, Bernhard; Polisski, Sergej; Kovalev, Dmitry; Rollings, Matthew; Grogan, Michael D W; Birks, Timothy A
2012-07-17
Silicon nanoparticles of three types (oxide-terminated silicon nanospheres, micron-sized hydrogen-terminated porous silicon grains and micron-size oxide-terminated porous silicon grains) were incorporated into silica aerogels at the gel preparation stage. Samples with a wide range of concentrations were prepared, resulting in aerogels that were translucent (but weakly coloured) through to completely opaque for visible light over sample thicknesses of several millimetres. The photoluminescence of these composite materials and of silica aerogel without silicon inclusions was studied in vacuum and in the presence of molecular oxygen in order to determine whether there is any evidence for non-radiative energy transfer from the silicon triplet exciton state to molecular oxygen adsorbed at the silicon surface. No sensitivity to oxygen was observed from the nanoparticles which had partially H-terminated surfaces before incorporation, and so we conclude that the silicon surface has become substantially oxidised. Finally, the FTIR and Raman scattering spectra of the composites were studied in order to establish the presence of crystalline silicon; by taking the ratio of intensities of the silicon and aerogel Raman bands, we were able to obtain a quantitative measure of the silicon nanoparticle concentration independent of the degree of optical attenuation.
Effect of Body Mass Index on Digital Templating for Total Hip Arthroplasty.
Sershon, Robert A; Diaz, Alejandro; Bohl, Daniel D; Levine, Brett R
2017-03-01
Digital templating is becoming more prevalent in orthopedics. Recent investigations report high accuracy using digital templating in total hip arthroplasty (THA); however, the effect of body mass index (BMI) on templating accuracy is not well described. Digital radiographs of 603 consecutive patients (645 hips) undergoing primary THA by a single surgeon were digitally templated using OrthoView (Jacksonville, FL). A 25-mm metallic sphere was used as a calibration marker. Preoperative digital hip templates were compared with the final implant size. Hips were stratified into groups based on BMI: BMI <30 (315), BMI 30-35 (132), BMI 35-40 (97), and BMI >40 (101). Accuracy between templating and final size did not vary by BMI for acetabular or femoral components. Digital templating was within 2 sizes of the final acetabular and femoral implants in 99.1% and 97.1% of cases, respectively. Digital templating is an effective means of predicting the final size of THA components. BMI does not appear to play a major role in altering THA digital templating accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
Structural differences in enamel and dentin in human, bovine, porcine, and ovine teeth.
Ortiz-Ruiz, Antonio José; Teruel-Fernández, Juan de Dios; Alcolea-Rubio, Luis Alberto; Hernández-Fernández, Ana; Martínez-Beneyto, Yolanda; Gispert-Guirado, Francesc
2018-07-01
The aim was to study differences between crystalline nanostructures from the enamel and dentin of human, bovine, porcine, and ovine species. Dentin and enamel fragments extracted from sound human, bovine, porcine and ovine incisors and molars were mechanically ground to a final particle size of <100 μm. Samples were analyzed using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and differential scanning calorimetry (DSC). Human enamel (HE) and dentin (HD) showed a-axis and c-axis lengths of the carbonate apatite (CAP) crystal lattice nearer to synthetic hydroxyapatite (SHA), which had the smallest size. Enamel crystal sizes were always higher than those of dentin for all species. HE and HD had the largest crystals, followed by bovine samples. Hydroxyapatites (HAs) in enamel had a higher crystallinity index (CI), both CI Rietveld and CI FTIR, than the corresponding dentin of the same species. HE and HD had the highest CIs, followed by ovine enamel (OE). The changes in heat capacity during the glass transition (ΔCp) that were nearest to values in human teeth were found in porcine specimens. There was a significant direct correlation between the size of the a-axis and the substitution by both type A and type B carbonates. The size of the nanocrystals and the crystallinity (CI Rietveld and CI FTIR) were significantly and negatively correlated with the protein phase of all the substrates. There was a strong positive correlation between the heat capacity, the CIs and the crystal size, and a strong negative correlation between type A and B carbonates and proteins. There are differences in the organic and inorganic content of human, bovine, porcine and ovine enamels and dentins which should be taken into account when interpreting the results of studies using animal substrates as substitutes for human material. Copyright © 2018 Elsevier GmbH. All rights reserved.
Bunnell, David B.; Hale, R. Scott; Vanni, Michael J.; Stein, Roy A.
2006-01-01
Stock-recruit models typically use only spawning stock size as a predictor of recruitment to a fishery. In this paper, however, we used spawning stock size as well as larval density and key environmental variables to predict recruitment of white crappies Pomoxis annularis and black crappies P. nigromaculatus, a genus notorious for variable recruitment. We sampled adults and recruits from 11 Ohio reservoirs and larvae from 9 reservoirs during 1998-2001. We sampled chlorophyll as an index of reservoir productivity and obtained daily estimates of water elevation to determine the impact of hydrology on recruitment. Akaike's information criterion (AIC) revealed that Ricker and Beverton-Holt stock-recruit models that included chlorophyll best explained the variation in larval density and age-2 recruits. Specifically, spawning stock catch per effort (CPE) and chlorophyll explained 63-64% of the variation in larval density. In turn, larval density and chlorophyll explained 43-49% of the variation in age-2 recruit CPE. Finally, spawning stock CPE and chlorophyll were the best predictors of recruit CPE (i.e., 74-86%). Although larval density and recruitment increased with chlorophyll, neither was related to seasonal water elevation. Also, the AIC generally did not distinguish between Ricker and Beverton-Holt models. From these relationships, we concluded that crappie recruitment can be limited by spawning stock CPE and larval production when spawning stock sizes are low (i.e., CPE < 5 crappies/net-night). At higher levels of spawning stock sizes, spawning stock CPE and recruitment were less clearly related. To predict recruitment in Ohio reservoirs, managers should assess spawning stock CPE with trap nets and estimate chlorophyll concentrations. To increase crappie recruitment in reservoirs where recruitment is consistently poor, managers should use regulations to increase spawning stock size, which, in turn, should increase larval production and recruits to the fishery.
Wong, Joyce; Weber, Jill; Centeno, Barbara A; Vignesh, Shivakumar; Harris, Cynthia L; Klapman, Jason B; Hodul, Pamela
2013-01-01
Surgical resection for intraductal papillary mucinous neoplasm (IPMN) of the pancreas has increased over the last decade. While IPMN with main duct communication are generally recommended for resection, indications for resection of side-branch IPMN (SDIPMN) have been less clear. We reviewed our single institutional experience with SDIPMN and indications for resection. Patients who underwent resection for IPMN were identified from a prospectively maintained IRB-approved database. Patients with main pancreatic duct communication were excluded. Outcome, clinical and pathologic characteristics were correlated with endoscopic ultrasound (EUS) findings. From 2000 to 2010, 105 patients who underwent preoperative EUS evaluation and resection for SDIPMN were identified. The mean age was within the sixth decade of life, and there was a slight female predominance (55 vs. 45 %). The most common presenting symptom was abdominal pain (N = 47, 45 %), followed by jaundice (N = 24, 23 %) and weight loss (N = 24, 23 %). Only ten patients (10 %) were asymptomatic at presentation; seven (70 %) had suspicious features on EUS. Of the total cohort, few patients had intracystic septations (N = 27, 26 %) or presence of mural nodules (N = 2, 2 %) on EUS. Of 39 patients who had invasive pancreatic ductal adenocarcinoma (PDAC) on final pathology, EUS-fine needle aspiration (EUS-FNA) demonstrated malignancy in only 21 (54 %). An additional seven (18 %) had EUS-FNA findings of atypia or concern for mucinous neoplasm. EUS evaluation of cyst size was correlated with final pathology. Of 70 patients with EUS cyst size <3 cm, 12 (17 %) had a preoperative EUS diagnosis of malignancy. Final pathology revealed 24 (34 %) to have PDAC: 1 of 7 (14 %) patients with cyst size <1 cm, 2 of 19 (11 %) with cyst size 1-2 cm, and 21 of 44 (48 %) with cyst size 2-3 cm. Fifteen of 35 (43 %) patients with cyst size >3 cm had PDAC on final pathology. Of the patients with cyst size <3 cm, 16 (23 %) had high-grade dysplasia on final pathology: 3 of 7 (43 %) with cyst size <1 cm, 3 of 19 (16 %) with cyst size 1-2 cm, and 10 of 44 (23 %) with cyst size 2-3 cm. Seven of 35 (20 %) patients with cyst size >3 cm had high-grade dysplasia on final pathology. Although overall survival (OS) at 48 months stratified by EUS cyst size did not significantly differ between groups, patients with PDAC on final pathology had significantly worse OS compared to noninvasive pathology. A total of eight patients (8 %) developed recurrent disease, all of whom had PDAC on final pathology. EUS is a helpful modality for the diagnostic evaluation of SDIPMN. Considering the high incidence of malignancy as well as high-grade dysplasia in SDIPMN greater than 2 cm, EUS features should be used in conjunction with other clinical criteria to guide management decisions. Patients with SDIPMN greater than 2 cm that do not undergo surgical resection may benefit from more intensive surveillance.
Rodrigues, Renata Costa Val; Zandi, Homan; Kristoffersen, Anne Karin; Enersen, Morten; Mdala, Ibrahimu; Ørstavik, Dag; Rôças, Isabela N; Siqueira, José F
2017-07-01
This clinical study evaluated the influence of the apical preparation size using nickel-titanium rotary instrumentation and the effect of a disinfectant on bacterial reduction in root canal-treated teeth with apical periodontitis. Forty-three teeth with posttreatment apical periodontitis were selected for retreatment. Teeth were randomly divided into 2 groups according to the irrigant used (2.5% sodium hypochlorite [NaOCl], n = 22; saline, n = 21). Canals were prepared with the Twisted File Adaptive (TFA) system (SybronEndo, Orange, CA). Bacteriological samples were taken before preparation (S1), after using the first instrument (S2), and then after the third instrument of the TFA system (S3). In the saline group, an additional sample was taken after final irrigation with 1% NaOCl (S4). DNA was extracted from the clinical samples and subjected to quantitative real-time polymerase chain reaction to evaluate the levels of total bacteria and streptococci. S1 from all teeth were positive for bacteria. Preparation to the first and third instruments from the TFA system showed a highly significant intracanal bacterial reduction regardless of the irrigant (P < .01). Apical enlargement to the third instrument caused a significantly higher decrease in bacterial counts than the first instrument (P < .01). Intergroup comparison revealed no significant difference between NaOCl and saline after the first instrument (P > .05). NaOCl was significantly better than saline after using the largest instrument in the series (P < .01). Irrespective of the type of irrigant, an increase in the apical preparation size significantly enhanced root canal disinfection. The disinfecting benefit of NaOCl over saline was significant at large apical preparation sizes. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
The FASES instrument development and experiment preparation for the ISS
NASA Astrophysics Data System (ADS)
Picker, Gerold; Gollinger, Klaus; Greger, Ralf; Dettmann, Jan; Winter, Josef; Dewandre, Thierry; Castiglione, Luigi; Vincent-Bonnieu, Sebastien; Liggieri, Libero; Clausse, Daniele; Antoni, Mickael
The FASES experiments target the investigation of the stability of emulsions. The main objectives are the study of the surfactant adsorption at the liquid/liquid interfaces, the interaction of the droplets, as well as the behaviour of the liquid film between nearby drops. Particular focus is given to the dynamic droplet evolution during emulsion destabilisation. The results of the experiments shall support development of methods for the modelling of droplet size distributions, which are important to many industries using stable emulsions, like food production, cosmetics and pharmaceutics, or unstable emulsions, as required for applications in waste water treatment or crude oil recovery. The development of the experimental instrumentation was initiated in 2002. The flight instrument hardware development was started in 2004 and finally the flight unit was completed in 2009. Currently the final flight preparation is proceeding, targeting a launch to the International Space Station (ISS) with Progress 39P in September 2010. The experiment setup of the instrument is accommodated in a box-type insert called Experiment Container (EC), which will be installed in the Fluid Science Laboratory part of the European Columbus module of the ISS. The EC is composed of two diagnostic instruments for the investigation of transparent and opaque liquid emulsions. The transparent emulsions will be subject to the experiment called "Investigations on drop/drop interactions in Transparent Emulsions" (ITEM). The opaque emulsion samples will be studied in the experiment called "Investigations on concentrated or opaque Emulsions and on Phase Inversions" (EMPI). The thermal conditioning unit (TCU) allows performing homogeneous thermalization, temperature sweeps, emulsion preparation by stirrer, and optical diagnostics with a scanning microscope. The objective of the instrument is the 3D reconstruction of the emulsion droplet distribution in the liquid matrix in terms of the droplet sizes, locations and their time-dependent evolution. The TCU will be used for the stability experiment ITEM-S and the droplet freezing experiment ITEM-F. The Differential Scanning Calorimeter (DSC) will give information about the evolution of the emulsion through the droplet size distribution and the dispersion state of the droplets within the emulsion during a controlled temperature sweep, by measuring the latent heat of droplet freezing and melting during the EMPI experiments. For this purpose the calorimeter is equipped with a reference sample filled with a pure liquid matrix and a similar measurement sample filled with the specific emulsion under investigation. The differential heat flux between measurement sample and reference sample is measured with a sensitive heat flux sensor. Each instrument is serviced by a robotic sample stowage system, which accommodates in total 44 different ITEM and EMPI emulsion samples, each filled with a specific composition of the emulsion. Currently the flight preparation is ongoing with particular focus on the preparation of the emulsion flight sample set and the instrument's operating parameters. The FASES flight instrument was developed by ASTRIUM Space Transportation Germany with support of RUAG Aerospace Wallisellen under ESA/ESTEC contract. The science team of FASES is supported by ESA/ESTEC (Microgravity Application Programme, AO99-052).
Opsahl, Stephen P.; Crow, Cassi L.
2014-01-01
During collection of streambed-sediment samples, additional samples from a subset of three sites (the SAR Elmendorf, SAR 72, and SAR McFaddin sites) were processed by using a 63-µm sieve on one aliquot and a 2-mm sieve on a second aliquot for PAH and n-alkane analyses. The purpose of analyzing PAHs and n-alkanes on a sample containing sand, silt, and clay versus a sample containing only silt and clay was to provide data that could be used to determine if these organic constituents had a greater affinity for silt- and clay-sized particles relative to sand-sized particles. The greater concentrations of PAHs in the <63-μm size-fraction samples at all three of these sites are consistent with a greater percentage of binding sites associated with fine-grained (<63 μm) sediment versus coarse-grained (<2 mm) sediment. The larger difference in total PAHs between the <2-mm and <63-μm size-fraction samples at the SAR Elmendorf site might be related to the large percentage of sand in the <2-mm size-fraction sample which was absent in the <63-μm size-fraction sample. In contrast, the <2-mm size-fraction sample collected from the SAR McFaddin site contained very little sand and was similar in particle-size composition to the <63-μm size-fraction sample.
Studies on the Presence of Mycotoxins in Biological Samples: An Overview
Escrivá, Laura; Font, Guillermina; Manyes, Lara
2017-01-01
Mycotoxins are fungal secondary metabolites with bioaccumulation levels leading to their carry-over into animal fluids, organs, and tissues. As a consequence, mycotoxin determination in biological samples from humans and animals has been reported worldwide. Since most mycotoxins show toxic effects at low concentrations and considering the extremely low levels present in biological samples, the application of reliable detection methods is required. This review summarizes the information regarding the studies involving mycotoxin determination in biological samples over the last 10 years. Relevant data on extraction methodology, detection techniques, sample size, limits of detection, and quantitation are presented herein. Briefly, liquid-liquid extraction followed by LC-MS/MS determination was the most common technique. The most analyzed mycotoxin was ochratoxin A, followed by zearalenone and deoxynivalenol—including their metabolites, enniatins, fumonisins, aflatoxins, T-2 and HT-2 toxins. Moreover, the studies were classified by their purpose, mainly focused on the development of analytical methodologies, mycotoxin biomonitoring, and exposure assessment. The study of tissue distribution, bioaccumulation, carry-over, persistence and transference of mycotoxins, as well as, toxicokinetics and ADME (absorption, distribution, metabolism and excretion) were other proposed goals for biological sample analysis. Finally, an overview of risk assessment was discussed. PMID:28820481
Locci, Antonio Mario; Cincotti, Alberto; Todde, Sara; Orrù, Roberto; Cao, Giacomo
2010-01-01
A novel methodology is proposed for investigating the effect of the pulsed electric current during the spark plasma sintering (SPS) of electrically conductive powders without potential misinterpretation of experimental results. First, ensemble configurations (geometry, size and material of the powder sample, die, plunger and spacers) are identified where the electric current is forced to flow only through either the sample or the die, so that the sample is heated either through the Joule effect or by thermal conduction, respectively. These ensemble configurations are selected using a recently proposed mathematical model of an SPS apparatus, which, once suitably modified, makes it possible to carry out detailed electrical and thermal analysis. Next, SPS experiments are conducted using the ensemble configurations theoretically identified. Using aluminum powders as a case study, we find that the temporal profiles of sample shrinkage, which indicate densification behavior, as well as the final density of the sample are clearly different when the electric current flows only through the sample or through the die containing it, whereas the temperature cycle and mechanical load are the same in both cases. PMID:27877354
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
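A zero-acceptance plan of the kind the program demonstrates can be reproduced directly: find the smallest n for which the hypergeometric probability of seeing no defectives does not exceed the consumer's risk, and compare it with the binomial (infinite-lot) answer. The sketch below reproduces the worked example from the abstract (lot of 400, 1% nonconforming, 99% confidence); it is not the original Lotus/Quattro spreadsheet.

```python
from math import comb, ceil, log

def hypergeometric_min_n(lot_size, defectives, consumer_risk):
    """Smallest zero-acceptance sample size n such that the probability of
    drawing no defectives from the lot does not exceed the consumer's risk."""
    for n in range(1, lot_size + 1):
        p_accept = (comb(lot_size - defectives, n) / comb(lot_size, n)
                    if n <= lot_size - defectives else 0.0)
        if p_accept <= consumer_risk:
            return n
    return lot_size

def binomial_min_n(fraction_defective, consumer_risk):
    """Binomial (infinite-lot) counterpart: (1 - p)^n <= risk."""
    return ceil(log(consumer_risk) / log(1.0 - fraction_defective))

if __name__ == "__main__":
    lot, p, risk = 400, 0.01, 0.01              # 1% nonconforming, 99% confidence
    defectives = round(lot * p)                 # 4 defective units in the lot
    print("hypergeometric n:", hypergeometric_min_n(lot, defectives, risk))  # 273
    print("binomial n:", min(binomial_min_n(p, risk), lot))  # 400, inspect the whole lot
```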
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
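To see why the median samples reported above are too small, one can compute the approximate 95% confidence interval for a correlation-type reliability coefficient (retest or interjudge) via the Fisher z transformation. Internal-consistency coefficients need different interval methods, so the sketch below applies only to the correlation-type case; the coefficient value of .80 is an arbitrary illustration.

```python
import numpy as np
from scipy.stats import norm

def reliability_ci(r, n, level=0.95):
    """Approximate CI for a correlation-type reliability coefficient
    (test-retest or interjudge r) via the Fisher z transformation."""
    z = np.arctanh(r)
    half = norm.ppf(0.5 + level / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

# sample sizes echoing the medians reported above (interjudge 36, retest 64,
# overall median 90, internal consistency 182, overall mean 260)
for n in (36, 64, 90, 182, 260):
    lo, hi = reliability_ci(0.80, n)
    print(f"n = {n:3d}: 95% CI for an observed r of .80 is ({lo:.2f}, {hi:.2f})")
```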
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Social contagions on correlated multiplex networks
NASA Astrophysics Data System (ADS)
Wang, Wei; Cai, Meng; Zheng, Muhua
2018-06-01
The existence of interlayer degree correlations has been revealed by abundant multiplex network analyses. However, how they affect the dynamics of social contagions remains largely unknown. In this paper, we propose a non-Markovian social contagion model in multiplex networks with interlayer degree correlations to describe behavior spreading, and develop an edge-based compartmental (EBC) theory to describe the model. We find that multiplex networks promote the final behavior adoption size. Remarkably, the growth pattern of the final behavior adoption size versus the behavioral information transmission probability changes from discontinuous to continuous once the behavior adoption threshold in one layer is decreased. We finally show that the interlayer degree correlations affect the final behavior adoption size but have no effect on the growth pattern, which coincides with the prediction of the suggested theory.
Song, Gyuho; Kong, Tai; Dusoe, Keith J.; ...
2018-01-24
Mechanical properties of materials are strongly dependent on their atomic arrangement as well as on the sample dimension, particularly at the micrometer length scale. Here, we investigated the small-scale mechanical properties of single-crystalline YCd6, which is a rational approximant of the icosahedral Y-Cd quasicrystal. In situ microcompression tests revealed that shear localization always occurs on {101} planes, but the shear direction is not constrained to any particular crystallographic direction. Furthermore, the yield strengths show a size dependence with a power-law exponent of 0.4. Shear localization on {101} planes and the size-dependent yield strength are explained in terms of the large interplanar spacing between {101} planes and the energetics of the shear localization process, respectively. The mechanical behavior of the icosahedral Y-Cd quasicrystal is also compared to understand the influence of translational symmetry on the shear localization process in both YCd6 and Y-Cd quasicrystal micropillars. Finally, the results of this study provide important insight into the fundamental understanding of shear localization in novel complex intermetallic compounds.
Miller, B.; Jimenez, M.; Bridle, H.
2016-01-01
Inertial focusing is a microfluidic separation and concentration technology that has expanded rapidly in the last few years. Throughput is high compared to other microfluidic approaches, although sample volumes have typically remained in the millilitre range. Here we present a strategy for achieving rapid high-volume processing with stacked and cascaded inertial focusing systems, allowing for separation and concentration of particles over a large size range, demonstrated here from 30 μm to 300 μm. The system is based on curved channels in a novel toroidal configuration, and a stack of 20 devices has been shown to operate at 1 L/min. Recirculation allows for efficient removal of large particles, whereas a cascading strategy enables sequential removal of particles down to a final stage where the target particle size can be concentrated. The demonstration of curved stacked channels operating in a cascaded manner allows for high-throughput applications, potentially replacing filtration in applications such as environmental monitoring, industrial cleaning processes, biomedical and bioprocessing applications, and many more. PMID:27808244
Statistical Analyses of Femur Parameters for Designing Anatomical Plates.
Wang, Lin; He, Kunjin; Chen, Zhengming
2016-01-01
Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that new femur was then selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed. Meanwhile, 20 new femurs were assigned to their proper classes, and the suitable condylar buttress plates were then determined and selected for them.
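The three-step pipeline above (dimension reduction, clustering into three classes, discriminant assignment of a new case) can be sketched on synthetic data. KMeans and linear discriminant analysis are used here as generic stand-ins for the Q-type cluster analysis and Bayes discriminant rule named in the abstract, and the simulated parameter means and spreads are invented purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# synthetic stand-in for 100 femurs x 8 anatomical parameters (arbitrary units)
loc = [430, 32, 27, 45, 130, 75, 60, 10]
scale = [25, 3, 3, 4, 4, 5, 5, 2]
femurs = rng.normal(loc=loc, scale=scale, size=(100, 8))

# 1) factor analysis to compress the correlated parameters
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(femurs)

# 2) cluster the factor scores into three size classes
classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
for c in range(3):
    print(f"class {c}: mean parameters =",
          np.round(femurs[classes == c].mean(axis=0), 1))

# 3) discriminant rule (LDA as a Gaussian Bayes-type classifier) for new femurs
lda = LinearDiscriminantAnalysis().fit(femurs, classes)
new_femur = rng.normal(loc=loc, scale=scale, size=(1, 8))
print("assign new femur to class:", lda.predict(new_femur)[0])
```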
Alvarez-Nemegyei, José; Buenfil-Rello, Fátima Annai; Pacheco-Pantoja, Elda Leonor
2016-01-01
Reports regarding the association between body composition and inflammatory activity in rheumatoid arthritis (RA) have consistently yielded contradictory results. The aim was to perform a systematic review on the association between overweight/obesity and inflammatory activity in RA. FAST approach: article search (Medline, EBSCO, Cochrane Library), followed by abstract retrieval, full-text review, and blinded assessment of methodological quality for final inclusion. Because of marked heterogeneity in statistical approach and RA activity assessment method, a meta-analysis could not be done; results are presented as a qualitative synthesis. One hundred and nineteen reports were found, 16 of which qualified for full-text review. Eleven studies (8,147 patients; n range: 37-5,161) passed the methodological quality filter and were finally included. Interobserver agreement for the methodological quality score (ICC: 0.93; 95% CI: 0.82-0.98; P<.001) and for the inclusion/rejection decision (κ = 1.00, P>.001) was excellent. In all reports body composition was assessed by BMI; however, a marked heterogeneity was found in the method used for RA activity assessment. A significant association between BMI and RA activity was found in 6 reports having a larger mean sample size: 1,274 (range: 140-5,161). On the other hand, this association was not found in 5 studies having a lower mean sample size: 100 (range: 7-150). The modulation of RA clinical status by body fat mass is suggested because a significant association was found between BMI and inflammatory activity in those reports with a trend toward higher statistical power. The relationship between body composition and clinical activity in RA needs to be addressed in further studies with higher methodological quality. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
NASA Astrophysics Data System (ADS)
Ferrage, Eric; Hubert, Fabien; Tertre, Emmanuel; Delville, Alfred; Michot, Laurent J.; Levitz, Pierre
2015-06-01
Swelling clay minerals play a key role in the control of water and pollutant migration in natural media such as soils. Moreover, swelling clay particles' orientational properties in porous media have significant implications for the directional dependence of fluid transfer. Herein we investigate the ability to mimic the organization of particles in natural swelling-clay porous media using a three-dimensional sequential particle deposition procedure [D. Coelho, J.-F. Thovert, and P. M. Adler, Phys. Rev. E 55, 1959 (1997), 10.1103/PhysRevE.55.1959]. The algorithm considered is first used to simulate disk packings. Porosities of disk packings fall onto a single master curve when plotted against the orientational scalar order parameter value. This relation is used to validate the algorithm used in comparison with existing ones. The ellipticity degree of the particles is shown to have a negligible effect on the packing porosity for ratios ℓa/ℓb less than 1.5, whereas a significant increase in porosity is obtained for higher values. The effect of the distribution of the geometrical parameters (size, aspect ratio, and ellipticity degree) of particles on the final packing properties is also investigated. Finally, the algorithm is used to simulate particle packings for three size fractions of natural swelling-clay mineral powders. Calculated data regarding the distribution of the geometrical parameters and orientation of particles in porous media are successfully compared with experimental data obtained for the same samples. The results indicate that the obtained virtual porous media can be considered representative of natural samples and can be used to extract properties difficult to obtain experimentally, such as the anisotropic features of pore and solid phases in a system.
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans. The...
Microsystem strategies for sample preparation in biological detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.; Galambos, Paul C.; Bennett, Dawn Jonita
2005-03-01
The objective of this LDRD was to develop microdevice strategies for dealing with samples to be examined in biological detection systems. This includes three sub-components: namely, microdevice fabrication, sample delivery to the microdevice, and sample processing within the microdevice. The first component of this work focused on utilizing Sandia's surface micromachining technology to fabricate small volume (nanoliter) fluidic systems for processing small quantities of biological samples. The next component was to develop interfaces for the surface-micromachined silicon devices. We partnered with Micronics, a commercial company, to produce fluidic manifolds for sample delivery to our silicon devices. Pressure testing was completed to examine the strength of the bond between the pressure-sensitive adhesive layer and the silicon chip. We are also pursuing several other methods, both in house and external, to develop polymer-based fluidic manifolds for packaging silicon-based microfluidic devices. The second component, sample processing, is divided into two sub-tasks: cell collection and cell lysis. Cell collection was achieved using dielectrophoresis, which employs AC fields to collect cells at energized microelectrodes, while rejecting non-cellular particles. Both live and dead Staph. aureus bacteria have been collected using RF frequency dielectrophoresis. Bacteria have been separated from polystyrene microspheres using frequency-shifting dielectrophoresis. Computational modeling was performed to optimize device separation performance, and to predict particle response to the dielectrophoretic traps. Cell lysis is continuing to be pursued using microactuators to mechanically disrupt cell membranes. Novel thermal actuators, which can generate larger forces than previously tested electrostatic actuators, have been incorporated with and tested with cell lysis devices. Significant cell membrane distortion has been observed, but more experiments need to be conducted to determine the effects of the observed distortion on membrane integrity and cell viability. Finally, we are using a commercial PCR DNA amplification system to determine the limits of detectable sample size, and to examine the amplification of DNA bound to microspheres. Our objective is to use microspheres as capture-and-carry chaperones for small molecules such as DNA and proteins, enabling the capture and concentration of the small molecules using dielectrophoresis. Current tests demonstrated amplification of DNA bound to micron-sized polystyrene microspheres using 20-50 microliter volume size reactions.
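The frequency-shifting separation mentioned above relies on the real part of the Clausius-Mossotti factor differing in sign between cell-like particles and polystyrene beads at a given field frequency. The sketch below evaluates that factor for a homogeneous lossy sphere; the permittivities and conductivities are illustrative guesses rather than measured values from this project, and real bacteria are usually modeled with additional shell layers.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(freq_hz, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere;
    positive values mean positive DEP (attraction to high-field electrode edges)."""
    w = 2 * np.pi * freq_hz
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity of the medium
    return ((ep - em) / (ep + 2 * em)).real

# illustrative parameters only: a conductive cell-like particle vs a polystyrene bead
for f in np.logspace(4, 8, 5):           # 10 kHz .. 100 MHz
    cell = cm_factor(f, eps_p=60, sig_p=0.25, eps_m=78, sig_m=0.01)
    bead = cm_factor(f, eps_p=2.55, sig_p=0.002, eps_m=78, sig_m=0.01)
    print(f"{f:12.0f} Hz  cell Re[K] = {cell:+.2f}   bead Re[K] = {bead:+.2f}")
```

At low frequencies conductivities dominate, so the conductive particle experiences positive DEP while the bead experiences negative DEP, which is the contrast a frequency-shifting scheme exploits.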
Onjong, Hillary Adawo; Wangoh, John; Njage, Patrick Murigu Kamau
2014-08-01
Fish processing plants still face microbial food safety-related product rejections and the associated economic losses, although they implement legislation, with well-established quality assurance guidelines and standards. We assessed the microbial performance of core control and assurance activities of fish exporting processors to offer suggestions for improvement using a case study. A microbiological assessment scheme was used to systematically analyze microbial counts in six selected critical sampling locations (CSLs). Nine small-, medium- and large-sized companies implementing current food safety management systems (FSMS) were studied. Samples were collected three times on each occasion (n = 324). Microbial indicators representing food safety, plant and personnel hygiene, and overall microbiological performance were analyzed. Microbiological distribution and safety profile levels for the CSLs were calculated. Performance of core control and assurance activities of the FSMS was also diagnosed using an FSMS diagnostic instrument. Final fish products from 67% of the companies were within the legally accepted microbiological limits. Salmonella was absent in all CSLs. Hands or gloves of workers from the majority of companies were highly contaminated with Staphylococcus aureus at levels above the recommended limits. Large-sized companies performed better in Enterobacteriaceae, Escherichia coli, and S. aureus than medium- and small-sized ones in a majority of the CSLs, including receipt of raw fish material, heading and gutting, and the condition of the fish processing tables and facilities before cleaning and sanitation. Fish products of 33% (3 of 9) of the companies and handling surfaces of 22% (2 of 9) of the companies showed high variability in Enterobacteriaceae counts. High variability in total viable counts and Enterobacteriaceae was noted on fish products and handling surfaces. Specific recommendations were made in core control and assurance activities associated with sampling locations showing poor performance.
Silva de Lima, Ana Lígia; Evers, Luc J W; Hahn, Tim; Bataille, Lauren; Hamilton, Jamie L; Little, Max A; Okuma, Yasuyuki; Bloem, Bastiaan R; Faber, Marjan J
2017-08-01
Despite the large number of studies that have investigated the use of wearable sensors to detect gait disturbances such as freezing of gait (FOG) and falls, there is little consensus regarding appropriate methodologies for how to optimally apply such devices. Here, an overview of the use of wearable systems to assess FOG and falls in Parkinson's disease (PD) and their validation performance is presented. A systematic search in the PubMed and Web of Science databases was performed using a group of concept key words. The final search was performed in January 2017, and articles were selected based upon a set of eligibility criteria. In total, 27 articles were selected. Of those, 23 related to FOG and 4 to falls. FOG studies were performed in either laboratory or home settings, with sample sizes ranging from 1 to 48 patients with PD presenting Hoehn and Yahr stages 2 to 4. The shin was the most common sensor location and the accelerometer was the most frequently used sensor type. Validity measures ranged from 73-100% for sensitivity and 67-100% for specificity. Falls and fall-risk studies were all home-based, including sample sizes of 1 up to 107 patients with PD, mostly using one sensor containing accelerometers, worn at various body locations. Despite the promising validation initiatives reported in these studies, they were all performed in relatively small sample sizes, and there was significant variability in the outcomes measured and the results reported. Given these limitations, the validation of sensor-derived assessments of PD features would benefit from more focused research efforts, increased collaboration among researchers, aligning data collection protocols, and sharing data sets.
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
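The modeling choice contrasted above, regressing case/control status on expression rather than expression on status, can be sketched for a single gene with simulated negative-binomial read counts. The group means, dispersion, and sample size below are arbitrary illustrative values, and the Firth and Bayes variants discussed in the abstract require additional packages and are not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_per_group = 20                         # small-sample setting discussed above
status = np.repeat([0, 1], n_per_group)

# simulate one gene: negative-binomial reads, higher mean in cases
mu = np.where(status == 1, 200, 120)     # assumed group means, purely illustrative
dispersion = 0.4                         # NB dispersion (var = mu + dispersion * mu^2)
size = 1.0 / dispersion
reads = rng.negative_binomial(size, size / (size + mu))

# case/control status modelled as a function of (log-transformed) expression
X = sm.add_constant(np.log1p(reads))
fit = sm.Logit(status, X).fit(disp=0)
print("coefficients:", np.round(fit.params, 3))
print("Wald p-values:", np.round(fit.pvalues, 4))
```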
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, examined the variation in reporting across study design, study characteristics, and journal impact factor, and also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and the recalculated sample size was 0.0% (IQR: -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
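A reader can re-derive the kind of calculation this survey checks from the three quantities most reports provide (significance level, power, and the minimum clinically important difference together with its SD). The sketch below uses the usual normal-approximation formula for a two-sided, two-sample comparison of means; the example numbers are arbitrary.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sided, two-sample
    comparison of means: n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

# illustrative check: MCID of 5 units, SD of 12, 5% two-sided alpha, 80% power
print(n_per_group(delta=5, sd=12))   # about 91 per arm, before dropout inflation
```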
Ngamjarus, Chetta; Chongsuvivatwong, Virasakdi; McNeil, Edward; Holling, Heinz
2017-01-01
Sample size determination is usually taught based on theory and is difficult to understand. Using a smartphone application to teach sample size calculation ought to be more attractive to students than using lectures only. This study compared levels of understanding of sample size calculations for research studies between participants attending a lecture only versus a lecture combined with using a smartphone application to calculate sample sizes, explored factors affecting the level of post-test score after training in sample size calculation, and investigated participants' attitude toward a sample size application. A cluster-randomized controlled trial involving a number of health institutes in Thailand was carried out from October 2014 to March 2015. A total of 673 professional participants were enrolled and randomly allocated to one of two groups, namely, 341 participants in 10 workshops to the control group and 332 participants in 9 workshops to the intervention group. Lectures on sample size calculation were given to the control group, while lectures using a smartphone application were provided to the intervention group. Participants in the intervention group had better learning of sample size calculation (2.7 points out of a maximum 10 points, 95% CI: 2.4 - 2.9) than the participants in the control group (1.6 points, 95% CI: 1.4 - 1.8). Participants doing research projects had a higher post-test score than those who did not have a plan to conduct research projects (0.9 point, 95% CI: 0.5 - 1.4). The majority of the participants had a positive attitude towards the use of a smartphone application for learning sample size calculation.
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature-stringent statistical standards, use of large samples, and replication of findings-are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
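One of the corrections the authors recommend, adjusting for the effective rather than nominal number of correlated phenotypes, can be illustrated with a Nyholt-style spectral-decomposition estimate. The sketch below is a generic illustration on simulated correlated phenotypes; the single-factor structure and the use of the sample eigenvalue variance are assumptions, and permutation-based corrections (also mentioned above) are an alternative not shown here.

```python
import numpy as np

def effective_number_of_tests(data):
    """Spectral-decomposition (Nyholt-style) estimate of the effective number
    of independent tests from the eigenvalues of the correlation matrix of
    the phenotypes (or genotypes) being tested."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    m = corr.shape[0]
    return 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
# ten phenotypes sharing a common latent factor, hence mutually correlated
phenotypes = 0.7 * latent + 0.7 * rng.normal(size=(500, 10))
m_eff = effective_number_of_tests(phenotypes)
print(f"nominal tests: 10, effective tests: {m_eff:.1f}, "
      f"adjusted alpha: {0.05 / m_eff:.4f}")
```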
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, "What sample size is needed for my study?" While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
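The Monte Carlo idea behind this kind of approach can be sketched in a few lines: simulate data from the assumed regression model at candidate sample sizes and record how often the coefficient of interest is detected. This is a generic Python illustration rather than the article's R implementation, and the slope, residual variance and candidate sample sizes are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_for_n(n, slope=0.3, sims=2000, alpha=0.05):
    """Monte Carlo power for detecting a standardized slope in simple regression."""
    hits = 0
    for _ in range(sims):
        x = rng.normal(size=n)
        y = slope * x + rng.normal(size=n)      # residual SD = 1
        res = stats.linregress(x, y)
        hits += res.pvalue < alpha
    return hits / sims

for n in (50, 80, 120, 170):
    print(n, round(power_for_n(n), 3))   # increase n until power reaches the target
```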
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations such as sample size calculation due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic and nomograms to eliminate the inconvenience of using a mathematical formula were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Baasch, B.; Müller, H.; von Dobeneck, T.
2018-07-01
In this work, we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine-learning techniques. Non-negative matrix factorization is used to determine grain-size end-members from sediment surface samples. Four end-members were found, which represent the variety of sediments in the study area well. A radial basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation revealing high prediction accuracy with coefficients of determination (R2) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate grain-size distribution of marine sediments.
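A minimal sketch of the unsupervised step (end-member extraction by non-negative matrix factorization) is given below; it uses synthetic grain-size distributions and scikit-learn's NMF, and it does not reproduce the authors' radial basis function network or the spatial variation constraint.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Synthetic stand-ins: each row is a grain-size distribution (relative
# frequency over 64 size classes) built from hidden mixtures of sources.
n_samples, n_bins = 120, 64
bins = np.linspace(0, 1, n_bins)
sources = np.stack([np.exp(-0.5 * ((bins - c) / 0.08) ** 2)
                    for c in (0.2, 0.45, 0.7, 0.9)])        # 4 hidden end-members
mix = rng.dirichlet(np.ones(4), size=n_samples)
X = mix @ sources + rng.uniform(0, 0.01, size=(n_samples, n_bins))

# Non-negative matrix factorization: X ~ W @ H, where rows of H are the
# end-member distributions and rows of W are their abundances per sample.
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)
H = model.components_

abundances = W / W.sum(axis=1, keepdims=True)   # normalise to fractions
print(H.shape, abundances[:3].round(2))
```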
NASA Astrophysics Data System (ADS)
Baasch, B.; Müller, H.; von Dobeneck, T.
2018-04-01
In this work we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The workflow presented here combines unsupervised and supervised machine learning techniques. Nonnegative matrix factorisation is used to determine grain-size end-members from sediment surface samples. Four end-members were found which represent the variety of sediments in the study area well. A radial-basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation revealing high prediction accuracy with coefficients of determination (R2) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate grain-size distribution of marine sediments.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention's benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is to claim benefit when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
NASA Astrophysics Data System (ADS)
Muñoz-Sánchez, B.; Nieto-Maestre, J.; Iparraguirre-Torres, I.; Sánchez-García, J. A.; Julia, J. E.; García-Romero, A.
2016-05-01
The use of nanofluids (NFs) based on Solar Salt (SS) and nanoparticles (NPs), either as Thermal Energy Storage (TES) material or as Heat Transfer Fluid (HTF), has been attracting great interest in recent years. Many authors [1,3] have reported important improvements in the thermophysical properties (specific heat capacity cp, thermal conductivity k) of NFs based on SS and ceramic NPs. These improvements would lead to important savings and better performance of TES facilities in new Concentrated Solar Power (CSP) plants due to lower quantities of material required and smaller storage tanks. To achieve these advantageous features in the final NFs, it is essential to avoid NP agglomeration during their preparation. Different synthesis procedures have been reported: mixing of solid NPs within a SS solution by means of ultrasound [1-3], and direct mixing of solid NPs and molten salt [4]. In this work, NFs based on SS and 1% by wt. of silica NPs were synthesized from a SS-water solution and a commercial water-silica NF called Ludox HS 30% (Sigma-Aldrich). The influence of the mixing water volume (MW) on the cp of NFs was evaluated. With this aim, the cp of these samples was measured by Differential Scanning Calorimetry (DSC) both in the solid and the liquid state. In addition, the distribution of sizes was measured during the whole preparation process by Dynamic Light Scattering (DLS). Further information about sizes and uniformity of the final NFs was obtained from Scanning Electron Microscopy (SEM) images. X-ray Diffraction (XRD) patterns of the SS and the final NF were also recorded.
NASA GRC and MSFC Space-Plasma Arc Testing Procedures
NASA Technical Reports Server (NTRS)
Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd
2005-01-01
Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has established its own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing-testing standardization. To be discussed are: 1. Neutral pressures, neutral gases, and vacuum chamber sizes. 2. Electron and ion densities, plasma uniformity, sample sizes, and Debye lengths. 3. Biasing samples versus self-generated voltages. Floating samples versus grounded. 4. Power supplies and current limits. Isolation of samples from power supplies during arcs. 5. Arc circuits. Capacitance during biased arc-threshold tests. Capacitance during sustained arcing and damage tests. Arc detection. Preventing sustained discharges during testing. 6. Real array or structure samples versus idealized samples. 7. Validity of LEO tests for GEO samples. 8. Extracting arc threshold information from arc rate versus voltage tests. 9. Snapover and current collection at positive sample bias. Glows at positive bias. Kapton (R) pyrolysis. 10. Trigger arc thresholds. Sustained arc thresholds. Paschen discharge during sustained arcing. 11. Testing for Paschen discharge threshold. Testing for dielectric breakdown thresholds. Testing for tether arcing. 12. Testing in very dense plasmas (i.e., thruster plumes). 13. Arc mitigation strategies. Charging mitigation strategies. Models. 14. Analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.
NASA GRC and MSFC Space-Plasma Arc Testing Procedures
NASA Technical Reports Server (NTRS)
Ferguson, Dale C.; Vayner, Boris V.; Galofaro, Joel T.; Hillard, G. Barry; Vaughn, Jason; Schneider, Todd
2005-01-01
Tests of arcing and current collection in simulated space plasma conditions have been performed at the NASA Glenn Research Center (GRC) in Cleveland, Ohio, for over 30 years and at the Marshall Space Flight Center (MSFC) for almost as long. During this period, proper test conditions for accurate and meaningful space simulation have been worked out, comparisons with actual space performance in spaceflight tests and with real operational satellites have been made, and NASA has established its own internal standards for test protocols. It is the purpose of this paper to communicate the test conditions, test procedures, and types of analysis used at NASA GRC and MSFC to the space environmental testing community at large, to help with international space-plasma arcing testing standardization. To be discussed are: 1. Neutral pressures, neutral gases, and vacuum chamber sizes. 2. Electron and ion densities, plasma uniformity, sample sizes, and Debye lengths. 3. Biasing samples versus self-generated voltages. Floating samples versus grounded. 4. Power supplies and current limits. Isolation of samples from power supplies during arcs. Arc circuits. Capacitance during biased arc-threshold tests. Capacitance during sustained arcing and damage tests. Arc detection. Preventing sustained discharges during testing. 5. Real array or structure samples versus idealized samples. 6. Validity of LEO tests for GEO samples. 7. Extracting arc threshold information from arc rate versus voltage tests. 8. Snapover and current collection at positive sample bias. Glows at positive bias. Kapton pyrolysis. 9. Trigger arc thresholds. Sustained arc thresholds. Paschen discharge during sustained arcing. 10. Testing for Paschen discharge thresholds. Testing for dielectric breakdown thresholds. Testing for tether arcing. 11. Testing in very dense plasmas (i.e., thruster plumes). 12. Arc mitigation strategies. Charging mitigation strategies. Models. 13. Analysis of test results. Finally, the necessity of testing will be emphasized, not to the exclusion of modeling, but as part of a complete strategy for determining when and if arcs will occur, and preventing them from occurring in space.
Hoover, Kelli; Uzunovic, Adnan; Gething, Brad; Dale, Angela; Leung, Karen; Ostiguy, Nancy; Janowiak, John J.
2010-01-01
To reduce the risks associated with global transport of wood infested with pinewood nematode Bursaphelenchus xylophilus, microwave irradiation was tested at 14 temperatures in replicated wood samples to determine the temperature that would kill 99.9968% of nematodes in a sample of ≥ 100,000 organisms, meeting a level of efficacy of Probit 9. Treatment of these heavily infested wood samples (mean of > 1,000 nematodes/g of sapwood) produced 100% mortality at 56 °C and above, held for 1 min. Because this “brute force” approach to Probit 9 treats individual nematodes as the observational unit regardless of the number of wood samples it takes to treat this number of organisms, we also used a modeling approach. The best fit was to a Probit function, which estimated the lethal temperature at 62.2 (95% confidence interval 59.0-70.0) °C. This discrepancy between the observed and predicted temperature to achieve Probit 9 efficacy may have been the result of an inherently limited sample size when predicting the true mean from the total population. The rate of temperature increase in the small wood samples (rise time) did not affect final nematode mortality at 56 °C. In addition, microwave treatment of industrial size, infested wood blocks killed 100% of > 200,000 nematodes at ≥ 56 °C held for 1 min in replicated wood samples. The third-stage juvenile (J3) of the nematode, which is resistant to cold temperatures and desiccation, was abundant in our wood samples and did not show any resistance to microwave treatment. Regression analysis of internal wood temperatures as a function of surface temperature produced a regression equation that could be used with a relatively high degree of accuracy to predict internal wood temperatures, under the conditions of this study. These results provide strong evidence of the ability of microwave treatment to successfully eradicate B. xylophilus in infested wood at or above 56 °C held for 1 min. PMID:22736846
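The modeling approach described (fitting a probit dose-response curve and reading off the temperature for 99.9968% mortality) can be sketched as a direct maximum-likelihood fit; the mortality counts below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical dose-response data: temperature (°C), nematodes treated, killed
temp = np.array([46, 48, 50, 52, 54, 56, 58, 60])
n    = np.array([500] * 8)
dead = np.array([60, 140, 260, 380, 460, 495, 500, 500])

def neg_loglik(params):
    """Binomial negative log-likelihood with a probit link on temperature."""
    a, b = params
    p = norm.cdf(a + b * temp)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(dead * np.log(p) + (n - dead) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[-20.0, 0.4], method="Nelder-Mead")
a, b = fit.x

# Temperature predicted to kill 99.9968% of nematodes (Probit 9 efficacy)
t_probit9 = (norm.ppf(0.999968) - a) / b
print(f"estimated Probit-9 temperature: {t_probit9:.1f} °C")
```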
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
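As a reminder of what the test statistic itself looks like, here is a small sketch of Yuen's two-sample test on trimmed means with 20% trimming; the simulated data and the trimming proportion are illustrative choices (recent SciPy versions expose a similar calculation through scipy.stats.ttest_ind with the trim argument).

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample test on trimmed means (robust to unequal variances)."""
    def parts(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = len(a)
        g = int(np.floor(trim * n))
        h = n - 2 * g                            # size after trimming
        tmean = a[g:n - g].mean()
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]    # winsorize the tails
        d = w.var(ddof=1) * (n - 1) / (h * (h - 1))
        return tmean, d, h
    m1, d1, h1 = parts(x)
    m2, d2, h2 = parts(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    c = d1 / (d1 + d2)
    df = 1.0 / (c**2 / (h1 - 1) + (1 - c)**2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 25)
y = rng.normal(0.8, 3.0, 40)      # unequal variances, unequal sample sizes
print(yuen_test(x, y))
```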
The Bose-Einstein correlations in CDFII experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovás, Lubomír
We present the results of a study of p$$\bar{p}$$ collisions at √s = 1.96 TeV collected by the CDF-II experiment at the Tevatron collider. The Bose-Einstein correlations of the π±π± two-boson system have been studied in minimum-bias high-multiplicity events. The analysis was carried out on a sample of 173,761 events. The two-pion correlations have been retrieved, and the final results were corrected for Coulomb interactions. Two different reference samples were compared and discussed. A significant two-pion correlation enhancement near the origin is observed. This enhancement has been used to evaluate the radius of the two-pion emitter source. We used the TOF detector to distinguish between π and K mesons. The C2(Q) function parameters have also been retrieved for the sample containing only tagged π mesons. A comparison of four different parametrizations of the C2(Q) function, based on two different theoretical approaches, is given.
A hard-to-read font reduces the framing effect in a large sample.
Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik
2018-04-01
How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.
Kharazmi, Alireza; Faraji, Nastaran; Mat Hussin, Roslina; Saion, Elias; Yunus, W Mahmood Mat; Behzad, Kasra
2015-01-01
This work describes a fast, clean and low-cost approach to synthesize ZnS-PVA nanofluids consisting of ZnS nanoparticles homogeneously distributed in a PVA solution. The ZnS nanoparticles were formed by the electrostatic force between zinc and sulfur ions induced by gamma irradiation at a dose range from 10 to 50 kGy. Several experimental characterizations were conducted to investigate the physical and chemical properties of the samples. Fourier transform infrared spectroscopy (FTIR) was used to determine the chemical structure and bonding conditions of the final products, transmission electron microscopy (TEM) for determining the shape morphology and average particle size, powder X-ray diffraction (XRD) for confirming the formation and crystalline structure of ZnS nanoparticles, UV-visible spectroscopy for measuring the electronic absorption characteristics, transient hot wire (THW) and photoacoustic measurements for measuring the thermal conductivity and thermal effusivity of the samples, from which, for the first time, the values of specific heat and thermal diffusivity of the samples were then calculated.
Geochemistry of CI chondrites: Major and trace elements, and Cu and Zn Isotopes
NASA Astrophysics Data System (ADS)
Barrat, J. A.; Zanda, B.; Moynier, F.; Bollinger, C.; Liorzou, C.; Bayon, G.
2012-04-01
In order to check the heterogeneity of the CI chondrites and determine the average composition of this group of meteorites, we analyzed a series of six large chips (weighing between 0.6 and 1.2 g) of Orgueil prepared from five different stones. In addition, one sample from each of Ivuna and Alais was analyzed. Although the sizes of the chips used in this study were “large”, our results show evidence for minor chemical heterogeneity in Orgueil, particularly for alkali elements and U. After removal of one outlier sample, the spread of the results is considerably reduced. For most of the 46 elements analyzed in this study, the average composition calculated for Orgueil is in very good agreement with previous CI estimates. This average, obtained with a “large” mass of samples, is analytically homogeneous and is suitable for normalization purposes. Finally, the Cu and Zn isotopic ratios are homogeneously distributed within the CI parent body with a spread of less than 100 ppm per atomic mass unit (amu).
Elasticity of microscale volumes of viscoelastic soft matter by cavitation rheometry
NASA Astrophysics Data System (ADS)
Pavlovsky, Leonid; Ganesan, Mahesh; Younger, John G.; Solomon, Michael J.
2014-09-01
Measurement of the elastic modulus of soft, viscoelastic liquids with cavitation rheometry is demonstrated for specimens as small as 1 μl by application of elasticity theory and experiments on semi-dilute polymer solutions. Cavitation rheometry is the extraction of the elastic modulus of a material, E, by measuring the pressure necessary to create a cavity within it [J. A. Zimberlin, N. Sanabria-DeLong, G. N. Tew, and A. J. Crosby, Soft Matter 3, 763-767 (2007)]. This paper extends cavitation rheometry in three ways. First, we show that viscoelastic samples can be approximated with the neo-Hookean model provided that the time scale of the cavity formation is measured. Second, we extend the cavitation rheometry method to accommodate cases in which the sample size is no longer large relative to the cavity dimension. Finally, we implement cavitation rheometry to show that the theory accurately measures the elastic modulus of viscoelastic samples with volumes ranging from 4 ml to as low as 1 μl.
Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.
2013-01-01
Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
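The simulation logic (two correlated, imperfectly reliable measures dichotomized at a cut point and then checked for agreement) can be sketched as follows; the correlation values, cut point and sample size are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_kappa(n=200, rho=0.7, cut=-0.84, sims=1000):
    """Mean kappa between two correlated measures dichotomized at a cut point.

    cut = -0.84 flags roughly the lowest 20% on each measure as 'inadequate'.
    """
    cov = [[1.0, rho], [rho, 1.0]]
    kappas = []
    for _ in range(sims):
        scores = rng.multivariate_normal([0, 0], cov, size=n)
        a, b = (scores[:, 0] < cut), (scores[:, 1] < cut)
        po = np.mean(a == b)                                   # observed agreement
        pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())   # chance agreement
        kappas.append((po - pe) / (1 - pe))
    return np.mean(kappas)

for rho in (0.6, 0.7, 0.8, 0.9):
    print(rho, round(simulated_kappa(rho=rho), 2))
```

Even with fairly high correlations between the two measures, kappa for the dichotomized decisions stays modest, which is the qualitative point of the simulation.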
Samson, Pamela; Keogan, Kathleen; Crabtree, Traves; Colditz, Graham; Broderick, Stephen; Puri, Varun; Meyers, Bryan
2017-01-01
To identify the variability of short- and long-term survival outcomes among closed Phase III randomized controlled trials with small sample sizes comparing SBRT (stereotactic body radiation therapy) and surgical resection in operable clinical Stage I non-small cell lung cancer (NSCLC) patients. Clinical Stage I NSCLC patients who underwent surgery at our institution meeting the inclusion/exclusion criteria for STARS (Randomized Study to Compare CyberKnife to Surgical Resection in Stage I Non-small Cell Lung Cancer), ROSEL (Trial of Either Surgery or Stereotactic Radiotherapy for Early Stage (IA) Lung Cancer), or both were identified. Bootstrapping analysis provided 10,000 iterations to depict 30-day mortality and three-year overall survival (OS) in cohorts of 16 patients (to simulate the STARS surgical arm), 27 patients (to simulate the pooled surgical arms of STARS and ROSEL), and 515 (to simulate the goal accrual for the surgical arm of STARS). From 2000 to 2012, 749/873 (86%) of clinical Stage I NSCLC patients who underwent resection were eligible for STARS only, ROSEL only, or both studies. When patients eligible for STARS only were repeatedly sampled with a cohort size of 16, the 3-year OS rates ranged from 27 to 100%, and 30-day mortality varied from 0 to 25%. When patients eligible for ROSEL or for both STARS and ROSEL underwent bootstrapping with n=27, the 3-year OS ranged from 46 to 100%, while 30-day mortality varied from 0 to 15%. Finally, when patients eligible for STARS were repeatedly sampled in groups of 515, 3-year OS narrowed to 70-85%, with 30-day mortality varying from 0 to 4%. Short- and long-term survival outcomes from trials with small sample sizes are extremely variable and unreliable for extrapolation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
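The bootstrapping exercise can be mimicked with a hypothetical source cohort; the cohort size below mirrors the abstract's 749 eligible patients, but the 78% three-year survival rate is a placeholder rather than the institutional value. The widening spread at n = 16 versus n = 515 reproduces the qualitative point.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical source cohort: 1 = alive at 3 years, 0 = dead (true OS ~78%)
cohort = rng.binomial(1, 0.78, size=749)

def bootstrap_os_range(sample_size, reps=10_000):
    """2.5th and 97.5th percentiles of 3-year OS across bootstrap resamples."""
    rates = [rng.choice(cohort, size=sample_size, replace=True).mean()
             for _ in range(reps)]
    return np.percentile(rates, [2.5, 97.5])

for n in (16, 27, 515):
    lo, hi = bootstrap_os_range(n)
    print(f"n={n:3d}: 3-year OS ranges roughly {lo:.0%} to {hi:.0%}")
```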
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. c. 2002 Elsevier Sciences (USA).
Mair, R W; Sen, P N; Hürlimann, M D; Patz, S; Cory, D G; Walsworth, R L
2002-06-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. c. 2002 Elsevier Sciences (USA).
Pfister, Hugo; Morzadec, Claudie; Le Cann, Pierre; Madec, Laurent; Lecureur, Valérie; Chouvet, Martine; Jouneau, Stéphane; Vernhet, Laurent
2017-10-01
Dairy working increases the prevalence of lower airway respiratory diseases, especially COPD and asthma. Epidemiological studies have reported that chronic inhalation of organic dusts released during specific daily tasks could represent a major risk factor for development of these pathologies in dairy workers. Knowledge of the size, nature and biological activity of such organic dusts remains, however, limited. To compare size distribution, microbial composition and cellular effects of dusts liberated by the spreading of straw bedding in five French dairy farms located in Brittany. Mechanized distribution of straw bedding generated a cloud of inhalable dusts in the five dairy farms' barns. Thoracic particles 3-7.5 µm in size constituted 58.9-68.3% of these dusts. Analyses of thoracic dusts by next generation sequencing showed that the microbial dust composition differed between the five French farms, although Actinobacteria, Bacteroidetes, Firmicutes and Proteobacteria represent more than 97.5% of the bacterial phyla detected in each sample. Several bacterial genera comprising human pathogenic species, such as Pseudomonas, Staphylococcus, Thermoactinomyces or Saccharopolyspora, were identified. Cladosporium and Alternaria fungal genera, which are potent environmental determinants of respiratory symptoms, were detected in dusts collected in the five farms and their levels reached 15.5-51.1% and 9-24.7% of assignable fungal sequences in each sample, respectively. Finally, all dust samples significantly and strongly increased the expression of the pro-inflammatory TNF-α, IL-1β, IL-6 and IL-8 cytokines at both mRNA and protein levels in human monocyte-derived macrophages. Their effects were dose-dependent and detectable from 1 µg/ml. The intensity of the macrophage responses however differed according to the samples. Our results strengthen the hypothesis that organic dusts released during the distribution of straw bedding are mainly constituted of thoracic particles which are small enough to deposit on the lower bronchial epithelium of dairy farmers and induce inflammation. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jomekian, A.; Faculty of Chemical Engineering, Iran University of Science and Technology; Behbahani, R.M., E-mail: behbahani@put.ac.ir
Ultra-porous ZIF-8 particles were synthesized using PEO/PA6-based poly(ether-block-amide) (Pebax 1657) as a structure-directing agent. Structural properties of ZIF-8 samples prepared under different synthesis parameters were investigated by laser particle size analysis, XRD, N₂ adsorption analysis, and BJH and BET tests. The overall results showed that: (1) the mean pore size of all ZIF-8 samples increased remarkably (from 0.34 nm to 1.1–2.5 nm) compared to conventionally synthesized ZIF-8 samples; (2) an exceptional BET surface area of 1869 m²/g was obtained for a ZIF-8 sample with a mean pore size of 2.5 nm; (3) applying high concentrations of Pebax 1657 to the synthesis solution led to higher surface area, larger pore size and smaller particle size for ZIF-8 samples; (4) both an increase in temperature and a decrease in the molar ratio of MeIM/Zn²⁺ had an increasing effect on ZIF-8 particle size, pore size, pore volume, crystallinity and BET surface area of all investigated samples. - Highlights: • The pore size of ZIF-8 samples synthesized with Pebax 1657 increased remarkably. • A BET surface area of 1869 m²/g was obtained for a ZIF-8 sample synthesized with Pebax. • An increase in temperature had an increasing effect on textural properties of ZIF-8 samples. • A decrease in MeIM/Zn²⁺ had an increasing effect on textural properties of ZIF-8 samples.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
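A minimal sketch of the size-generation step is shown below: subevent radii are drawn from the N(>R) ∝ R^-2 distribution until their summed areas fill the target fault area. The fault dimensions and radius bounds are arbitrary, and the non-overlap placement and kinematic modelling of the full method are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def draw_subevents(target_area, r_min, r_max):
    """Draw subevent radii from N(>R) ~ R^-2 until their areas fill the target fault.

    A composite-source sketch for sizes only, ignoring the non-overlap
    placement constraint of the full model.
    """
    radii = []
    area = 0.0
    while area < target_area:
        u = rng.uniform()
        # inverse-CDF sampling of a power law truncated to [r_min, r_max]
        r = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
        radii.append(r)
        area += np.pi * r**2
    return np.array(radii)

fault_area = 20.0 * 10.0                      # km^2, hypothetical target event
radii = draw_subevents(fault_area, r_min=0.5, r_max=5.0)
print(len(radii), "subevents; largest radii:", np.sort(radii)[-3:].round(2))
```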
Cao, Chunyan; Xie, An; Noh, Hyeon Mi; Jeong, Jung Hyun
2016-08-01
Using a hydrothermal method, Ce(3+)/Tb(3+) non-/single-/co-doped K-Lu-F materials have been synthesized. The X-ray diffraction (XRD) results suggest that the Ce(3+) and/or Tb(3+) doping had great effects on the crystalline phases of the final samples. The field emission scanning electron microscopy (FE-SEM) images indicated that the samples were in hexagonal disk or polyhedron morphologies in addition to some nanoparticles, which indicated that the doping also had great effects on the sizes and the morphologies of the samples. The energy-dispersive spectroscopy (EDS) patterns illustrated the constituents of different samples. Enhanced emissions of Tb(3+) were observed in the Ce(3+)/Tb(3+) co-doped K-Lu-F materials. The energy transfer (ET) efficiency ηT was calculated based on the fluorescence yield. The ET mechanism from Ce(3+) to Tb(3+) was confirmed to be dipole-quadrupole interaction, as inferred from the theoretical analysis and the experimental data. Copyright © 2015 John Wiley & Sons, Ltd.
Wilson, Laura C; Scarpa, Angela
2013-01-01
Although substantial literature discusses sensation seeking as playing a role in the relationship between baseline heart rate and aggression, few published studies have tested the relationships among these variables. Furthermore, most prior studies have focused on risk factors of aggression in men and have largely ignored this issue in women. Two samples (n = 104; n = 99) of young adult women completed measures of resting heart rate, sensation seeking, and aggression. Across the two samples of females there was no evidence for the relationships of baseline heart rate with sensation seeking or with aggression that have been consistently shown in males. Boredom susceptibility and disinhibition subscales of sensation seeking were consistently significantly correlated with aggression. The lack of significance and the small effect sizes indicate that other mechanisms are also at work in affecting aggression in young adult women. Finally, it is important to consider the type of sensation seeking in relation to aggression, as only boredom susceptibility and disinhibition were consistently replicated across samples. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
White, Nathan; Reeves, Tom; Cheese, Phil; Stennett, Christopher; Wood, Andrew; Cook, Malcolm; Syanco Ltd Team; Cranfield University Team; DE&S, MoD Abbey Wood Team
2017-06-01
Thin, cylindrical samples of HMX/HTPB formulations with solids loadings from 85-95% by mass have been heated at 1 °C/minute until a reaction occurred in the new dual-window cook-off test vehicle. The test vehicle has captured the response of these formulations and shown the influence of variables such as confinement, heating rate and sample size. Live imaging of the heated samples revealed that, as with pure nitramine samples, three distinct stages of change take place during heating: phase changes; melting and slow, flameless decomposition with production of gaseous intermediates; and finally burning of the gaseous intermediates with a luminous flame. In addition, the binder appears to undergo decomposition before the HMX, darkening along the edge closest to the thermal input before the HMX melts. Prior to violent reaction, flame speeds were measured at approximately 30 m/s for high confinement, falling by 2-3 orders of magnitude when confinement is lowered. The melting point of HMX has been observed below the widely reported value of 220 °C, and this requires further investigation.
NASA Astrophysics Data System (ADS)
Radulović, Vladimir; Kolšek, Aljaž; Fauré, Anne-Laure; Pottin, Anne-Claire; Pointurier, Fabien; Snoj, Luka
2018-03-01
The Fission Track Thermal Ionization Mass Spectrometry (FT-TIMS) method is considered as the reference method for particle analysis in the field of nuclear Safeguards for measurements of isotopic compositions (fissile material enrichment levels) in micrometer-sized uranium particles collected in nuclear facilities. An integral phase in the method is the irradiation of samples in a very well thermalized neutron spectrum. A bilateral collaboration project was carried out between the Jožef Stefan Institute (JSI, Slovenia) and the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA, France) to determine whether the JSI TRIGA reactor could be used for irradiations of samples for the FT-TIMS method. This paper describes Monte Carlo simulations, experimental activation measurements and test irradiations performed in the JSI TRIGA reactor, firstly to determine the feasibility, and secondly to design and qualify a purpose-built heavy water based irradiation device for FT-TIMS samples. The final device design has been shown experimentally to meet all the required performance specifications.
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Sample size calculations for case-control studies
This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal or continuous exposure. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
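For orientation, a much simpler calculation than the package implements (an unadjusted two-proportion formula for a binary exposure, with no confounder adjustment) looks like this; the exposure prevalence, odds ratio and control-to-case ratio are example inputs only.

```python
from math import ceil
from scipy.stats import norm

def cases_needed(p0, odds_ratio, ratio=1.0, alpha=0.05, power=0.80):
    """Approximate number of cases for an unmatched case-control study with a
    binary exposure (simple two-proportion formula, no confounder adjustment).

    p0         : exposure prevalence among controls
    odds_ratio : exposure odds ratio to detect
    ratio      : controls per case
    """
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # exposure among cases
    pbar = (p1 + ratio * p0) / (1 + ratio)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (za * ((1 + 1 / ratio) * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p0 * (1 - p0) / ratio) ** 0.5) ** 2
    return ceil(num / (p1 - p0) ** 2)

print(cases_needed(p0=0.30, odds_ratio=2.0, ratio=1.0))   # cases (equal controls)
```

Adjusting for confounders in a multivariate logistic model, as the package does, generally inflates this unadjusted figure.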
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
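The two-stage logic of the 'basic' scheme can be sketched with a small simulation; the herd size, sample sizes, prevalence threshold and stopping margin below are invented for illustration and are not the Welfare Quality values.

```python
import numpy as np

rng = np.random.default_rng(7)

def classify_farm(true_prev, herd=200, full_n=60, threshold=0.15,
                  stop_margin=0.05):
    """Two-stage sequential classification of a farm as 'bad' (prevalence above
    a threshold). Illustrative numbers only, not the Welfare Quality protocol.
    """
    herd_status = rng.random(herd) < true_prev
    first = rng.choice(herd, size=full_n // 2, replace=False)
    p1 = herd_status[first].mean()
    # stop early only when the first-stage estimate is far from the threshold
    if abs(p1 - threshold) > stop_margin:
        return p1 > threshold, full_n // 2
    rest = np.setdiff1d(np.arange(herd), first)
    second = rng.choice(rest, size=full_n - full_n // 2, replace=False)
    p_all = herd_status[np.concatenate([first, second])].mean()
    return p_all > threshold, full_n

results = [classify_farm(true_prev=0.25) for _ in range(10_000)]
correct = np.mean([flag for flag, _ in results])     # this farm should be flagged
avg_n = np.mean([n for _, n in results])
print(f"correctly classified: {correct:.1%}, average sample size: {avg_n:.1f}")
```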
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊS estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
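The underlying power calculation (a one-sample t-test at 80% power and α = 0.05, solved at the effect-size point estimate and its CI bounds) can be reproduced with a generic tool such as statsmodels; the effect sizes below are placeholders, not the study's estimates.

```python
from statsmodels.stats.power import TTestPower

# Sample size for a one-sample t-test at 80% power, alpha = 0.05 (two-sided),
# evaluated at an assumed effect-size point estimate and its 95% CI bounds.
analysis = TTestPower()
for label, es in [("ES lower", 0.45), ("ES point", 0.65), ("ES upper", 1.00)]:
    n = analysis.solve_power(effect_size=es, nobs=None, alpha=0.05,
                             power=0.80, alternative="two-sided")
    print(f"{label}: n = {n:.1f}")
```

Running the calculation at the CI bounds as well as the point estimate is what produces the wide interval estimates on projected sample size reported in the abstract.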
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.
2011-01-01
Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
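A minimal sketch of the simplest such model (constant occupancy ψ and constant per-visit detection probability p, fitted by maximum likelihood to repeated visits) is given below on simulated data; it is not the LTRMP analysis, which also allowed habitat covariates and mixture terms.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)

# Simulate repeated presence/absence surveys: psi = occupancy, p = detection
n_sites, n_visits, psi_true, p_true = 150, 3, 0.6, 0.4
occupied = rng.random(n_sites) < psi_true
detections = rng.binomial(n_visits, p_true * occupied)   # 0 when unoccupied

def neg_loglik(params):
    psi, p = expit(params)                    # keep probabilities in (0, 1)
    k = n_visits
    ll_detected = (np.log(psi) + detections * np.log(p)
                   + (k - detections) * np.log(1 - p))
    ll_missed = np.log(psi * (1 - p) ** k + 1 - psi)      # never detected
    ll = np.where(detections > 0, ll_detected, ll_missed)
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
naive = np.mean(detections > 0)               # ignores imperfect detection
print(f"naive occupancy {naive:.2f} vs corrected {psi_hat:.2f} (p = {p_hat:.2f})")
```

The naive estimate is biased low whenever p < 1, which is the bias the abstract quantifies at 19-36% for the two plant species.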
The U.S. Geological Survey coal quality (COALQUAL) database version 3.0
Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.
2015-12-21
Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
Dong, Bin; Li, Guang; Yang, Xiaogang; Chen, Luming; Chen, George Z
2018-04-01
(NH 4 )Fe 2 (PO 4 ) 2 (OH)·2H 2 O samples with different morphology are successfully synthesized via two-step synthesis route - ultrasonic-intensified impinging stream pre-treatment followed by hydrothermal treatment (UIHT) method. The effects of the adoption of ultrasonic-intensified impinging stream pre-treatment, reagent concentration (C), pH value of solution and hydrothermal reaction time (T) on the physical and chemical properties of the synthesised (NH 4 )Fe 2 (PO 4 ) 2 (OH)·2H 2 O composites and FePO 4 particles were systematically investigated. Nano-seeds were firstly synthesized using the ultrasonic-intensified T-mixer and these nano-seeds were then transferred into a hydrothermal reactor, heated at 170 °C for 4 h. The obtained samples were characterized by utilising XRD, BET, TG-DTA, SEM, TEM, Mastersizer 3000 and FTIR, respectively. The experimental results have indicated that the particle size and morphology of the obtained samples are remarkably affected by the use of ultrasonic-intensified impinging stream pre-treatment, hydrothermal reaction time, reagent concentration, and pH value of solution. When such (NH 4 )Fe 2 (PO 4 ) 2 (OH)·2H 2 O precursor samples were transformed to FePO 4 products after sintering at 650 °C for 10 h, the SEM images have clearly shown that both the precursor and the final product still retain their monodispersed spherical microstructures with similar particle size of about 3 μm when the samples are synthesised at the optimised condition. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Linzhi; Zhao Jingzhe, E-mail: zhaojz@hnu.edu.cn; Wang Yi
Tungsten oxide hydrate (WO{sub 3}.H{sub 2}O) nanoplates and flower-like assemblies were successfully synthesized via a simple aqueous method. The effects of reaction parameters in solution on the preparation were studied. Nanoplates and nanoflowers can be selectively prepared by changing the amount of H{sub 2}C{sub 2}O{sub 4}. In-situ assembly of nanoplates to nanoflowers was also proposed for the formation of assembled nanostructures. In addition, the reaction time and temperature have important effects on the sizes of the as-obtained samples. Crystal structure, morphology, and composition of final nanostructures were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). Optical properties ofmore » the synthesized samples and the growth mechanism were studied by UV-vis detection. Degradation experiments of Rhodamine B (RhB) were also performed on samples of nanoplates and nanoflowers under visible light illumination. Nanoflower sample exhibited preferable photocatalytic property to nanoplate sample. - Graphical abstract: The oxalic acid has a key role for the structure of WO{sub 3}.H{sub 2}O evolution from plates to flowers and the dehydration process of WO{sub 3}.2H{sub 2}O to WO{sub 3}.H{sub 2}O. Highlights: > Tungsten oxides hydrate was synthesized via a simple aqueous method. > The size of WO{sub 3}.H{sub 2}O was controlled by the reaction time and temperature. > The assembly of WO{sub 3}.H{sub 2}O nanoplates to nanoflowers was achieved with higher H{sub 2}C{sub 2}O{sub 4}/Na{sub 2}WO{sub 4} ratio. > Oxalic acid has a key role in the dehydration process of WO{sub 3}.2H{sub 2}O to WO{sub 3}.H{sub 2}O.« less
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N
2017-01-01
Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were less accurate when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
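A sketch of fitting the two-parameter IDEA model is shown below, taking the commonly quoted form in which incident cases at generation t are (R0/(1+d)^t)^t, with R0 and the discount d as the two parameters; the weekly counts are simulated from the model itself rather than taken from the Canadian surveillance data.

```python
import numpy as np
from scipy.optimize import curve_fit

def idea(t, r0, d):
    """IDEA model: incident cases at generation t modelled as (R0 / (1 + d)^t)^t."""
    return (r0 / (1.0 + d) ** t) ** t

# Simulated counts by generation (one generation ~ one serial interval), not real data
rng = np.random.default_rng(9)
t = np.arange(1, 16)
cases = rng.poisson(idea(t, r0=2.0, d=0.03))

params, _ = curve_fit(idea, t, cases, p0=[1.5, 0.02], maxfev=10_000)
r0_hat, d_hat = params

proj = idea(np.arange(1, 31), r0_hat, d_hat)
print(f"R0 ~ {r0_hat:.2f}, d ~ {d_hat:.3f}")
print(f"projected peak at generation {proj.argmax() + 1}, "
      f"final size ~ {proj.sum():.0f} cases")
```

Refitting as each new week of counts arrives, as the study did, amounts to rerunning the curve_fit step on the growing series and re-projecting the peak and final size.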
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
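The data-simulation step can be illustrated with an individual-level reverse catalytic simulation in which the seroconversion rate drops at a change point before the survey; the rates, change point and survey size below are illustrative, and the published calculator's logistic power approximation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_serostatus(ages, scr_old, scr_new, rho, change_point):
    """Simulate serostatus under a reverse catalytic model in which the
    seroconversion rate (SCR) dropped from scr_old to scr_new `change_point`
    years before the survey. Discrete one-year steps for simplicity.
    """
    status = np.zeros(len(ages), dtype=bool)
    for i, age in enumerate(ages):
        sero = False
        for years_before_survey in range(int(age), 0, -1):
            lam = scr_new if years_before_survey <= change_point else scr_old
            if not sero and rng.random() < 1 - np.exp(-lam):
                sero = True                      # seroconversion
            elif sero and rng.random() < 1 - np.exp(-rho):
                sero = False                     # seroreversion
        status[i] = sero
    return status

n = 600                                          # candidate survey sample size
ages = rng.integers(1, 80, size=n)
status = simulate_serostatus(ages, scr_old=0.10, scr_new=0.02,
                             rho=0.01, change_point=10)
for lo, hi in [(1, 10), (11, 20), (21, 40), (41, 79)]:
    sel = (ages >= lo) & (ages <= hi)
    print(f"ages {lo}-{hi}: seroprevalence {status[sel].mean():.2f}")
```

Repeating such simulations over candidate sample sizes, and testing the change-point model against a stable-SCR model on each simulated survey, is the basis of the power curves the calculator approximates.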
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
Maisano Delser, Pierpaolo; Corrigan, Shannon; Hale, Matthew; Li, Chenhong; Veuille, Michel; Planes, Serge; Naylor, Gavin; Mona, Stefano
2016-01-01
Population genetics studies on non-model organisms typically involve sampling few markers from multiple individuals. Next-generation sequencing approaches open up the possibility of sampling many more markers from fewer individuals to address the same questions. Here, we applied a target gene capture method to deep sequence ~1000 independent autosomal regions of a non-model organism, the blacktip reef shark (Carcharhinus melanopterus). We devised a sampling scheme based on the predictions of theoretical studies of metapopulations to show that sampling few individuals, but many loci, can be extremely informative to reconstruct the evolutionary history of species. We collected data from a single deme (SID) from Northern Australia and from a scattered sampling representing various locations throughout the Indian Ocean (SCD). We explored the genealogical signature of population dynamics detected from both sampling schemes using an ABC algorithm. We then contrasted these results with those obtained by fitting the data to a non-equilibrium finite island model. Both approaches supported an Nm value ~40, consistent with philopatry in this species. Finally, we demonstrate through simulation that metapopulations exhibit greater resilience to recent changes in effective size compared to unstructured populations. We propose an empirical approach to detect recent bottlenecks based on our sampling scheme. PMID:27651217
NASA Astrophysics Data System (ADS)
Xu, H. J.; Xu, Y. B.; Jiao, H. T.; Cheng, S. F.; Misra, R. D. K.; Li, J. P.
2018-05-01
Fe-6.5 wt% Si steel hot bands with different initial grain size and texture were obtained through different annealing treatments. These bands were then warm rolled and annealed. The evolution of microstructure and texture, particularly the formation of the recrystallization texture, was analyzed. The results indicated that initial grain size and texture had a significant effect on texture evolution and magnetic properties. Large initial grains led to coarse deformed grains with dense and long shear bands after warm rolling. Such long shear bands resulted in a growth advantage for {1 1 3} 〈3 6 1〉 oriented grains during recrystallization. On the other hand, sharp {11 h} 〈1, 2, 1/h〉 (α∗-fiber) texture in the coarse-grained sample led to dominant {1 1 2} 〈1 1 0〉 texture after warm rolling. Such {1 1 2} 〈1 1 0〉 deformed grains provided numerous nucleation sites for {1 1 3} 〈3 6 1〉 oriented grains during subsequent recrystallization. These {1 1 3} 〈3 6 1〉 grains were confirmed to exhibit a growth advantage over γ-fiber grains. As a result, significant {1 1 3} 〈3 6 1〉 texture was developed and unfavorable γ-fiber texture was inhibited in the final annealed sheet. Both these aspects led to superior magnetic properties in the sample with the largest initial grain size. The magnetic induction B8 was 1.36 T and the high frequency core loss P10/400 was 17.07 W/kg.
High-resolution Imaging of PHIBSS z ˜ 2 Main-sequence Galaxies in CO J = 1 → 0
NASA Astrophysics Data System (ADS)
Bolatto, A. D.; Warren, S. R.; Leroy, A. K.; Tacconi, L. J.; Bouché, N.; Förster Schreiber, N. M.; Genzel, R.; Cooper, M. C.; Fisher, D. B.; Combes, F.; García-Burillo, S.; Burkert, A.; Bournaud, F.; Weiss, A.; Saintonge, A.; Wuyts, S.; Sternberg, A.
2015-08-01
We present Karl Jansky Very Large Array observations of the CO J=1-0 transition in a sample of four z ~ 2 main-sequence galaxies. These galaxies are in the blue sequence of star-forming galaxies at their redshift, and are part of the IRAM Plateau de Bure HIgh-z Blue Sequence Survey which imaged them in CO J=3-2. Two galaxies are imaged here at high signal-to-noise, allowing determinations of their disk sizes, line profiles, molecular surface densities, and excitation. Using these and published measurements, we show that the CO and optical disks have similar sizes in main-sequence galaxies, and in the galaxy where we can compare CO J=1-0 and J=3-2 sizes we find these are also very similar. Assuming a Galactic CO-to-H2 conversion, we measure surface densities of Σ_mol ~ 1200 M_⊙ pc^-2 in projection and estimate Σ_mol ~ 500-900 M_⊙ pc^-2 deprojected. Finally, our data yield velocity-integrated Rayleigh-Jeans brightness temperature line ratios r31 of approximately unity. In addition to the similar disk sizes, the very similar line profiles in J=1-0 and J=3-2 indicate that both transitions sample the same kinematics, implying that their emission is coextensive. We conclude that in these two main-sequence galaxies there is no evidence for significant excitation gradients or a large molecular reservoir that is diffuse or cold and not involved in active star formation. We suggest that r31 in very actively star-forming galaxies is likely an indicator of how well-mixed the star formation activity and the molecular reservoir are.
Moderating the Covariance Between Family Member’s Substance Use Behavior
Eaves, Lindon J.; Neale, Michael C.
2014-01-01
Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox. PMID:24647834
Experimental analysis of surface finish in normal conducting cavities
NASA Astrophysics Data System (ADS)
Zarrebini-Esfahani, A.; Aslaninejad, M.; Ristic, M.; Long, K.
2017-10-01
A normal conducting 805 MHz test cavity with a built-in, button-shaped sample is used to conduct a series of surface treatment experiments. The button enhances the local fields and influences the likelihood of an RF breakdown event. Because the buttons are small compared with the whole cavity surface, they allow practical investigations of the effects of cavity surface preparation in relation to RF breakdown. Manufacturing techniques and steps for preparing the buttons to improve the surface quality are described in detail. It was observed that even after the final stage of the surface treatment, defects could still be found on the cavity surface.
Investigating the Use of Ultrasound for Evaluating Aging Wiring Insulation
NASA Technical Reports Server (NTRS)
Madaras, Eric I.; Anastasi, Robert F.
2001-01-01
This paper reviews our initial efforts to investigate the use of ultrasound to evaluate wire insulation. Our initial model was a solid conductor with heat shrink tubing applied. In this model, various wave modes were identified. Subsequently, several aviation classes of wires (MIL-W-81381, MIL-W-22759/34, and MIL-W-22759/87) were measured. The wires represented polyimide and ethylene-tetrafluoroethylene insulations, and combinations of polyimide and fluoropolymer plastics. Wire gages of 12, 16, and 20 AWG sizes were measured. Finally, samples of these wires were subjected to high temperatures for short periods of time to cause the insulation to degrade. Subsequent measurements indicated easily detectable changes.
Mineralogical, chemical and toxicological characterization of urban air particles.
Čupr, Pavel; Flegrová, Zuzana; Franců, Juraj; Landlová, Linda; Klánová, Jana
2013-04-01
Systematic characterization of the morphological, mineralogical, chemical and toxicological properties of various size fractions of atmospheric particulate matter was a main focus of this study, together with an assessment of the human health risks they pose. Even though near-ground atmospheric aerosols have been a subject of intensive research in recent years, data integrating the chemical composition of particles and health risks are still scarce, and the particle size aspect has not yet been properly addressed. Filling this gap, however, is necessary for reliable risk assessment. A high volume ambient air sampler equipped with a multi-stage cascade impactor was used for size-specific particle collection, and all 6 fractions were subject to detailed characterization of the chemical (PAHs) and mineralogical composition of the particles, their mass size distribution and the genotoxic potential of organic extracts. Finally, the risk level for inhalation exposure associated with the carcinogenic character of the studied PAHs has been assessed. The finest fraction (<0.45 μm) exhibited the highest mass, highest active surface, highest amount of associated PAHs and also the highest direct and indirect genotoxic potentials in our model air sample. Risk assessment of the inhalation scenario indicates significant cancer risk values in the PM1.5 size fraction. This new approach proved to be a useful tool for human health risk assessment in areas with significant levels of air dust concentration. Copyright © 2013 Elsevier Ltd. All rights reserved.
Predominance of single bacterial cells in composting bioaerosols
NASA Astrophysics Data System (ADS)
Galès, Amandine; Bru-Adan, Valérie; Godon, Jean-Jacques; Delabre, Karine; Catala, Philippe; Ponthieux, Arnaud; Chevallier, Michel; Birot, Emmanuel; Steyer, Jean-Philippe; Wéry, Nathalie
2015-04-01
Bioaerosols emitted from composting plants have become an issue because of their potential harmful impact on public or workers' health. Accurate knowledge of the particle-size distribution in bioaerosols emitted from open-air composting facilities during operational activity is a requirement for improved modeling of air dispersal. In order to investigate the aerodynamic diameter of bacteria in composting bioaerosols, this study used an Electrical Low Pressure Impactor for sampling and quantitative real-time PCR for quantification. Quantitative PCR results show that the size of bacteria peaked between 0.95 μm and 2.4 μm and that the geometric mean diameter of the bacteria was 1.3 μm. In addition, total microbial cells were counted by flow cytometry, which revealed that these qPCR results corresponded to single whole bacteria. Finally, the enumeration of cultivable thermophilic microorganisms allowed us to set the upper size limit for fragments at an aerodynamic diameter of ∼0.3 μm. Particle-size distributions of microbial groups previously used to monitor composting bioaerosols were also investigated. In the collected bioaerosols, the aerodynamic diameters of the actinomycete Saccharopolyspora rectivirgula and relatives, and of the fungus Aspergillus fumigatus, appeared to be consistent with a majority of individual cells. Together, this study provides the first culture-independent data on the particle-size distribution of composting bioaerosols and reveals that bacteria emitted from open-air composting facilities were predominantly airborne as single cells.
The Interrupted Power Law and the Size of Shadow Banking
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range, that is “interrupted” by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an “interrupted” Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate–which we propose as a shadow banking index–compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity. PMID:24728096
NASA Astrophysics Data System (ADS)
Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Roldin, P.; Williams, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R. M.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.
2010-12-01
Particle mobility size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found wide application in atmospheric aerosol research. However, comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards with respect to the instrumental set-up, measurement mode, data evaluation as well as quality control. This article results from several instrument intercomparison workshops conducted within the European infrastructure project EUSAAR (European Supersites for Atmospheric Aerosol Research). Under controlled laboratory conditions, the number size distributions from 20 to 200 nm determined by mobility size spectrometers of different design are within an uncertainty range of ±10% after correcting internal particle losses, while below and above this size range the discrepancies increased. Instruments with identical design agreed within ±3% in the peak number concentration when all settings were done carefully. Technical standards were developed for a minimum requirement of mobility size spectrometry for atmospheric aerosol measurements. Technical recommendations are given for atmospheric measurements including continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyser. In cooperation with EMEP (European Monitoring and Evaluation Program), a new uniform data structure was introduced for saving and disseminating the data within EMEP. This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements including all relevant instrument parameters as well as a complete documentation of all data transformation and correction steps. These technical and data structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.
Cutts, Felicity T; Izurieta, Hector S; Rhoda, Dale A
2013-01-01
Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis.
Study of the structure of concrete with C-14-PMMA method
NASA Astrophysics Data System (ADS)
Muuri, E.; Tikkanen, O.; Ikonen, J.; Siitari-Kauppi, M.; Autio, M.
2017-12-01
Cement is used widely in the construction industry and, additionally, in the waste management industry for the stabilization of hazardous materials because of its capacity for both physical and chemical immobilization of contaminants. Cementitious materials have also been suggested as backfilling materials, for example, in deep geological repositories for the final disposal of spent nuclear fuel. As a result, it is necessary to study the structure of these materials under different conditions. In this study, the structure of concrete was studied with the polymethylmethacrylate (PMMA) method in samples from the construction industry. The spatial distribution of porosity was characterized using this autoradiography method, which involves the impregnation of a dried rock sample of hand-specimen size with 14C-labelled methyl methacrylate (MMA) in vacuum, thermally initiated polymerization, film and digital autoradiography, and porosity calculation routines relying on digital image processing techniques [1]. Three main components are clearly visible on the PMMA autoradiographs of the studied concrete samples because of their contrasting porosity (Fig. 1). The cement ground matrix shows a uniform porosity of 27.0 ± 4.7%. The other two phases are mineral grains and bubbles, which are classified into four categories according to their size and quantity. Fig. 1: the scanned surface of the concrete sample (left) and the corresponding autoradiograph (right), where the darkest areas correspond to higher activity and thus higher porosity; the exposure time for the autoradiogram was three days. [1] J. Sammaljärvi, L. Jokelainen, J. Ikonen, M. Siitari-Kauppi, Eng. Geol. 135-136, 52-59 (2012).
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
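The key processing step described above, filling non-sampled areas by spatial interpolation before aggregating to higher administrative levels, can be illustrated with a simple inverse-distance-weighting sketch. This is not the authors' implementation; the coordinates, cattle counts and the IDW power parameter below are hypothetical.

```python
import numpy as np

def idw_interpolate(sample_xy, sample_values, target_xy, power=2.0):
    """Inverse-distance-weighted estimates at unsampled locations (illustrative only)."""
    estimates = []
    for tx, ty in target_xy:
        d = np.hypot(sample_xy[:, 0] - tx, sample_xy[:, 1] - ty)
        if np.any(d == 0):                      # target coincides with a sample point
            estimates.append(sample_values[d == 0][0])
            continue
        w = 1.0 / d ** power                    # closer samples get larger weights
        estimates.append(np.sum(w * sample_values) / np.sum(w))
    return np.array(estimates)

# Hypothetical parish centroids (km) and sampled cattle counts
rng = np.random.default_rng(1)
sampled_xy = rng.uniform(0, 100, size=(30, 2))
sampled_counts = rng.poisson(500, size=30).astype(float)
unsampled_xy = rng.uniform(0, 100, size=(10, 2))

filled = idw_interpolate(sampled_xy, sampled_counts, unsampled_xy)
district_estimate = np.concatenate([sampled_counts, filled]).sum()
print(f"district total after interpolation: {district_estimate:.0f}")
```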
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
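As a concrete illustration of how α, power, variance and effect size jointly determine sample size, the following sketch applies the standard normal-approximation formula for comparing two means; the numerical values are hypothetical and not taken from the module.

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided Type 1 error
    z_beta = norm.ppf(power)            # power = 1 - beta
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Hypothetical: detect a 5-unit difference with SD 12 at 80% power, alpha = 0.05
print(round(n_per_group(delta=5, sd=12)))               # larger SD or smaller delta -> larger n
print(round(n_per_group(delta=5, sd=12, power=0.90)))   # higher power -> larger n
```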
Estimation of sample size and testing power (Part 4).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-01-01
Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests under a design of one factor with two levels, including the estimation formulas and their realization both directly from the formulas and through the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents worked examples, which should guide researchers in implementing the repetition principle during the research design phase.
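For readers without access to SAS's POWER procedure, the quantitative-data case of the one-factor, two-level design can be reproduced with statsmodels; the effect size and design values below are hypothetical examples, not those used in the article.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for per-group n: standardized effect size d = 0.5, alpha = 0.05, power = 0.80
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                         ratio=1.0, alternative='two-sided')
print(f"required n per group: {n:.1f}")

# Or solve for the power achieved with a fixed sample of 40 per group
p = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05,
                         ratio=1.0, alternative='two-sided')
print(f"power with n = 40 per group: {p:.2f}")
```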
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W
2008-04-24
The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study on the extent of LD in cattle, analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structures including other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values for LD at large physical distances are close to those for non-syntenic SNPs. The minor allele frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with the inter-marker interval. LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
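To make the LD measures concrete, the sketch below computes D, D' and r² for a pair of biallelic SNPs directly from hypothetical haplotype frequencies; it follows the textbook definitions rather than the pipeline used in the study, and does not cover the Dvol variant.

```python
import numpy as np

def ld_measures(hap_counts):
    """hap_counts: haplotype counts [AB, Ab, aB, ab] for a pair of biallelic loci."""
    pAB, pAb, paB, pab = np.asarray(hap_counts, dtype=float) / sum(hap_counts)
    pA, pB = pAB + pAb, pAB + paB              # marginal allele frequencies
    D = pAB - pA * pB                          # basic LD coefficient
    if D >= 0:
        d_max = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        d_max = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = abs(D) / d_max                   # Lewontin's D'
    r2 = D ** 2 / (pA * (1 - pA) * pB * (1 - pB))
    return D, d_prime, r2

# Hypothetical counts from 400 sampled chromosomes
print(ld_measures([180, 60, 70, 90]))
```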
Advantages of Unfair Quantum Ground-State Sampling.
Zhang, Brian Hu; Wagenbreth, Gene; Martin-Mayor, Victor; Hen, Itay
2017-04-21
The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive albeit coveted goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground-states of spin glasses differently than thermal samplers. We demonstrate that (i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers, (ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution, and (iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms.
Osmani, Feroz A; Thakkar, Savyasachi; Ramme, Austin; Elbuluk, Ameer; Wojack, Paul; Vigdorchik, Jonathan M
2017-12-01
Preoperative total hip arthroplasty templating can be performed on radiographs using acetate prints or digital viewing software, or with computed tomography (CT) images. Our hypothesis is that 3D templating is more precise and accurate in cup size prediction than 2D templating with acetate prints or digital templating software. Data collected from 45 patients undergoing robotic-assisted total hip arthroplasty were used to compare cup sizes templated on acetate prints and with OrthoView software against MAKOplasty software, which uses CT scans. Kappa analysis determined the strength of agreement between each templating modality and the final size used. t tests compared mean cup-size variance from the final size for each templating technique. The intraclass correlation coefficient (ICC) determined the reliability of digital and acetate planning by comparing predictions of the operating surgeon and a blinded adult reconstructive fellow. The Kappa values for CT-guided, digital, and acetate templating against the final size were 0.974, 0.233, and 0.262, respectively. Both digital and acetate templating significantly overpredicted cup size compared to CT-guided methods (P < .001). There was no significant difference between digital and acetate templating (P = .117). The ICC values for digital and acetate templating were 0.928 and 0.931, respectively. CT-guided planning more accurately predicts hip implant cup size than digital and acetate templating, which significantly overpredict. CT-guided templating may also lead to better outcomes due to bone stock preservation, because the predicted cup size is smaller and more accurate than the digital and acetate predictions.
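The agreement statistic used here is the standard chance-corrected kappa; as a rough illustration (with entirely hypothetical cup sizes, not the study data), agreement between a templated and a final implanted size can be computed as follows.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical templated vs. final acetabular cup sizes (mm) for 10 patients
templated = [50, 52, 54, 52, 56, 50, 48, 54, 52, 56]
final     = [50, 50, 54, 52, 54, 50, 48, 54, 50, 56]

kappa = cohen_kappa_score(templated, final)   # chance-corrected agreement
print(f"Cohen's kappa: {kappa:.2f}")
```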
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
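The central claim, that power can be held constant across huge increases in the number of tests at a comparatively modest cost in sample size, can be checked with a short normal-approximation sketch using a Bonferroni-corrected significance level; the standardized effect size used below is hypothetical.

```python
from scipy.stats import norm

def n_required(alpha, power, effect_size):
    """Per-group n for a two-sample comparison at standardized effect size d."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) / effect_size) ** 2

d, power, family_alpha = 0.3, 0.80, 0.05          # hypothetical values
n_single = n_required(family_alpha, power, d)
for m in (1, 10, 1_000_000, 10_000_000):           # number of hypothesis tests
    n_m = n_required(family_alpha / m, power, d)   # Bonferroni-corrected per-test alpha
    print(f"{m:>10} tests: n per group = {n_m:7.0f} ({n_m / n_single:.2f}x the single-test n)")
```

Running this reproduces the pattern described in the abstract: roughly a 70% larger sample for 10 tests than for one, but only about 13% more for ten million tests than for one million.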
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
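The angle behavior described here can be explored numerically; the following sketch is a single-spike toy example with hypothetical dimension, sample size and spike strength, measuring the angle between the leading sample eigenvector and the true population eigendirection.

```python
import numpy as np

rng = np.random.default_rng(0)

def leading_angle(dim, n, spike):
    """Angle (degrees) between the leading sample eigenvector and the true spike direction e1."""
    sd = np.ones(dim)
    sd[0] = np.sqrt(spike)                     # single-spike covariance: diag(spike, 1, ..., 1)
    X = rng.standard_normal((n, dim)) * sd     # rows are i.i.d. samples
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    v = Vt[0]                                  # leading sample eigendirection
    cos_angle = abs(v[0])                      # e1 = (1, 0, ..., 0) is the population direction
    return np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

for dim in (100, 1000, 5000):                  # dimension grows while n and spike stay fixed
    print(dim, round(leading_angle(dim, n=50, spike=100), 1))
```

As the ratio of dimension to (sample size × spike size) grows, the printed angle increases, illustrating the conical behavior discussed in the abstract.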
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
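A minimal version of the simulation described above, scattering benthic items, dropping circular core samplers, and comparing the estimated with the true density, can be sketched as follows; the item density, core area and number of cores are hypothetical and only the random (not clumped) case is shown.

```python
import numpy as np

rng = np.random.default_rng(42)

def core_sample_density(true_density, core_area_cm2=50.0, n_cores=20, plot_m=10.0):
    """Estimate benthic item density (items/m^2) from circular core samples."""
    n_items = rng.poisson(true_density * plot_m ** 2)
    items = rng.uniform(0, plot_m, size=(n_items, 2))            # randomly distributed items
    radius = np.sqrt(core_area_cm2 / 1e4 / np.pi)                # core radius in metres
    centres = rng.uniform(radius, plot_m - radius, size=(n_cores, 2))
    counts = [np.sum(np.hypot(*(items - c).T) <= radius) for c in centres]
    area_sampled = n_cores * core_area_cm2 / 1e4
    return sum(counts) / area_sampled, np.mean(np.array(counts) > 0)

density_hat, detection = core_sample_density(true_density=750)
print(f"estimated density: {density_hat:.0f} items/m^2, detection probability: {detection:.2f}")
```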
Milling of rice grains: effects of starch/flour structures on gelatinization and pasting properties.
Hasjim, Jovin; Li, Enpeng; Dhital, Sushil
2013-01-30
Starch gelatinization and flour pasting properties were determined and correlated with four different levels of starch structures in rice flour, i.e. flour particle size, degree of damaged starch granules, whole molecular size, and molecular branching structure. Onset starch-gelatinization temperatures were not significantly different among all flour samples, but peak and conclusion starch-gelatinization temperatures were significantly different and were strongly correlated with the flour particle size, indicating that rice flour with larger particle size has a greater barrier for heat transfer. There were slight differences in the enthalpy of starch gelatinization, which are likely associated with the disruption of crystalline structure in starch granules by the milling processes. Flours with volume-median diameter ≥56 μm did not show a defined peak viscosity in the RVA viscogram, possibly due to the presence of native protein and/or cell-wall structure stabilizing the swollen starch granules against the rupture caused by shear during heating. Furthermore, RVA final viscosity of flour was strongly correlated with the degree of damage to starch granules, suggesting the contribution of granular structure, possibly in swollen form. The results from this study allow the improvement in the manufacture and the selection criteria of rice flour with desirable gelatinization and pasting properties. Copyright © 2012 Elsevier Ltd. All rights reserved.
Alexander, Jeffrey A; Maeng, Daniel; Casalino, Lawrence P; Rittenhouse, Diane
2013-04-01
To examine the effect of public reporting (PR) and financial incentives tied to quality performance on the use of care management practices (CMPs) among small- and medium-sized physician groups. Survey data from The National Study of Small and Medium-sized Physician Practices were used. Primary data collection was also conducted to assess community-level PR activities. The final sample included 643 practices engaged in quality reporting; about half of these practices were subject to PR. We used a treatment effects model. The instrumental variables were the community-level variables that capture the level of PR activity in each community in which the practices operate. (1) PR is associated with increased use of CMPs, but the estimate is not statistically significant; (2) financial incentives are associated with greater use of CMPs; (3) practices' awareness/sensitivity to quality reports is positively related to their use of CMPs; and (4) combined PR and financial incentives jointly affect CMP use to a greater degree than either of these factors alone. Small- to medium-sized practices appear to respond to PR and financial incentives by greater use of CMPs. Future research needs to investigate the appropriate mix and type of incentive arrangements and quality reporting. © Health Research and Educational Trust.
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
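Conclusion (2), basing a simple random sample size on the observed mean and SD of a single field, corresponds to the usual formula for estimating a mean to within a chosen relative error; the moisture values in this sketch are hypothetical.

```python
from scipy.stats import norm

def srs_sample_size(mean, sd, rel_error=0.10, confidence=0.95):
    """Simple-random-sample size to estimate a mean within +/- rel_error * mean."""
    z = norm.ppf(0.5 + confidence / 2)
    margin = rel_error * mean                  # absolute half-width of the interval
    return (z * sd / margin) ** 2

# Hypothetical field: mean soil moisture 22% by volume, SD 6%
print(round(srs_sample_size(mean=22.0, sd=6.0)))                   # within +/-10% of the mean
print(round(srs_sample_size(mean=22.0, sd=6.0, rel_error=0.05)))   # tighter target -> more samples
```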
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
Microstructure of Hot Rolled 1.0C-1.5Cr Bearing Steel and Subsequent Spheroidization Annealing
NASA Astrophysics Data System (ADS)
Li, Zhen-Xing; Li, Chang-Sheng; Zhang, Jian; Li, Bin-Zhou; Pang, Xue-Dong
2016-07-01
The effect of final rolling temperature and cooling process on the microstructure of 1.0C-1.5Cr bearing steel was studied, and the relationship between the microstructure parameters and subsequent spheroidization annealing was analyzed. The results indicate that the increase of the water-cooling rate after hot rolling and the decrease of the final cooling temperature are beneficial to reducing both the pearlite interlamellar spacing and the pearlite colony size. Prior austenite grain size can be reduced by decreasing the final rolling temperature and increasing the water-cooling rate. When the final rolling temperature was controlled at around 1103 K (830 °C), the subsequent cooling rate was set to 10 K/s, and the final cooling temperature was 953 K (680 °C), the precipitation of grain boundary cementite was suppressed effectively and many rod-like cementite particles were observed in the microstructure. Interrupted quenching was employed to study the dissolution behavior of cementite during austenitizing at 1073 K (800 °C). The decrease of both pearlite interlamellar spacing and pearlite colony size could facilitate the initial dissolution and fragmentation of cementite lamellae, which could shorten the spheroidization time. The fragmentation of grain boundary cementite tends to form large-size undissolved cementite particles. With the increase of austenitizing time from 20 to 300 minutes, the mean diameter of undissolved cementite particles increases, indicating that cementite particle coarsening and cementite dissolution occur simultaneously. The mean diameter of cementite particles in the final spheroidized microstructure is proportional to the mean diameter of undissolved cementite particles formed during partial austenitizing.
76 FR 56141 - Notice of Intent To Request New Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... level surveys of similar scope and size. The sample for each selected community will be strategically... of 2 hours per sample community. Full Study: The maximum sample size for the full study is 2,812... questionnaires. The initial sample size for this phase of the research is 100 respondents (10 respondents per...
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Performance of terahertz metamaterials as high-sensitivity sensor
NASA Astrophysics Data System (ADS)
He, Yanan; Zhang, Bo; Shen, Jingling
2017-09-01
A high-sensitivity sensor based on the resonant transmission characteristics of terahertz (THz) metamaterials was investigated, with the proposal and fabrication of rectangular bar arrays of THz metamaterials exhibiting a period of 180 μm on a 25 μm thick flexible polyimide. Varying the size of the metamaterial structure revealed that the length of the rectangular unit modulated the resonant frequency, which was verified by both experiment and simulation. The sensing characteristics upon varying the surrounding media in the sample were tested by simulation and experiment. Changing the surrounding medium from air to alcohol or oil produced resonant frequency redshifts of 80 GHz or 150 GHz, respectively, which indicates that the sensor possessed a high sensitivity of 667 GHz per unit of refractive index. Finally, the influence of the sample substrate thickness on the sensor sensitivity was investigated by simulation. These results may serve as a reference for future sensor design.
Theofilou, Paraskevi; Togas, Constantinos; Vasilopoulou, Chrysoula; Minos, Christos; Zyga, Sofia; Tzitzikos, Giorgos
2015-04-13
There is clear evidence of a link between dialysis adequacy (as measured by urea kinetic modeling or the urea reduction ratio) and such important clinical outcomes as morbidity and mortality. Evidence regarding the relationship between dialysis adequacy and quality of life (QOL) outcomes as well as adherence is less clear. The present paper is a study protocol which aims to answer the following research question: what is the impact of dialysis adequacy on QOL and adherence in a sample of hemodialysis patients? The final sample size will be around 100 patients undergoing hemodialysis. Each subject's QOL and adherence will be measured using the following instruments: i) the Missoula-VITAS quality of life index 25; ii) the multidimensional scale of perceived social support and iii) the simplified medication adherence questionnaire. Dialysis adequacy is expected to be related to QOL and adherence scores.
Confocal multispot microscope for fast and deep imaging in semicleared tissues
NASA Astrophysics Data System (ADS)
Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio
2018-02-01
Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, leveraging its robustness to scattering, albeit at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope type geared toward the imaging of semicleared tissue, combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.
Compact energy dispersive X-ray microdiffractometer for diagnosis of neoplastic tissues
NASA Astrophysics Data System (ADS)
Sosa, C.; Malezan, A.; Poletti, M. E.; Perez, R. D.
2017-08-01
An energy dispersive X-ray microdiffractometer with capillary optics has been developed for characterizing breast cancer. The employment of low divergence capillary optics helps to reduce the setup size to a few centimeters, while providing a lateral spatial resolution of 100 μm. The system angular calibration and momentum transfer resolution were assessed by a detailed study of a polycrystalline reference material. The performance of the system was tested by means of the analysis of tissue-equivalent samples previously characterized by conventional X-ray diffraction. In addition, a simplified correction model for an appropriate comparison of the diffraction spectra was developed and validated. Finally, the system was employed to evaluate normal and neoplastic human breast samples, in order to determine their X-ray scatter signatures. The initial results indicate that the use of this compact energy dispersive X-ray microdiffractometer combined with a simplified correction procedure is able to provide additional information to breast cancer diagnosis.
Lavado Contador, J F; Maneta, M; Schnabel, S
2006-10-01
The capability of artificial neural network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easily obtained digital models of topographic and land-cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into the factors controlling soil moisture distribution, and also to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest that the methods are efficient in forecasting soil moisture, useful as a tool to assess the optimum number of field samples, and highlight the importance of the variables selected in explaining the final map obtained.
Choosing an Optimal Database for Protein Identification from Tandem Mass Spectrometry Data.
Kumar, Dhirendra; Yadav, Amit Kumar; Dash, Debasis
2017-01-01
Database searching is the preferred method for protein identification from digital spectra of mass-to-charge ratios (m/z) detected for protein samples through mass spectrometers. The search database is one of the major influencing factors in discovering proteins present in the sample and thus in deriving biological conclusions. In most cases the choice of search database is arbitrary. Here we describe common search databases used in proteomic studies and their impact on the final list of identified proteins. We also elaborate upon factors like the composition and size of the search database that can influence the protein identification process. In conclusion, we suggest that the choice of database depends on the type of inferences to be derived from proteomics data. However, making additional efforts to build a compact and concise database for a targeted question should generally be rewarding in achieving confident protein identifications.
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowing the right sample size lets us judge whether the results published in medical papers rest on a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which formula to use, we must define the type of study: a prevalence study, an estimation of a mean, or a comparative study. In this paper we explain some basic statistical concepts and describe four simple examples of sample size estimation.
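For the comparative (two-mean) case mentioned above, the standard normal-approximation formula gives n per group = 2(z_{1-α/2} + z_{1-β})²σ²/δ². A minimal sketch (the effect size and standard deviation below are illustrative, not taken from the article):

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two means."""
    z_a = norm.ppf(1 - alpha / 2)   # quantile for the type I error
    z_b = norm.ppf(power)           # quantile for power (1 - type II error)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 5-unit difference with SD 10, alpha = 0.05, 80% power
print(n_per_group(delta=5, sigma=10))  # about 63 per group
```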
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
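The diminishing marginal returns can be made concrete with a standard power calculation: doubling the group size repeatedly adds less and less power. A minimal sketch (the effect size of 0.5 is an arbitrary illustration, and statsmodels is just one convenient tool for the calculation):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
prev = 0.0
for n in (10, 20, 40, 80, 160, 320):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n per group = {n:4d}  power = {p:.3f}  gain over previous = {p - prev:+.3f}")
    prev = p
```

The gain from each doubling shrinks steadily once power is moderately high, which is the marginal-returns argument made above.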
What is the optimum sample size for the study of peatland testate amoeba assemblages?
Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J
2017-10-01
Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.
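The richness and diversity indices compared across sample sizes are easy to compute from a per-taxon count vector; a minimal sketch with hypothetical counts (not data from the study):

```python
import numpy as np

def richness(counts):
    """Number of taxa present in the count."""
    counts = np.asarray(counts)
    return int((counts > 0).sum())

def simpson_diversity(counts):
    """Simpson diversity 1 - sum(p_i^2) for a vector of per-taxon counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

sample = [40, 25, 10, 5, 3, 1, 1]  # hypothetical testate amoeba counts per taxon
print(richness(sample), round(simpson_diversity(sample), 3))
```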
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
Abbreviations (excerpt): AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater…; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic…. Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample…
ERIC Educational Resources Information Center
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
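The exact power behind such a calculation follows from the noncentral t distribution: with n observations per group and standardized effect d, the noncentrality parameter is d·sqrt(n/2) and the degrees of freedom are 2n − 2. A minimal sketch that searches for the smallest n reaching a target power (values are illustrative, not taken from the article):

```python
from scipy.stats import t, nct

def power_two_sample_t(d, n, alpha=0.05):
    """Exact two-sided power of the two-sample t test, equal group sizes."""
    df = 2 * n - 2
    ncp = d * (n / 2) ** 0.5
    t_crit = t.ppf(1 - alpha / 2, df)
    # P(T > t_crit) + P(T < -t_crit) under the noncentral t distribution
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

def required_n(d, target_power=0.80, alpha=0.05):
    n = 2
    while power_two_sample_t(d, n, alpha) < target_power:
        n += 1
    return n

print(required_n(d=0.5))  # about 64 per group
```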
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling unit shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
77 FR 39385 - Receipts-Based, Small Business Size Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
AGENCY: Nuclear Regulatory Commission. ACTION: Direct final rule. SUMMARY: The NRC is increasing its receipts-based, small business size standard from $6.5 million to $7 million to conform to the standard set by the Small Business Administration (SBA)…
NASA Astrophysics Data System (ADS)
Duncan, D.; Davis, M. B.; Allison, M. A.; Gulick, S. P.; Goff, J. A.; Saustrup, S.
2012-12-01
The University of Texas Institute for Geophysics, part of the Jackson School of Geosciences, annually offers an intensive three-week marine geology and geophysics field course during the spring-summer intersession. Now in year six, the course provides hands-on instruction and training for graduate and upper-level undergraduate students in data acquisition, processing, interpretation, and visualization. Techniques covered include high-resolution seismic reflection, CHIRP sub-bottom profiling, multibeam bathymetry, sidescan sonar, several types of sediment coring, grab sampling, and the sedimentology of resulting seabed samples (e.g., core description, grain size analysis, x-radiography). Students participate in an initial period of classroom instruction designed to communicate the geological context of the field area (which changes each year) along with theoretical and technical background on each field method. The class then travels to the Gulf Coast for a week of at-sea field work. Our field sites at Port Aransas and Galveston, Texas, and Grand Isle, Louisiana, have provided ideal locations for students to investigate coastal and sedimentary processes of the Gulf Coast and continental shelf through application of geophysical techniques. In the field, students rotate between two research vessels: one, the 22' aluminum-hulled R/V Lake Itasca, owned and operated by UTIG, is used principally for multibeam bathymetry, sidescan sonar, and sediment sampling; the other, NOAA's R/V Manta or the R/V Acadiana, operated by the Louisiana Universities Marine Consortium, is used primarily for high-resolution seismic reflection, CHIRP sub-bottom profiling, multibeam bathymetry, gravity coring, and vibrocoring. While at sea, students assist with survey design and learn instrument setup, acquisition parameters, data quality control, and safe instrument deployment and retrieval. In teams of three, students work in onshore field labs preparing sediment samples for particle size analysis and performing initial post-processing of geophysical data. During the course's final week, teams return to the classroom where they integrate, interpret, and visualize data in a final project using industry-standard software such as Focus, Landmark, Caris, and Fledermaus. The course concludes with a series of professional-level final presentations and discussions with academic and industry supporters in which students examine the geologic history and sedimentary processes of the studied area of the Gulf Coast continental shelf. After completion, students report a greater understanding of marine geology and geophysics gained through the course's intensive, hands-on, team approach and low instructor-to-student ratio (12 students, three faculty, and three teaching assistants). This course satisfies field experience requirements for some degree programs and thus provides a unique alternative to land-based field courses.
Electrical and magnetic properties of nano-sized magnesium ferrite
NASA Astrophysics Data System (ADS)
T, Smitha; X, Sheena; J, Binu P.; Mohammed, E. M.
2015-02-01
Nano-sized magnesium ferrite was synthesized using a sol-gel technique. Structural characterization was carried out using an X-ray diffractometer and a Fourier transform infrared spectrometer, and a vibrating sample magnetometer was used to record the magnetic measurements. XRD analysis reveals that the prepared sample is single-phase without any impurity, and the calculated average crystallite size is 19 nm. FTIR analysis confirmed the spinel structure of the prepared sample. The magnetic measurements show that the sample is ferromagnetic with a high degree of isotropy; hysteresis loops were traced at 100 K and 300 K. DC electrical resistivity measurements show the semiconducting nature of the sample.
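Crystallite sizes of this order are usually obtained from XRD peak broadening via the Scherrer equation, D = Kλ/(β·cosθ); the abstract does not state the exact procedure, so the sketch below is a generic illustration with assumed peak parameters rather than the authors' data:

```python
import numpy as np

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)), with beta in radians.

    Defaults assume Cu K-alpha radiation and a shape factor K of 0.9.
    """
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * np.cos(theta))

# Hypothetical (311) spinel reflection: 2-theta = 35.5 deg, FWHM = 0.45 deg
print(round(scherrer_size_nm(0.45, 35.5), 1), "nm")
```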
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimation by the bootstrap procedure is presented for the comparison of two parallel-design arms with continuous data, for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the features of these data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the beginning, and to employ, for each bootstrap sample, the same statistical method as will be used in the subsequent statistical analysis, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
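A minimal sketch of the bootstrap power estimation described above, assuming a historical pilot data set is available for each arm (the arrays below are placeholders, not real trial data); each bootstrap replicate resamples n observations per arm with replacement and applies the Wilcoxon rank-sum (Mann-Whitney) test:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Placeholder "historical" pilot data for the two arms (skewed, non-normal)
pilot_a = rng.lognormal(mean=0.0, sigma=0.8, size=60)
pilot_b = rng.lognormal(mean=0.4, sigma=0.8, size=60)

def bootstrap_power(n_per_arm, n_boot=2000, alpha=0.05):
    """Fraction of bootstrap replicates in which the Wilcoxon rank-sum test rejects."""
    rejections = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / n_boot

for n in (20, 40, 80):
    print(n, bootstrap_power(n))
```

The smallest n per arm that reaches the target power (e.g., 0.80) would then be taken as the estimated sample size.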
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan
2017-01-01
Background: Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. Methods: We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015–2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts, and best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results: The 2015–2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits); lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but projections based only on pre-peak data were less accurate. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions: A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
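The IDEA model gives the expected incidence at elapsed time t (in serial intervals) as I(t) = (R0/(1+d)^t)^t, which becomes linear in t after taking log(I)/t and can therefore be refitted each week by ordinary least squares. A minimal sketch (the weekly counts are invented placeholders, not the Canadian surveillance data):

```python
import numpy as np

# Placeholder weekly confirmed-case counts, t measured in serial intervals
t = np.arange(1, 13, dtype=float)
cases = np.array([3, 5, 9, 16, 25, 38, 52, 60, 58, 45, 30, 17], dtype=float)

# IDEA model: I(t) = (R0 / (1 + d)**t)**t, so log(I)/t = log(R0) - t*log(1 + d),
# a straight line in t that can be fitted by ordinary least squares.
slope, intercept = np.polyfit(t, np.log(cases) / t, 1)
r0_hat = np.exp(intercept)
d_hat = np.exp(-slope) - 1.0

# Project the fitted curve forward and sum it for an approximate epidemic final size
t_proj = np.arange(1, 41)
proj = (r0_hat / (1.0 + d_hat) ** t_proj) ** t_proj
print(f"R0 = {r0_hat:.2f}, d = {d_hat:.3f}, projected final size = {proj.sum():.0f}")
```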
Sample Size in Qualitative Interview Studies: Guided by Information Power.
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit
2015-11-27
Sample sizes must be determined in qualitative studies just as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model in which these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.
Ghotbi, Adam Ali; Kjaer, Andreas; Nepper-Christensen, Lars; Ahtarovski, Kiril Aleksov; Lønborg, Jacob Thomsen; Vejlstrup, Niels; Kyhl, Kasper; Christensen, Thomas Emil; Engstrøm, Thomas; Kelbæk, Henning; Holmvang, Lene; Bang, Lia E; Ripa, Rasmus Sejersten; Hasbak, Philip
2018-06-01
Determining infarct size and myocardial salvage in patients with ST-segment elevation myocardial infarction (STEMI) is important when assessing the efficacy of new reperfusion strategies. We investigated whether rest 82Rb-PET myocardial perfusion imaging can estimate area at risk, final infarct size, and myocardial salvage index when compared to cardiac SPECT and magnetic resonance (CMR). Twelve STEMI patients were injected with 99mTc-sestamibi intravenously immediately prior to reperfusion. SPECT, 82Rb-PET, and CMR imaging were performed post-reperfusion and at a 3-month follow-up, and an automated algorithm determined area at risk, final infarct size, and hence myocardial salvage index. SPECT, CMR, and PET were performed 2.2 ± 0.5, 34 ± 8.5, and 32 ± 24.4 h after reperfusion, respectively. Mean (± SD) area at risk was 35.2 ± 16.6%, 34.7 ± 11.3%, and 28.1 ± 16.1% of the left ventricle (LV) for SPECT, CMR, and PET, respectively (P = 0.04 for difference). Mean final infarct size estimates were 12.3 ± 15.4%, 13.7 ± 10.4%, and 11.9 ± 14.6% of the LV for SPECT, CMR, and PET imaging, respectively (P = 0.72). Myocardial salvage indices were 0.64 ± 0.33 (SPECT), 0.65 ± 0.20 (CMR), and 0.63 ± 0.28 (PET) (P = 0.78). 82Rb-PET underestimates area at risk in patients with STEMI when compared to SPECT and CMR. However, our findings suggest that PET imaging seems feasible for assessing the clinically important parameters of final infarct size and myocardial salvage index, although with great variability, in a selected STEMI population with large infarcts. These findings should be confirmed in a larger population.
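The myocardial salvage index used here is conventionally defined as (area at risk − final infarct size) / area at risk; a minimal sketch applying that definition to the mean values quoted above (the study averages per-patient indices, so the ratio of means only approximates the reported 0.63-0.65):

```python
def salvage_index(area_at_risk_pct_lv, infarct_size_pct_lv):
    """Myocardial salvage index = (area at risk - final infarct size) / area at risk."""
    return (area_at_risk_pct_lv - infarct_size_pct_lv) / area_at_risk_pct_lv

# Mean values reported in the abstract (%LV), used only as an illustration;
# per-patient averaging in the study explains the small mismatch with 0.63-0.65.
for name, aar, fis in [("SPECT", 35.2, 12.3), ("CMR", 34.7, 13.7), ("PET", 28.1, 11.9)]:
    print(name, round(salvage_index(aar, fis), 2))
```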
NASA Astrophysics Data System (ADS)
Amerioun, M. H.; Ghazi, M. E.; Izadifard, M.
2018-03-01
In this work, CuInS2 (CIS2) layers were first deposited by the sol-gel method on aluminum and polyethylene terephthalate (PET) as flexible substrates and on glass and soda lime glass (SLG) as rigid substrates. The samples were then analyzed by X-ray diffractometry (XRD) and atomic force microscopy (AFM) to investigate their crystal structures and surface roughness, while I-V curve measurements and a Seebeck effect setup were used to characterize their electrical properties. The XRD data obtained for the CIS2 layers show that all the prepared samples are single phase with a preferred orientation that is substrate-dependent, and the samples grown on the rigid substrates had larger crystallite sizes. The optical measurements indicate that the band gap energy depends on the substrate type. The measured Seebeck coefficient showed that the carriers were of p-type in all the samples. According to the AFM images, the surface roughness also varied among the CIS2 layers on different substrates. In this regard, the type of substrate can be an important parameter for the final performance of the fabricated CIS2 cells.
Understanding fluid transport through the multiscale pore network of a natural shale
NASA Astrophysics Data System (ADS)
Davy, Catherine A.; Nguyen Kim, Thang; Song, Yang; Troadec, David; Blanchenet, Anne-Marie; Adler, Pierre M.
2017-06-01
The pore structure of a natural shale is obtained by three imaging means. Micro-tomography results are extended to provide the spatial arrangement of the minerals and pores present at a voxel size of 700 nm (the macroscopic scale). FIB/SEM provides a 3D representation of the porous clay matrix on the so-called mesoscopic scale (10-20 nm); a connected pore network, devoid of cracks, is obtained for two samples out of five, while the pore network is connected through cracks for two other samples out of five. Transmission Electron Microscopy (TEM) is used to visualize the pore space with a typical pixel size of less than 1 nm and a porosity ranging from 0.12 to 0.25. On this scale, in the absence of 3D images, the pore structure is reconstructed by using a classical technique, which is based on truncated Gaussian fields. Permeability calculations are performed with the Lattice Boltzmann Method on the nanoscale, on the mesoscale, and on the combination of the two. Upscaling is finally done (by a finite volume approach) on the bigger macroscopic scale. Calculations show that, in the absence of cracks, the contribution of the nanoscale pore structure on the overall permeability is similar to that of the mesoscale. Complementarily, the macroscopic permeability is measured on a centimetric sample with a neutral fluid (ethanol). The upscaled permeability on the macroscopic scale is in good agreement with the experimental results.
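The upscaling idea can be illustrated, very crudely, with the classical Cardwell-Parsons bounds on a grid of local permeabilities: arithmetic averaging within each cross-sectional layer followed by harmonic averaging along the flow direction gives an upper bound, and the reverse ordering (harmonic along each flow column, then arithmetic across columns) gives a lower bound. The sketch below is only a stand-in for the authors' Lattice Boltzmann/finite-volume workflow, applied to a hypothetical permeability grid:

```python
import numpy as np
from scipy.stats import hmean

rng = np.random.default_rng(1)

# Hypothetical 3D grid of local permeabilities (m^2), e.g. per-voxel mesoscale results
k = rng.lognormal(mean=np.log(1e-20), sigma=1.0, size=(20, 20, 20))

# Flow along axis 0:
# upper bound: arithmetic mean within each cross-sectional layer, then harmonic mean
#              of the layers in series along the flow direction
# lower bound: harmonic mean along each flow column, then arithmetic mean across columns
upper = hmean(k.mean(axis=(1, 2)))
lower = np.mean(hmean(k, axis=0))
print(f"effective permeability bounds: {lower:.3e} - {upper:.3e} m^2")
```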
NASA Astrophysics Data System (ADS)
Kobayashi, Kurima; Nimura, You-ta; Urushibata, Kimiko; Hayakawa, Kazuo
2018-04-01
We prepared five Nd2Fe14B sintered magnets with similar saturation polarizations (Js) of 1.38-1.43 T and anisotropy fields (Ha) of 6.76-8.52 T, but different mean grain sizes (DAV) of 3.1-8.4 μm in diameter and clearly different coercivities (μ0Hc) of 0.8-1.6 T. The observed difference in coercivity could not be explained by the Kronmüller equation, because the samples had similar Ha values and, apart from DAV, similar chemical compositions and microstructures resulting from the similar preparation methods. The Hc values themselves, however, are inversely proportional to DAV. During demagnetization after magnetization in a 5 T pulse field, domain wall motion (DWM) was observed with our step method in all samples except that with μ0Hc = 1.6 T. The DWM was also confirmed by susceptibility measurements using a custom-built vibrating sample magnetometer, and DWM was generated in the reproduced multi-domain regions (RMDR) during demagnetization. The magnitude of DWM, expressed as a polarization change in the RMDR, was inversely proportional to the coercivity of the samples. Therefore, the propagation of the nucleated region through the grain boundary, which corresponds to the expansion process described in previous studies, appears to differ between samples, caused first by the difference in DAV and second by the grain boundary state, which varied with the final annealing temperature.
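The reported inverse proportionality between coercivity and grain size can be visualized with a simple fit of μ0Hc against 1/DAV; the five (DAV, μ0Hc) pairs below are made-up values spanning the stated ranges (3.1-8.4 μm, 0.8-1.6 T), not the measured data:

```python
import numpy as np

# Hypothetical (grain size in um, coercivity in T) pairs spanning the reported ranges
d_av = np.array([3.1, 4.2, 5.5, 6.8, 8.4])
mu0_hc = np.array([1.6, 1.35, 1.15, 0.95, 0.8])

# Fit mu0*Hc = a * (1 / D_AV) + b
slope, intercept = np.polyfit(1.0 / d_av, mu0_hc, 1)
print(f"mu0*Hc = {slope:.2f}/D_AV + {intercept:.2f}  (D_AV in um, Hc in T)")
```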
Kamath, S. U.; Pemiah, B.; Rajan, K. S.; Krishnaswamy, S.; Sethuraman, S.; Krishnan, U. M.
2014-01-01
Rasasindura is a mercury-based nanopowder synthesized from natural products through mechanothermal processing. It has been used in the Ayurvedic system of medicine since time immemorial for various therapeutic purposes such as rejuvenation and the treatment of syphilis and genital disorders. Rasasindura is said to be composed of mercury, sulphur and organic moieties derived from the decoction of plant extracts used during its synthesis, yet there is little scientific understanding of the preparation process so far. Though metallic mercury is incorporated deliberately for therapeutic purposes, it certainly raises toxicity concerns, and the lack of gold standards in the manufacturing of such drugs leads to variation in the chemical composition of the final product. The objective of the present study was to assess the physicochemical properties of Rasasindura samples from different batches purchased from different manufacturers, to determine the extent of deviation, and to gauge its impact on human health. Modern characterization techniques were employed to analyze particle size and morphology, surface area, zeta potential, elemental composition, crystallinity, thermal stability and degradation. The average particle size of the samples observed through scanning electron microscopy ranged from 5 to 100 nm, and the mercury content was found to be between 84 and 89% by elemental analysis. Despite batch-to-batch and manufacturer-to-manufacturer variations in the physicochemical properties, all the samples contained mercury in the form of HgS. These differences in the physicochemical properties may ultimately affect its biological outcome. PMID:25593382