ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining the appropriate sample size for Welch's F test when unequal variances are expected. Given a specified maximum deviation in population means, and using quantiles of the F and t distributions, there is no need to specify a noncentrality parameter, and the approximate sample size needed is easy to estimate…
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to cases of unequal variances, non-normality, and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
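As a hedged illustration (not the authors' sample-size formulas), Yuen's statistic itself can be sketched in a few lines: each group is trimmed by a fixed proportion, and a Welch-style statistic is formed from the trimmed means and winsorized sums of squares. The 20% trim and the pure-Python layout are choices made here for clarity.

```python
import math

def yuen_statistic(x, y, trim=0.2):
    """Yuen's two-sample trimmed-mean test: returns (t, df).

    Sketch under the usual definitions: trimmed means in the numerator,
    winsorized sums of squares in the standard error, Welch-type df.
    """
    def group(xs):
        n = len(xs)
        k = int(trim * n)                  # values trimmed from each tail
        s = sorted(xs)
        trimmed = s[k:n - k]
        h = len(trimmed)                   # effective sample size
        tmean = sum(trimmed) / h
        # winsorize: clamp the tails to the retained extremes
        w = [min(max(v, s[k]), s[n - 1 - k]) for v in s]
        wmean = sum(w) / n
        ssw = sum((v - wmean) ** 2 for v in w)
        d = ssw / (h * (h - 1))            # squared standard-error term
        return tmean, d, h

    m1, d1, h1 = group(x)
    m2, d2, h2 = group(y)
    t = (m1 - m2) / math.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df
```

Identical samples give t = 0; in practice |t| would be compared against the t distribution with the returned (non-integer) df.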
How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.
Hittner, James B; May, Kim
2012-01-01
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
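The approximation described above is simple enough to state as code. This is a sketch of the formula r ≈ Z/√N from the abstract, converting the one-tailed p-value to Z via the inverse standard normal CDF; the function name is mine.

```python
from math import sqrt
from statistics import NormalDist

def r_from_z(p_one_tailed: float, n_total: int) -> float:
    """Pearson r-from-Z approximation: Z / sqrt(N).

    Z is the standard normal deviate for the one-tailed p-value;
    N is the total (pooled) sample size.
    """
    z = NormalDist().inv_cdf(1.0 - p_one_tailed)
    return z / sqrt(n_total)
```

For example, a one-tailed p of 0.05 with N = 100 gives Z ≈ 1.645 and hence r ≈ 0.164. Per the simulation results above, such estimates should be treated with caution at very small N.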
ERIC Educational Resources Information Center
Meyer, J. Patrick; Seaman, Michael A.
2013-01-01
The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
Characterization of the enhancement effect of Na2CO3 on the sulfur capture capacity of limestones.
Laursen, Karin; Kern, Arnt A; Grace, John R; Lim, C Jim
2003-08-15
It has been known for a long time that certain additives (e.g., NaCl, CaCl2, Na2CO3, Fe2O3) can increase the sulfur dioxide capture capacity of limestones. In a recent study we demonstrated that very small amounts of Na2CO3 can be very beneficial for producing sorbents of very high sorption capacities. This paper explores what contributes to these significant increases. Mercury porosimetry measurements of calcined limestone samples reveal a change in the pore size from 0.04-0.2 μm in untreated samples to 2-10 μm in samples treated with Na2CO3, a pore size more favorable for penetration of sulfur into the particles. The change in pore size facilitates reaction with lime grains throughout the whole particle without rapid plugging of pores, avoiding a premature change from a fast chemical reaction to a slow solid-state diffusion-controlled process, as seen for untreated samples. Calcination in a thermogravimetric reactor showed that Na2CO3 increased the rate of calcination of CaCO3 to CaO, an effect that was slightly larger at 825 °C than at 900 °C. Peak-broadening analysis of powder X-ray diffraction data of the raw, calcined, and sulfated samples revealed an unaffected calcite crystallite size (approximately 125-170 nm) but a significant increase in the crystallite size for lime (approximately 60-90 nm to approximately 250-300 nm) and a smaller increase for anhydrite (approximately 125-150 nm to approximately 225-250 nm). The increase in the crystallite and pore size of the treated limestones is attributed to an increase in ionic mobility in the crystal lattice due to formation of vacancies in the crystals when Ca is partly replaced by Na.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study incorporating historical external data. An exact sample size formula based on the Poisson distribution is developed because the detection of rare events is the outcome of interest. The performance of the exact method is compared with its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size than the standard two-group prospective design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared with the approximate method for the study scenarios examined. The proposed hybrid design thus retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes. 2010 John Wiley & Sons, Ltd.
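The paper's two-group hybrid formula is not given in the abstract, but the core idea of exact Poisson sample sizing can be sketched in a simplified one-group form, treating the historical control rate as known. The function names, the one-sided decision rule, and the rates used below are illustrative assumptions, not the article's design.

```python
import math

def pois_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu).

    Naive term-by-term summation; adequate for the small expected
    counts typical of rare-event studies.
    """
    term = math.exp(-mu)
    cdf = 0.0
    for i in range(k):
        if i > 0:
            term *= mu / i
        cdf += term
    return 1.0 - cdf

def exact_poisson_n(rate0, rate1, alpha=0.05, power=0.8, max_n=100000):
    """Smallest follow-up size n (per-unit event rates rate0 < rate1)
    giving the stated power for a one-sided exact Poisson test."""
    for n in range(1, max_n):
        mu0, mu1 = rate0 * n, rate1 * n
        # critical value: smallest k with P(X >= k | mu0) <= alpha
        k = 0
        while pois_sf(k, mu0) > alpha:
            k += 1
        if pois_sf(k, mu1) >= power:
            return n
    return None
```

Because the test is discrete, the achieved alpha at the returned n is at most, and usually below, the nominal level; this conservatism is one reason exact and approximate answers differ.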
Page, G P; Amos, C I; Boerwinkle, E
1998-04-01
We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2θ)^4.
Waschbusch, Robert J.; Selbig, W.R.; Bannerman, Roger T.
1999-01-01
Street-dirt samples were collected using industrial vacuum equipment. Leaves in these samples were separated out, and the remaining sediment was sieved into >250 μm, 250-63 μm, 63-25 μm, and <25 μm size fractions and analyzed for total phosphorus. Approximately 75 percent of the sediment mass resides in the >250 μm size fraction. Less than 5 percent of the mass is found in particle sizes less than 63 μm. The >250 μm size fraction also contributed nearly 50 percent of the total-phosphorus mass, and the leaf fraction contributed an additional 30 percent. In each particle size, approximately 25 percent of the total-phosphorus mass is derived from leaves or other vegetation.
NASA Technical Reports Server (NTRS)
Ryan, R. E., Jr.; Mccarthy, P.J.; Cohen, S. H.; Yan, H.; Hathi, N. P.; Koekemoer, A. M.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A.; O’Connell, R. W.;
2012-01-01
We present the size evolution of passively evolving galaxies at z ≈ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in approximately 40 arcmin² to H < 25 mag. By fitting the 10-band Hubble Space Telescope photometry from 0.22 μm ≲ λ_obs ≲ 1.6 μm with stellar population synthesis models, we simultaneously determine photometric redshift, stellar mass, and a bevy of other population parameters. Based on the six galaxies with published spectroscopic redshifts, we estimate a typical redshift uncertainty of approximately 0.033(1+z). We determine effective radii from Sersic profile fits to the H-band image using an empirical point-spread function. By supplementing our data with published samples, we propose a mass-dependent size evolution model for passively evolving galaxies, where the most massive galaxies (M* ≈ 10^11 solar masses) undergo the strongest evolution from z ≈ 2 to the present. Parameterizing the size evolution as (1 + z)^(-α), we find a tentative scaling of α ≈ (-0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10^9 solar masses), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*-R_e relation for red galaxies.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. 
Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
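To make the bootstrapping idea above concrete, here is a hedged sketch of a parametric Monte Carlo power calculation: fish-tissue Se concentrations are drawn from a gamma distribution whose mean sits a chosen amount above the management threshold, and a one-sided test asks whether the sample mean exceeds the threshold. The fixed coefficient of variation and the normal-approximation test are simplifying assumptions standing in for the paper's empirical mean-variance model and bootstrap procedure.

```python
import random
from statistics import NormalDist, mean, stdev

def bootstrap_power(threshold, delta, n_fish, cv=0.5,
                    reps=2000, alpha=0.05, seed=1):
    """Monte Carlo power to detect a true mean of threshold + delta.

    Concentrations ~ gamma with mean threshold + delta; cv (coefficient
    of variation) is an assumed stand-in for the empirical mean-variance
    relationship. A one-sided z-test on the sample mean approximates
    the decision rule (a t-test would be slightly more conservative).
    """
    rng = random.Random(seed)
    true_mean = threshold + delta
    shape = 1.0 / cv ** 2            # gamma: mean = shape * scale
    scale = true_mean / shape
    zcrit = NormalDist().inv_cdf(1 - alpha)
    hits = 0
    for _ in range(reps):
        sample = [rng.gammavariate(shape, scale) for _ in range(n_fish)]
        m, s = mean(sample), stdev(sample)
        if (m - threshold) / (s / n_fish ** 0.5) > zcrit:
            hits += 1
    return hits / reps
```

Scanning n_fish until the returned power reaches 0.8 reproduces the kind of threshold-dependent sample-size curve the study reports: larger delta or larger n_fish raises power, while higher thresholds (more heterogeneous populations) lower it.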
76 FR 44590 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... health training. This interview will be administered to a sample of approximately 30 owners of construction businesses with 10 or fewer employees from the Greater Cincinnati area. The sample size is based... size experiences the highest fatality rate within construction (U.S. Dept. of Labor, 2008). The need...
ERIC Educational Resources Information Center
Edwards, Lynne K.; Meyers, Sarah A.
Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…
1980-05-01
transects extending approximately 16 kilometers from the mouth of Grays Harbor. Sub-samples were taken for grain size analysis and wood content. The samples were then washed on a 1.0-mm screen to separate benthic organisms from non-living materials. Consideration of the grain size analysis…
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm for an approximating function to the frequency distribution is obtained from a sample of size n. To obtain the approximating function a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram using central B-splines as basis elements are obtained by linear programming. The approximating function has area one and is nonnegative.
Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric
2016-01-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.
Matsui, Shigeyuki
2005-05-15
This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test comparing the empirical event rate function between treatment groups. The formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.
Porosity of the Marcellus Shale: A contrast matching small-angle neutron scattering study
Bahadur, Jitendra; Ruppert, Leslie F.; Pipich, Vitaliy; Sakurovs, Richard; Melnichenko, Yuri B.
2018-01-01
Neutron scattering techniques were used to determine the effect of mineral matter on the accessibility of water and toluene to pores in the Devonian Marcellus Shale. Three Marcellus Shale samples, representing quartz-rich, clay-rich, and carbonate-rich facies, were examined using contrast matching small-angle neutron scattering (CM-SANS) at ambient pressure and temperature. Contrast-matching mixtures of H2O/D2O and of toluene/deuterated toluene were used to probe open and closed pores of these three shale samples. Results show that although the mean pore radius was approximately the same for all three samples, the fractal dimension of the quartz-rich sample was higher than for the clay-rich and carbonate-rich samples, indicating different pore size distributions among the samples. The number density of pores was highest in the clay-rich sample and lowest in the quartz-rich sample. Contrast matching with water and toluene mixtures shows that the accessibility of pores to water and toluene also varied among the samples. In general, water accessed approximately 70–80% of the larger pores (>80 nm radius) in all three samples. At smaller pore sizes (~5–80 nm radius), the fraction of accessible pores decreases. The lowest accessibility to both fluids is at a pore throat size of ~25 nm radius, with the quartz-rich sample exhibiting lower accessibility than the clay- and carbonate-rich samples. The mechanism for this behaviour is unclear, but because the mineralogy of the three samples varies, it is likely that the inaccessible pores in this size range are associated with organics and not a specific mineral within the samples. At even smaller pore sizes (~<2.5 nm radius), in all samples, the fraction of pores accessible to water increases again to approximately 70–80%.
Accessibility to toluene generally follows that of water; however, in the smallest pores (~<2.5 nm radius), accessibility to toluene decreases, especially in the clay-rich sample, which contains about 30% more closed pores than the quartz- and carbonate-rich samples. Results from this study show that the mineralogy of producing intervals within a shale reservoir can affect the accessibility of pores to water and toluene, and these mineralogic differences may affect hydrocarbon storage, production, and hydraulic fracturing characteristics.
Daszkiewicz, Karol; Maquer, Ghislain; Zysset, Philippe K
2017-06-01
Boundary conditions (BCs) and sample size affect the measured elastic properties of cancellous bone. Samples too small to be representative appear stiffer under kinematic uniform BCs (KUBCs) than under periodicity-compatible mixed uniform BCs (PMUBCs). To avoid those effects, we propose to determine the effective properties of trabecular bone using an embedded configuration. Cubic samples of various sizes (2.63, 5.29, 7.96, 10.58 and 15.87 mm) were cropped from [Formula: see text] scans of femoral heads and vertebral bodies. They were converted into [Formula: see text] models and their stiffness tensor was established via six uniaxial and shear load cases. PMUBCs- and KUBCs-based tensors were determined for each sample. "In situ" stiffness tensors were also evaluated for the embedded configuration, i.e. when the loads were transmitted to the samples via a layer of trabecular bone. The Zysset-Curnier model accounting for bone volume fraction and fabric anisotropy was fitted to those stiffness tensors, and the model parameters [Formula: see text] (Poisson's ratio) [Formula: see text] and [Formula: see text] (elastic and shear moduli) were compared between sizes. BCs and sample size had little impact on [Formula: see text]. However, KUBCs- and PMUBCs-based [Formula: see text] and [Formula: see text], respectively, decreased and increased with growing size, though convergence was not reached even for our largest samples. Both BCs produced upper and lower bounds for the in situ values that were almost constant across sample dimensions, thus appearing as an approximation of the effective properties. PMUBCs also seem appropriate for mimicking the trabecular core, but they still underestimate its elastic properties (especially in shear) even for nearly orthotropic samples.
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
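For the 2 × 2 special case the abstract recommends, the asymptotic unconditional McNemar sample size has a closed form in terms of the hypothesized discordant-cell proportions. This is a sketch of that standard large-sample formula; the function name and the numbers used below are illustrative, not the article's worked examples.

```python
from math import ceil, sqrt
from statistics import NormalDist

def mcnemar_n(p01: float, p10: float, alpha=0.05, power=0.80) -> int:
    """Pairs required for the asymptotic unconditional McNemar test.

    p01, p10 are the hypothesized off-diagonal (discordant) proportions
    of the paired 2x2 table; alpha is two-sided.
    """
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    psi = p01 + p10                   # total discordance
    delta = p10 - p01                 # difference in marginal proportions
    n = (za * sqrt(psi) + zb * sqrt(psi - delta ** 2)) ** 2 / delta ** 2
    return ceil(n)
```

For example, discordant proportions of 0.05 and 0.15 give roughly 155 pairs; an exact McNemar calculation for the same scenario would, per the abstract, demand noticeably more.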
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types that differ in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Because of the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.
Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter
2006-06-01
Experiments and simulations from a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ~ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.
Sample size requirements for the design of reliability studies: precision consideration.
Shieh, Gwowen
2014-09-01
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
Size Matters. The Relevance and Hicksian Surplus of Preferred College Class Size
ERIC Educational Resources Information Center
Mandel, Philipp; Susmuth, Bernd
2011-01-01
The contribution of this paper is twofold. First, we examine the impact of class size on student evaluations of instructor performance using a sample of approximately 1400 economics classes held at the University of Munich from Fall 1998 to Summer 2007. We offer confirmatory evidence for the recent finding of a large, highly significant, and…
Approximate sample sizes required to estimate length distributions
Miranda, L.E.
2007-01-01
The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright 2007 by the American Fisheries Society.
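A minimal sketch of the bootstrap logic, assuming a measured length sample stands in for a reference distribution: resample with replacement at increasing n until the bootstrap mean falls within 10% of the reference mean in at least 80% of replicates. The step size, toy data, and function name are choices made here, not the paper's procedure.

```python
import random
from statistics import mean

def n_for_mean_length(reference, tol=0.10, confidence=0.80,
                      seed=7, reps=1000):
    """Smallest n (in steps of 5) at which the bootstrap mean lies
    within tol (a proportion) of the reference mean with the stated
    confidence; returns None if the safety cap is hit."""
    rng = random.Random(seed)
    target = mean(reference)
    lo, hi = target * (1 - tol), target * (1 + tol)
    n = 5
    while n < len(reference) * 50:          # safety cap
        inside = 0
        for _ in range(reps):
            boot = [rng.choice(reference) for _ in range(n)]
            if lo <= mean(boot) <= hi:
                inside += 1
        if inside / reps >= confidence:
            return n
        n += 5
    return None
```

Tightening the tolerance or raising the confidence inflates the required n rapidly, which matches the pattern above: simple statistics such as mean length need far fewer fish than full length-frequency histograms.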
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
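With step-size 1, the successive-approximations procedure described here is essentially the familiar fixed-point (EM-type) update for normal mixtures. A minimal two-component sketch on synthetic data, with unit component variances assumed known for brevity (the paper's general step-size and convergence analysis are not reproduced):

```python
import math
import random

random.seed(0)

# Synthetic data from a two-component normal mixture (means -2 and 2).
data = [random.gauss(-2, 1) for _ in range(300)] + \
       [random.gauss(2, 1) for _ in range(300)]

def normal_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_step(mu1, mu2, w):
    # One likelihood-equation update (the step-size-1 iteration):
    # posterior responsibilities, then weighted-mean re-estimates.
    r = [w * normal_pdf(x, mu1) /
         (w * normal_pdf(x, mu1) + (1 - w) * normal_pdf(x, mu2))
         for x in data]
    w_new = sum(r) / len(data)
    mu1_new = sum(ri * x for ri, x in zip(r, data)) / sum(r)
    mu2_new = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return mu1_new, mu2_new, w_new

mu1, mu2, w = -1.0, 1.0, 0.5
for _ in range(50):
    mu1, mu2, w = em_step(mu1, mu2, w)
```

Starting from a reasonable initial guess, the iterates settle near the true means and mixing weight, illustrating the local convergence the paper establishes.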
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Stachowiak, Jeanne C; Shugard, Erin E; Mosier, Bruce P; Renzi, Ronald F; Caton, Pamela F; Ferko, Scott M; Van de Vreugde, James L; Yee, Daniel D; Haroldsen, Brent L; VanderNoot, Victoria A
2007-08-01
For domestic and military security, an autonomous system capable of continuously monitoring for airborne biothreat agents is necessary. At present, no system meets the requirements for size, speed, sensitivity, and selectivity to warn against and lead to the prevention of infection in field settings. We present a fully automated system for the detection of aerosolized bacterial biothreat agents such as Bacillus subtilis (surrogate for Bacillus anthracis) based on protein profiling by chip gel electrophoresis coupled with a microfluidic sample preparation system. Protein profiling has previously been demonstrated to differentiate between bacterial organisms. With the goal of reducing response time, multiple microfluidic component modules, including aerosol collection via a commercially available collector, concentration, thermochemical lysis, size exclusion chromatography, fluorescent labeling, and chip gel electrophoresis were integrated together to create an autonomous collection/sample preparation/analysis system. The cycle time for sample preparation was approximately 5 min, while total cycle time, including chip gel electrophoresis, was approximately 10 min. Sensitivity of the coupled system for the detection of B. subtilis spores was 16 agent-containing particles per liter of air, based on samples that were prepared to simulate those collected by a wetted cyclone aerosol collector of approximately 80% efficiency operating for 7 min.
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hürlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. © 2002 Elsevier Science (USA).
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
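The O(N^(1/2)) growth can be illustrated with a toy utility in which sampling costs are linear in the trial size n while the expected loss from estimation error for the N future patients decays like 1/n. This is a hypothetical stand-in for the paper's exponential-family utility, not its actual derivation:

```python
import math

# Toy decision-theoretic cost: c per enrolled patient, plus an expected
# per-patient loss from estimation error that shrinks like sigma2/n,
# incurred by the N future patients treated after the trial.
def total_cost(n, N, c=1.0, sigma2=4.0):
    return c * n + N * sigma2 / n

def optimal_n(N):
    # Brute-force minimizer over candidate trial sizes.
    return min(range(1, N), key=lambda n: total_cost(n, N))

# Calculus gives n* = sqrt(N * sigma2 / c), i.e. O(N**0.5):
# a 100-fold larger population needs only a 10-fold larger trial.
n_small, n_large = optimal_n(10_000), optimal_n(1_000_000)
```

With sigma2/c = 4, the minimizers are 2·sqrt(N): 200 for N = 10,000 and 2,000 for N = 1,000,000.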
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
NASA Astrophysics Data System (ADS)
Sañé, E.; Chiocci, F. L.; Basso, D.; Martorelli, E.
2016-10-01
The environmental factors controlling the distribution of different morphologies, sizes and growth forms of rhodoliths in the western Pontine Archipelago have been studied. The analysis of 231 grab samples has been integrated with 68 remotely operated vehicle (ROV) videos (22 h) and a high resolution (<1 m) side scan sonar mosaic of the seafloor surrounding the Archipelago, covering an area of approximately 460 km2. Living rhodoliths were collected in approximately 10% of the grab samples and observed in approximately 30% of the ROV dives. The combination of sediment sampling, video surveys and acoustic facies mapping suggested that the presence of rhodoliths can be associated with the dishomogeneous high backscatter sonar facies and high backscatter facies. Both pralines and unattached branches were found to be the most abundant morphological groups (50% and 41% of samples, respectively), whereas boxwork rhodoliths were less common, accounting only for less than 10% of the total number of samples. Pralines and boxwork rhodoliths were almost equally distributed among large (28%), medium (36%) and small sizes (36%). Pralines generally presented a fruticose growth form (49% of pralines), although pralines with encrusting-warty (36% of pralines) or lumpy (15% of pralines) growth forms were also present. Morphologies, sizes and growth forms vary mainly along the depth gradient. Large rhodoliths with a boxwork morphology are abundant at depth, whereas unattached branches and, in general, rhodoliths with a high protuberance degree are abundant in shallow waters. The exposure to storm waves and bottom currents related to geostrophic circulation could explain the absence of rhodoliths off the eastern side of the three islands forming the Archipelago.
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
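One of the simplest deterministic stand-ins for E[n_t] replaces the random coalescent process by the ODE dn/dt = -n(n-1)/2 (coalescent time units, constant population size). This sketch shows its closed-form solution against a numerical integration; it illustrates the general idea only, not the article's refined approximations:

```python
import math

def lineages_ode(n0, t):
    # Closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0:
    # a deterministic stand-in for E[n_t], the expected number of
    # lineages ancestral to a sample of n0 at coalescent time t.
    return 1.0 / (1.0 - ((n0 - 1.0) / n0) * math.exp(-t / 2.0))

def lineages_euler(n0, t, steps=1_000_000):
    # Forward-Euler integration of the same ODE, as a cross-check.
    n, dt = float(n0), t / steps
    for _ in range(steps):
        n -= n * (n - 1.0) / 2.0 * dt
    return n
```

For example, starting from 100 sampled lineages, only a handful remain after one coalescent time unit, and the closed form and the numerical solution agree closely.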
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n<10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
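Under normality with the population standard deviation treated as known, the probability that the sample mean lies within k standard deviations of the true mean reduces to 2Φ(k·sqrt(n)) - 1. A sketch of this textbook normal-theory version follows; it is not necessarily the authors' exact procedure:

```python
import math

def prob_within(k, n):
    # P(|sample mean - mu| <= k * sigma) for a normal population with
    # known sigma: the mean's standard error is sigma/sqrt(n), so the
    # bound is k*sqrt(n) standard errors wide.
    z = k * math.sqrt(n)
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * phi - 1.0
```

Even with n = 5, the probability of landing within one full standard deviation of the true mean is about 0.97, which is the kind of result that motivates the abstract's conclusion about very small samples.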
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Automated sampling assessment for molecular simulations using the effective sample size
Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.
2010-01-01
To quantify the progress in the development of algorithms and forcefields used in molecular simulations, a general method for the assessment of the sampling quality is needed. Statistical mechanics principles suggest the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable to both traditional dynamics simulations as well as more modern (e.g., multicanonical) approaches. Our procedure is tested in a variety of systems from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample-size estimation in systems for which physical states are not known in advance. PMID:21221418
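The variance-based ESS idea can be sketched on a toy two-state trajectory: for N truly independent samples, the variance of a state-population estimate p̂ is p(1-p)/N, so inverting that relation calibrates the effective sample size of a correlated trajectory. This is a simplified illustration, not the paper's full procedure:

```python
import random
import statistics

random.seed(7)

def correlated_binary_run(length, p=0.3, stay=0.9):
    # Toy two-state trajectory with persistence (autocorrelation):
    # each frame repeats the previous one with probability `stay`,
    # otherwise draws state 1 with probability p.
    x = [1 if random.random() < p else 0]
    for _ in range(length - 1):
        if random.random() < stay:
            x.append(x[-1])
        else:
            x.append(1 if random.random() < p else 0)
    return x

# State population (fraction of frames in state 1) estimated from many
# independent runs; its variance across runs calibrates the ESS.
runs = [correlated_binary_run(2000) for _ in range(300)]
p_hats = [statistics.fmean(r) for r in runs]
p_bar = statistics.fmean(p_hats)
ess = p_bar * (1 - p_bar) / statistics.variance(p_hats)
```

Because frames are persistent, the ESS comes out far below the 2,000 frames per run, which is exactly the gap between raw trajectory length and statistically independent configurations that the method quantifies.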
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
Sample size calculation for studies with grouped survival data.
Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros
2018-06-10
Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.
Thermal probe design for Europa sample acquisition
NASA Astrophysics Data System (ADS)
Horne, Mera F.
2018-01-01
The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to 10 cm deep and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and approximately 1.2 Wh is required to melt 7 cm3 of ice. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized; it would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
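The roughly 1.2 Wh melt-energy figure can be sanity-checked with textbook properties of water ice. The values below are assumptions for the check, not numbers from the mission study: ice density 0.92 g/cm3, a mean heat capacity of about 2 J/(g·K) between a ~100 K Europa surface and the melting point, and a latent heat of fusion of 334 J/g:

```python
# Back-of-envelope check of the ~1.2 Wh figure for melting a 7 cm^3 sample.
# Assumed textbook values, not taken from the mission study itself:
density = 0.92            # g/cm^3, water ice
volume = 7.0              # cm^3 per sample
cp = 2.0                  # J/(g K), rough mean heat capacity of ice
delta_t = 273.0 - 100.0   # K, warming from a ~100 K surface to melting
latent = 334.0            # J/g, heat of fusion of water

mass = density * volume                  # ~6.4 g of ice
joules = mass * (cp * delta_t + latent)  # sensible heat + latent heat
watt_hours = joules / 3600.0             # 1 Wh = 3600 J
```

Sensible and latent heat contribute roughly equally, and the total lands close to the quoted 1.2 Wh.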
[Method for concentration determination of mineral-oil fog in the air of workplace].
Xu, Min; Zhang, Yu-Zeng; Liu, Shi-Feng
2008-05-01
To study a method for determining the concentration of mineral-oil fog in workplace air, four filter media were evaluated: synthetic fabric filter film, beta glass fiber filter film, chronic filter paper and microporous film. Two kinds of dust samplers were used to collect samples: one sampling at a fast flow rate for a short time, the other at a slow flow rate for a long duration. The filter membranes were then weighed on an electronic analytical balance, and the adsorbent ability of the four filter membranes was compared in terms of sampling efficiency and weight gain. At flow rates of 10-20 L/min and sampling times of 10-15 min, the average sampling efficiency of the synthetic fabric filter film was 95.61%, with weight gains of 0.87-2.60 mg; under the same conditions, the average sampling efficiency of the beta glass fiber filter film was 97.57%, with weight gains of 0.75-2.47 mg. At flow rates of 5-10 L/min and sampling times of 10-20 min, the average sampling efficiencies of the chronic filter paper and the microporous film were 48.94% and 63.15%, respectively, with weight gains of 0.75-2.15 mg and 0.23-0.85 mg. At a flow rate of 3.5 L/min and sampling times of 100-166 min, the average sampling efficiencies of the synthetic fabric and beta glass fiber filter films were 94.44% and 93.45%, respectively; the average weight gain was 1.28 mg for the beta glass fiber filter film and 0.78 mg for the synthetic fabric filter film. Under the same conditions, the average sampling efficiencies of the chronic filter paper and the microporous film were 37.65% and 88.21%, respectively, with average weight gains of 4.30 mg and 1.23 mg.
Sampling with synthetic fabric filter film and beta glass fiber filter film is credible, accurate, simple and feasible for determination of the concentration of mineral-oil fog in workplaces.
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
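Setting FIESTA's stochastic-approximation machinery aside, the core parametric-bootstrap idea behind its CIs can be sketched generically: re-draw data sets from the fitted model and re-estimate, rather than relying on an asymptotic standard error. The toy estimator here is the sample variance of a normal model, not FIESTA's heritability-specific sampler:

```python
import random
import statistics

random.seed(3)

# Observed data (toy); the "estimate" of interest is the sample variance.
data = [random.gauss(0, 2) for _ in range(50)]
n = len(data)
mu_hat = statistics.fmean(data)
var_hat = statistics.variance(data)

# Parametric bootstrap: simulate new data sets from the *fitted* model
# and recompute the estimator, building up its sampling distribution.
boot = []
for _ in range(5000):
    resample = [random.gauss(mu_hat, var_hat ** 0.5) for _ in range(n)]
    boot.append(statistics.variance(resample))
boot.sort()

# Percentile 95% CI from the bootstrap distribution.
lo, hi = boot[int(0.025 * 5000)], boot[int(0.975 * 5000)]
```

Because the interval comes from the simulated distribution of the estimator itself, it is asymmetric where the estimator's distribution is, which is precisely what asymptotic SE-based intervals miss near a bounded parameter space.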
A study on modification of nanoporous rice husk silica for hydrophobic nano filter.
Kim, Hee Jin; So, Soo Jeong; Han, Chong Soo
2010-05-01
Nanoporous rice husk silica (RHS) was modified with the alkylsilylation reagents hexamethyldisilazane, diethoxydiphenylsilane, dichlorodimethylsilane and n-octadecyltrimethoxysilane. The silica samples were characterized with a Raman spectrometer, thermogravimetric analyzer, scanning electron microscope, nitrogen adsorption measurements and a solid-state nuclear magnetic resonance spectrometer. Raman spectra of the modified silica showed growth of the C-H stretching and CH3 bending peaks at approximately 3000 cm(-1) and approximately 1500 cm(-1), respectively. Weight losses of 3-5% were observed in the thermogravimetric profiles of the modified silica. The microscopic morphology of RHS, approximately 20 nm primary particles and their aggregates, was largely unchanged by the modification, although the silica particles were linked together in the samples treated with dichlorodimethylsilane or diethoxydiphenylsilane. BET adsorption experiments showed that the modification significantly decreased the mean pore size of the silica from approximately 5 nm to approximately 4 nm, as well as the pore volume from 0.5 cm3/g to 0.4 cm3/g, except in the case of treatment with n-octadecyltrimethoxysilane. 29Si solid-state NMR spectra of the silica samples showed decreases in the relative intensities of the Q2 and Q3 peaks and large increases in Q4 after the modification, except for the case of bulky n-octadecyltrimethoxysilane. From these results, it was concluded that the alkylsilylation reagents reacted with hydroxyl groups both on the silica particles and in the nanopores, while the size of the reagent molecule affected its diffusion and reaction with the hydroxyl groups in the pores.
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
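As a companion to the distributional results above, the point estimate of Cronbach's alpha is simple to compute from an n-subjects x k-items score matrix; this sketch uses the standard item-variance formula only, not the exact or F-approximate distribution functions discussed in the abstract:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for an n-subjects x k-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    Perfectly parallel items give alpha = 1; unrelated items drive it
    toward 0 (or below).
    """
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

A confidence interval around this estimate would then come from the exact weighted-chi-square distribution or the F approximation described in the abstract.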
MaNGA: Target selection and Optimization
NASA Astrophysics Data System (ADS)
Wake, David
2015-01-01
The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
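The Beta-mixture conjugacy used above can be sketched as follows; the pessimistic Beta(2, 8) and optimistic Beta(8, 2) components in the example are hypothetical illustrations, not the priors used by the authors:

```python
from math import exp, lgamma

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def update_beta_mixture(priors, weights, x, n):
    """Posterior of a mixture-of-Betas prior after x responses in n patients.

    Each Beta(a, b) component updates conjugately to Beta(a + x, b + n - x);
    its weight is rescaled by that component's Beta-Binomial marginal
    likelihood (the binomial coefficient is omitted, as it cancels).
    """
    log_marg = [log_beta(a + x, b + n - x) - log_beta(a, b) for a, b in priors]
    m = max(log_marg)  # subtract the max for numerical stability
    raw = [w * exp(l - m) for w, l in zip(weights, log_marg)]
    z = sum(raw)
    post_weights = [r / z for r in raw]
    post_priors = [(a + x, b + n - x) for a, b in priors]
    return post_priors, post_weights

def mixture_mean(priors, weights):
    """Mean response rate under a mixture of Beta components."""
    return sum(w * a / (a + b) for (a, b), w in zip(priors, weights))
```

With 9 responses in 10 patients, nearly all posterior weight shifts to the optimistic clinician's component, which is exactly the behavior that drives the sample size calculations in the paper.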
21 CFR 161.173 - Canned wet pack shrimp in transparent or nontransparent containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (dorsal tract, back vein, or sand vein). (ii) Deveined shrimp containing not less than 95 percent by...) Acceptable quality level (AQL). The maximum percent of defective sample units permitted in a lot that will be accepted approximately 95 percent of the time. (ii) Sampling plans: Acceptable Quality Level 6.5 Lot size...
In situ measurement of particulate number density and size distribution from an aircraft
NASA Technical Reports Server (NTRS)
Briehl, D.
1974-01-01
Commercial particulate measuring instruments were flown aboard the NASA Convair 990. A condensation nuclei monitor was utilized to measure particles larger than approximately 0.003 micrometers in diameter. A specially designed pressurization system was used with this counter so that the sample could be fed into the monitor at cabin altitude pressure. A near-forward light scattering counter was used to measure the number and size distribution of particles in the size range from 0.5 to 5 micrometers and greater in diameter.
Level of Service Program for INDOT Operations : APPENDIX B Sub-District Sample Sizes
DOT National Transportation Integrated Search
2012-01-01
INDOT has used an inspection program named Maintenance Quality Survey (MQS) to perform a statewide inspection of their roadway assets, right-of-way to right-of-way. This inspection requires two two-person teams approximately 18 months to...
NASA Astrophysics Data System (ADS)
Yeti Nuryantini, Ade; Cahya Septia Mahen, Ea; Sawitri, Asti; Wahid Nuryadin, Bebeh
2017-09-01
In this paper, we report on a homemade optical spectrometer using diffraction grating and image processing techniques. This device was designed to produce spectral images that could then be processed by measuring signal strength (pixel intensity) to obtain the light source, transmittance, and absorbance spectra of the liquid sample. The homemade optical spectrometer consisted of: (i) a white LED as a light source, (ii) a cuvette or sample holder, (iii) a slit, (iv) a diffraction grating, and (v) a CMOS camera (webcam). In this study, various concentrations of a carbon nanoparticle (CNP) colloid were used in the particle size sample test. Additionally, a commercial optical spectrometer and a transmission electron microscope (TEM) were used to characterize the optical properties and morphology of the CNPs, respectively. The data obtained using the homemade optical spectrometer, commercial optical spectrometer, and TEM showed similar results and trends. Lastly, the calculation and measurement of CNP size were performed using the effective mass approximation (EMA) and TEM. These data showed that the average nanoparticle sizes were approximately 2.4 nm and 2.5 ± 0.3 nm, respectively. This research provides new insights into the development of a portable, simple, and low-cost optical spectrometer that can be used in nanomaterial characterization for physics undergraduate instruction.
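The transmittance/absorbance processing described above is, per wavelength pixel, a ratio and a logarithm; a minimal sketch (the optional dark-frame correction is our assumption, not a stated feature of the instrument):

```python
from math import log10

def transmittance(sample_intensity, reference_intensity, dark=0.0):
    """T = (I_sample - I_dark) / (I_reference - I_dark) for one pixel.

    reference_intensity is the light-source spectrum through a blank;
    dark is an optional dark-frame (no-light) reading.
    """
    return (sample_intensity - dark) / (reference_intensity - dark)

def absorbance(sample_intensity, reference_intensity, dark=0.0):
    """Beer-Lambert absorbance A = -log10(T)."""
    return -log10(transmittance(sample_intensity, reference_intensity, dark))
```

Applying these two functions across every pixel column of the spectral image yields the transmittance and absorbance spectra the abstract refers to.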
Permeability During Magma Expansion and Compaction
NASA Astrophysics Data System (ADS)
Gonnermann, Helge. M.; Giachetti, Thomas; Fliedner, Céline; Nguyen, Chinh T.; Houghton, Bruce F.; Crozier, Joshua A.; Carey, Rebecca J.
2017-12-01
Plinian lapilli from the 1060 Common Era Glass Mountain rhyolitic eruption of Medicine Lake Volcano, California, were collected and analyzed for vesicularity and permeability. A subset of the samples was deformed at a temperature of 975 °C, under shear and normal stress, and postdeformation porosities and permeabilities were measured. Almost all undeformed samples fall within a narrow range of vesicularity (0.7-0.9), encompassing permeabilities between approximately 10^-15 m^2 and 10^-10 m^2. A percolation threshold of approximately 0.7 is required to fit the data by a power law, whereas a percolation threshold of approximately 0.5 is estimated by fitting connected and total vesicularity using percolation modeling. The Glass Mountain samples completely overlap with a range of explosively erupted silicic samples, and it remains unclear whether the erupting magmas became permeable at porosities of approximately 0.7 or at lower values. Sample deformation resulted in compaction, and vesicle connectivity either increased or decreased. At small strains the permeability of some samples increased, but at higher strains permeability decreased. Samples remain permeable down to vesicularities of less than 0.2, consistent with a potential hysteresis in permeability-porosity between expansion (vesiculation) and compaction (outgassing). We attribute this to retention of vesicle interconnectivity, albeit at reduced vesicle size, as well as bubble coalescence during shear deformation. We provide an equation that approximates the change in permeability during compaction. Based on a comparison with data from effusively erupted silicic samples, we propose that this equation can be used to model the change in permeability during compaction of effusively erupting magmas.
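A common parameterization of the threshold power law mentioned above is k = k0 * (phi - phi_c)^n, which is linear in log space for an assumed threshold phi_c; a sketch using ordinary least squares (the parameterization and the synthetic values in the test are ours, not the Glass Mountain data):

```python
from math import exp, log

def fit_percolation_power_law(phi, k, phi_c):
    """Least-squares fit of log k = log k0 + n * log(phi - phi_c).

    Returns (k0, n) for the model k = k0 * (phi - phi_c)**n, given an
    assumed percolation threshold phi_c strictly below all data points.
    """
    xs = [log(p - phi_c) for p in phi]
    ys = [log(kk) for kk in k]
    n_pts = len(xs)
    xbar = sum(xs) / n_pts
    ybar = sum(ys) / n_pts
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return exp(intercept), slope
```

Refitting with different assumed thresholds (e.g., 0.5 versus 0.7) is one way to explore the ambiguity in phi_c that the abstract describes.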
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally unfeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein
2018-06-01
To study the alumina template pore size distribution as a function of Al thin-film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1 and 2 Å/s with the substrate temperature at 25, 75 or 125 °C. All samples were anodized for 120 s in 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation of the size distribution for samples deposited at room temperature at different rates is roughly 2 nm in both thin-film and porous-template form, but it rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5 and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15 and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain-boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.
Alisa P. Ramakrishnan; Susan Meyer; Daniel J. Fairbanks; Craig E. Coleman
2006-01-01
Bromus tectorum (cheatgrass or downy brome) is an exotic annual weed that is abundant in western USA. We examined variation in six microsatellite loci for 17 populations representing a range of habitats in Utah, Idaho, Nevada and Colorado (USA) and then intensively sampled four representative populations, for a total sample size of approximately 1000 individuals. All...
ERIC Educational Resources Information Center
Bellera, Carine A.; Julien, Marilyse; Hanley, James A.
2010-01-01
The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobo Lapidus, R.; Gates, B
2009-01-01
Supported metals prepared from H3Re3(CO)12 on γ-Al2O3 were treated under conditions that led to various rhenium structures on the support and were tested as catalysts for n-butane conversion in the presence of H2 in a flow reactor at 533 K and 1 atm. After use, two samples were characterized by X-ray absorption edge positions of approximately 5.6 eV (relative to rhenium metal), indicating that the rhenium was cationic and essentially in the same average oxidation state in each. But the Re-Re coordination numbers found by extended X-ray absorption fine structure spectroscopy (2.2 and 5.1) show that the clusters in the two samples were significantly different in average nuclearity despite their indistinguishable rhenium oxidation states. Spectra of a third sample after catalysis indicate approximately Re3 clusters, on average, and an edge position of 4.5 eV. Thus, two samples contained clusters approximated as Re3 (on the basis of the Re-Re coordination number), on average, with different average rhenium oxidation states. The data allow resolution of the effects of rhenium oxidation state and cluster size, both of which affect the catalytic activity; larger clusters and a greater degree of reduction lead to increased activity.
A System Approach to Navy Medical Education and Training. Appendix 18. Radiation Technician.
1974-08-31
attrition was forecast to approximate twenty percent, final sample and sub-sample sizes were adjusted accordingly. Stratified random sampling...
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample sizes. The results are based on computational methodologies that combine coalescence-time scale changes with techniques of integral transformations, using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of a large human mitochondrial DNA dataset. PMID:28170404
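For the standard constant-size neutral coalescent that the above methods generalize, the expected times have simple closed forms; a sketch in coalescent time units (these are textbook identities, not the paper's large-sample or time-varying-size methodology):

```python
def expected_times(n):
    """E[T_k] = 2/(k*(k-1)) : expected time during which exactly k
    lineages remain, for a sample of size n, in coalescent time units."""
    return {k: 2.0 / (k * (k - 1)) for k in range(2, n + 1)}

def expected_tmrca(n):
    """Expected time to the most recent common ancestor:
    sum of E[T_k] = 2*(1 - 1/n), approaching 2 for large samples."""
    return sum(expected_times(n).values())

def expected_tree_length(n):
    """Expected total branch length: sum of k*E[T_k] = 2*sum_{i<n} 1/i,
    which grows only logarithmically with sample size."""
    return sum(k * t for k, t in expected_times(n).items())
```

The logarithmic growth of tree length versus the bounded TMRCA illustrates why very large samples add many short external branches, the regime the paper's computational methods are designed for.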
NASA Technical Reports Server (NTRS)
Banerjee, S. K.
1974-01-01
The direction and magnitude of natural remanent magnetization of five approximately 3-g subsamples of 72275 and 72255, and the high-field saturation magnetization, coercive force, and isothermal remanent magnetization of a 100-mg chip from each of these samples, were studied. Given an understanding of the magnetization processes, the group 1 experiments provide information about the absolute direction of the ancient magnetizing field and a qualitative estimate of its size (paleointensity). The group 2 experiments yield a quantitative estimate of the iron content and a qualitative idea of the grain sizes.
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Philip A. Candela,; Wenlu Zhu,; Alan J. Kaufman,
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO2) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO2 can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the low size ranges of the extracted shale particles. 
Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r2 = 0.78); and 2) both yield and surface area increase with increasing TOC content (r2 = 0.97 and 0.86, respectively). Given that supercritical CO2 is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane
2006-08-01
Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferred sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m^3 of urban outdoor air doped with approximately 1.1 x 10^5 spores resulted in a 0.04 proportion of false negatives. For the same sample volume of urban air without spores, the proportion of false positives was 0.08.
Estimating the quadratic mean diameters of fine woody debris in forests of the United States
Christopher W. Woodall; Vicente J. Monleon
2010-01-01
Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and probability of sampling a unit occupied by the rare species. Efficiency measures the precision of population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species was approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). 
We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
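A Bernstein-type interval of the kind discussed above can be sketched for data known to lie in a bounded range; this version inverts the tail bound with the worst-case variance, which is one simple conservative choice and not necessarily the authors' construction:

```python
from math import log, sqrt

def bernstein_ci(data, lo, hi, alpha=0.05):
    """Conservative CI for the mean of i.i.d. samples known to lie in [lo, hi].

    Inverts Bernstein's tail bound
        P(|mean - mu| >= t) <= 2 * exp(-n*t^2 / (2*var + (2/3)*M*t)),
    using the worst-case variance (hi - lo)^2 / 4 and range M = hi - lo,
    so the stated coverage holds at every sample size, at the price of
    width. Solving the resulting quadratic in t gives the half-width.
    """
    n = len(data)
    mean = sum(data) / n
    m_range = hi - lo
    var_bound = m_range ** 2 / 4.0
    ell = log(2.0 / alpha)
    b_term = (2.0 / 3.0) * m_range * ell
    t = (b_term + sqrt(b_term ** 2 + 8.0 * n * var_bound * ell)) / (2.0 * n)
    return max(lo, mean - t), min(hi, mean + t)
```

As the abstract notes, such guaranteed-coverage intervals tend to be wide; the narrower intervals the authors propose trade some of this worst-case slack for practical usefulness.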
Mapping South San Francisco Bay's seabed diversity for use in wetland restoration planning
Fregoso, Theresa A.; Jaffe, B.; Rathwell, G.; Collins, W.; Rhynas, K.; Tomlin, V.; Sullivan, S.
2006-01-01
Data for an acoustic seabed classification were collected as part of a California Coastal Conservancy funded bathymetric survey of South Bay in early 2005. A QTC VIEW seabed classification system recorded echoes from a single beam 50 kHz echosounder. Approximately 450,000 seabed classification records were generated from an area of about 30 sq. miles. Ten distinct acoustic classes were identified through an unsupervised classification system using principal component and cluster analyses. One hundred and sixty-one grab samples and forty-five benthic community composition samples, collected in the study area shortly before and after the seabed classification survey, further refined the ten classes into groups based on grain size. A preliminary map of surficial grain size of South Bay was developed from the combination of the seabed classification and the grab and benthic samples. The initial seabed classification map, the grain size map, and locations of sediment samples will be displayed along with the methods of acoustic seabed classification.
A field instrument for quantitative determination of beryllium by activation analysis
Vaughn, William W.; Wilson, E.E.; Ohm, J.M.
1960-01-01
A low-cost instrument has been developed for quantitative determination of beryllium in the field by activation analysis. The instrument makes use of the gamma-neutron reaction between gammas emitted by an artificially radioactive source (Sb124) and beryllium as it occurs in nature. The instrument and power source are mounted in a panel-type vehicle. Samples are prepared by hand-crushing the rock to approximately ?-inch mesh size and smaller. Sample volumes are kept constant by means of a standard measuring cup. Instrument calibration, made by using standards of known BeO content, indicates the analyses are reproducible and accurate to within ± 0.25 percent BeO in the range from 1 to 20 percent BeO with a sample counting time of 5 minutes. Sensitivity of the instrument may be increased somewhat by increasing the source size, the sample size, or by enlarging the cross-sectional area of the neutron-sensitive phosphor normal to the neutron flux.
Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.
Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol
2013-02-01
A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small mass (≈ 0.7 g) and 15 large mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., 235U and 238U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to ensure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility.
Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.
2012-01-01
The properties of CaCu3.1Ti4O12.1 [CC3.1TO] ceramics with the addition of Al2O3 nanoparticles, prepared via a solid-state reaction technique, were investigated. The nanoparticle additive was found to inhibit grain growth, with the average grain size decreasing from approximately 7.5 μm for unmodified CC3.1TO to approximately 2.0 μm for the Al2O3-modified samples, while the Knoop hardness was found to improve, with a maximum value of 9.8 GPa for the 1 vol.% Al2O3 sample. A very high dielectric constant (>60,000) with a low loss tangent (approximately 0.09) was observed for the 0.5 vol.% Al2O3 sample at 1 kHz and at room temperature. These data suggest that such nanocomposites have great potential for dielectric applications. PMID:22221316
Bed-sediment grain-size and morphologic data from Suisun, Grizzly, and Honker Bays, CA, 1998-2002
Hampton, Margaret A.; Snyder, Noah P.; Chin, John L.; Allison, Dan W.; Rubin, David M.
2003-01-01
The USGS Place Based Studies Program for San Francisco Bay investigates this sensitive estuarine system to aid in resource management. As part of the inter-disciplinary research program, the USGS collected side-scan sonar data and bed-sediment samples from north San Francisco Bay to characterize bed-sediment texture and investigate temporal trends in sedimentation. The study area is located in central California and consists of Suisun Bay, and Grizzly and Honker Bays, sub-embayments of Suisun Bay. During the study (1998-2002), the USGS collected three side-scan sonar data sets and approximately 300 sediment samples. The side-scan data revealed predominantly fine-grained material on the bayfloor. We also mapped five different bottom types from the data set, categorized as featureless, furrows, sand waves, machine-made, and miscellaneous. We performed detailed grain-size and statistical analyses on the sediment samples. Overall, we found that grain size ranged from clay to fine sand, with the coarsest material in the channels and finer material located in the shallow bays. Grain-size analyses revealed high spatial variability in size distributions in the channel areas. In contrast, the shallow regions exhibited low spatial variability and consistent sediment size over time.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
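The adaptive update described above can be illustrated with a generic affine projection filter in which the P x P normal equations are solved by Gauss-Seidel sweeps instead of an explicit matrix inverse. This is a minimal sketch of the idea, not the authors' hearing-aid implementation: the filter length, projection order, step size, and regularization below are arbitrary assumptions, and the paper's residue-based error and learning-rate control are omitted.

```python
import numpy as np

def gauss_seidel(A, b, iters=1):
    """Approximately solve A z = b with a few Gauss-Seidel sweeps."""
    n = len(b)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = b[i] - A[i, :i] @ z[:i] - A[i, i + 1:] @ z[i + 1:]
            z[i] = s / A[i, i]
    return z

def ap_filter(x, d, L=16, P=4, mu=0.2, delta=1e-3, gs_iters=1):
    """Affine projection adaptive filter: the P x P normal equations
    (X^T X + delta I) z = e are solved approximately by Gauss-Seidel."""
    w = np.zeros(L)
    y = np.zeros(len(x))
    for k in range(L + P, len(x)):
        # Data matrix: the P most recent length-L input vectors as columns.
        X = np.column_stack([x[k - p - L + 1:k - p + 1][::-1] for p in range(P)])
        e = d[k - P + 1:k + 1][::-1] - X.T @ w        # error vector
        z = gauss_seidel(X.T @ X + delta * np.eye(P), e, iters=gs_iters)
        w += mu * (X @ z)                             # weight update
        y[k] = w @ x[k - L + 1:k + 1][::-1]           # filter output
    return w, y
```

On a simple system-identification test the weights approach the true response even with a single Gauss-Seidel sweep per update.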
Molten salt synthesis of nanocrystalline phase of high dielectric constant material CaCu3Ti4O12.
Prakash, B Shri; Varma, K B R
2008-11-01
Nanocrystalline powders of the giant dielectric constant material CaCu3Ti4O12 (CCTO) have been prepared successfully by molten salt synthesis (MSS) using KCl at 750 degrees C/10 h, which is significantly lower than the calcination temperature (approximately 1000 degrees C) employed to obtain phase-pure CCTO in the conventional solid-state reaction route. The water-washed molten-salt-synthesized powder, characterized by X-ray powder diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM), was confirmed to be phase-pure CCTO comprising approximately 150 nm sized crystallites of nearly spherical shape. The decrease in the formation temperature/duration of CCTO in the MSS method was attributed to an increase in the diffusion rate or a decrease in the diffusion length of reacting ions in the molten salt medium. As a consequence of liquid-phase sintering, pellets of as-synthesized KCl-containing CCTO powder exhibited higher sinterability and grain size than KCl-free CCTO samples prepared by both the MSS method and the conventional solid-state reaction route. The grain size and the dielectric constant of KCl-containing CCTO ceramics increased with increasing sintering temperature (900 degrees C-1050 degrees C). Indeed, the dielectric constants of these ceramics were higher than those of KCl-free CCTO samples prepared by the MSS method and those obtained via the solid-state reaction route and sintered at the same temperature. The internal barrier layer capacitance (IBLC) model was invoked to correlate the observed dielectric constant with the grain size in these samples.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest-ascent (deflected-gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
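The sample average approximation workflow (solve several independently sampled problems, then use the spread of their optimal values to bound the optimality of a candidate) can be illustrated on a toy stochastic program. The quadratic objective and all parameter values below are assumptions for illustration only, not the paper's job-shop scheduling formulation.

```python
import numpy as np

def saa_candidate(xi):
    """Minimizer of the sample-average objective (1/N) * sum (x - xi_i)^2,
    which for this toy quadratic is simply the sample mean."""
    return xi.mean()

def saa_gap_estimate(rng, n_scenarios=200, n_reps=20):
    """Solve independent sampled problems; the average sampled optimum is a
    statistical lower bound (in expectation) on the true optimal value.

    Toy stochastic program: min_x E[(x - xi)^2] with xi ~ N(1, 1),
    so x* = 1 and the true optimal value is Var(xi) = 1.
    """
    values, candidates = [], []
    for _ in range(n_reps):
        xi = rng.normal(1.0, 1.0, n_scenarios)
        x_hat = saa_candidate(xi)
        candidates.append(x_hat)
        values.append(np.mean((x_hat - xi) ** 2))  # sampled optimal value
    lower = np.mean(values)  # lower-bound estimate for the true optimum
    return np.mean(candidates), lower
```

The gap between a candidate's out-of-sample cost and this lower bound estimates how far the candidate is from optimal, which is the role the statistical bounds play in the paper.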
Christensen, Janne Ørskov; Schultz, Kirsten; Mollgaard, Birgitte; Kristensen, Henning Gjelstrup; Mullertz, Anette
2004-11-01
The partitioning of poorly soluble drugs into an aqueous micellar phase was exploited using an in vitro lipid digestion model, simulating the events taking place during digestion of acylglycerols in the duodenum. The aqueous micellar phase was isolated after ultracentrifugation of samples obtained at different degrees of triacylglycerol hydrolysis. Flupentixol, 1'-[4-[1-(4-fluorophenyl)-1-H-indol-3-yl]-1-butyl]spiro[iso-benzofuran-1(3H), 4' piperidine] (LU 28-179), and probucol were studied. The effect of the alkyl chain length of the triacylglycerol was studied using a medium-chain triacylglycerol (MCT) and a long-chain triacylglycerol (LCT), respectively. In general, an oil solution was used as the lipid source in the model. Samples were analysed with regard to micellar size, lipid composition, and drug concentration. During lipolysis, the content of lipolytic products in the aqueous micellar phase increased. The micellar size (R(H) approximately 3 nm) only increased when long-chain lipolytic products were incorporated in the mixed micelles (R(H) approximately 7.8 nm). Flupentixol was quickly transferred to the mixed micelles due to high solubility in this phase (100% released). A tendency towards higher solubilisation of LU 28-179 when administered in the LCT (approximately 24% released) than in the MCT (approximately 15% released) at 70% hydrolysis was observed, along with a lag phase. There was no difference in the solubilisation of probucol using MCT or LCT (approximately 20% released). Differences in the physicochemical properties of the drugs resulted in differences in their distribution between the phases arising during lipolysis.
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study makes function approximations to sampling data and documents the sampling information using artificial neural network algorithms, based on invertebrate data sampled in an irrigated rice field. Three types of sampling data, i.e., the curve of species richness vs. sample size, the rarefaction curve, and the curve of mean abundance of newly sampled species vs. sample size, are fitted and documented using a BP (backpropagation) network and an RBF (radial basis function) network. For comparison, the Arrhenius model, the rarefaction model, and a power function are also tested for their ability to fit these data. The results show that the BP and RBF networks fit the data better than these models, with smaller errors. BP and RBF networks can fit non-linear functions (sampling data) to a specified accuracy and do not require mathematical assumptions. In addition to interpolation, the BP network can be used to extrapolate the functions, and the asymptote of the sampling data can be drawn. The BP network takes longer to train and its results are less stable compared to the RBF network. The RBF network requires more neurons to fit functions and generally may not be used to extrapolate the functions. The mathematical function for sampling data can be fitted to the desired accuracy using artificial neural network algorithms by adjusting the accuracy and maximum iterations. The total number of functional species of invertebrates in the tropical irrigated rice field is extrapolated as 140 to 149 using the trained BP network, similar to the observed richness.
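As a rough illustration of the curve-fitting task above, the sketch below fits an RBF network to a synthetic species-richness accumulation curve. The centers, basis width, and the data themselves are assumptions for illustration, and the output weights are obtained by linear least squares rather than whatever training scheme the study used.

```python
import numpy as np

def rbf_fit(x, y, centers, width):
    """Fit an RBF network f(x) = sum_j w_j * exp(-((x - c_j)/width)^2)
    by solving for the output weights with linear least squares."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # Return a callable model for new sample sizes.
    return lambda xq: np.exp(
        -((np.asarray(xq)[:, None] - centers[None, :]) / width) ** 2) @ w

# Hypothetical species richness vs. sample size: a saturating
# accumulation curve (synthetic values, for illustration only).
n = np.arange(1, 51, dtype=float)
richness = 150 * n / (n + 20)
model = rbf_fit(n, richness, centers=np.linspace(1, 50, 10), width=10.0)
```

With 10 Gaussian centers the fitted curve tracks the smooth accumulation data closely, which mirrors the abstract's point that RBF networks fit such non-linear sampling curves without model assumptions (while extrapolation beyond the sampled range remains unreliable).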
Approximate number word knowledge before the cardinal principle.
Gunderson, Elizabeth A; Spaepen, Elizabet; Levine, Susan C
2015-02-01
Approximate number word knowledge-understanding the relation between the count words and the approximate magnitudes of sets-is a critical piece of knowledge that predicts later math achievement. However, researchers disagree about when children first show evidence of approximate number word knowledge-before, or only after, they have learned the cardinal principle. In two studies, children who had not yet learned the cardinal principle (subset-knowers) produced sets in response to number words (verbal comprehension task) and produced number words in response to set sizes (verbal production task). As evidence of approximate number word knowledge, we examined whether children's numerical responses increased with increasing numerosity of the stimulus. In Study 1, subset-knowers (ages 3.0-4.2 years) showed approximate number word knowledge above their knower-level on both tasks, but this effect did not extend to numbers above 4. In Study 2, we collected data from a broader age range of subset-knowers (ages 3.1-5.6 years). In this sample, children showed approximate number word knowledge on the verbal production task even when only examining set sizes above 4. Across studies, children's age predicted approximate number word knowledge (above 4) on the verbal production task when controlling for their knower-level, study (1 or 2), and parents' education, none of which predicted approximation ability. Thus, children can develop approximate knowledge of number words up to 10 before learning the cardinal principle. Furthermore, approximate number word knowledge increases with age and might not be closely related to the development of exact number word knowledge. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Marty, Adam J.
The purpose of this research is to demonstrate the ability to generate and characterize a nanometer sized aerosol using solutions, suspensions, and a bulk nanopowder, and to research the viability of using an acoustic dry aerosol generator/elutriator (ADAGE) to aerosolize a bulk nanopowder into a nanometer sized aerosol. The research compares the results from a portable scanning mobility particle sizer (SMPS) to the more traditional method of counting and sizing particles on a filter sample using scanning electron microscopy (SEM). Sodium chloride aerosol was used for the comparisons. The sputter coating thickness, a conductive coating necessary for SEM, was measured on different sizes of polystyrene latex spheres (PSLS). Aluminum oxide powder was aerosolized using an ADAGE and several different support membranes and sound frequency combinations were explored. A portable SMPS was used to determine the size distributions of the generated aerosols. Polycarbonate membrane (PCM) filter samples were collected for subsequent SEM analysis. The particle size distributions were determined from photographs of the membrane filters. SMPS data and membrane samples were collected simultaneously. The sputter coating thicknesses on four different sizes of PSLS, range 57 nanometers (nm) to 220 nm, were measured using transmission electron microscopy and the results from the SEM and SMPS were compared after accounting for the sputter coating thickness. Aluminum oxide nanopowder (20 nm) was aerosolized using a modified ADAGE technique. Four different support membranes and four different sound frequencies were tested with the ADAGE. The aerosol was collected onto PCM filters and the samples were examined using SEM. The results indicate that the SMPS and SEM distributions were log-normally distributed with a median diameter of approximately 42 nm and 55 nm, respectively, and geometric standard deviations (GSD) of approximately 1.6 and 1.7, respectively. 
The two methods yielded similar distributional trends, with a difference in median diameters of approximately 11-15 nm. The sputter coating thickness on the different sizes of PSLSs ranged from 15.4 to 17.4 nm. The aerosols generated using the modified ADAGE were low in concentration. The particles remained as agglomerates and varied widely in size. An aluminum foil support membrane coupled with a high sound frequency generated the smallest agglomerates. A well-characterized sodium chloride aerosol was generated and was reproducible. The distributions determined using SEM were slightly larger than those obtained from the SMPS; however, the distributions had approximately the same shape, as reflected in their GSDs. This suggests that a portable SMPS is a suitable method for characterizing a nanoaerosol. The sizing techniques could be compared after correcting for the effects of the sputter coating necessary for SEM examination. It was determined that the sputter coating thickness on nano-sized particles and particles up to approximately 220 nm can be expected to be the same, and that the sputter coating can add considerably to the size of a nanoparticle. This has important implications for worker health where nanoaerosol exposure is a concern. The sputter coating must be considered when SEM is used to describe a nanoaerosol exposure. The performance of the modified ADAGE was less than expected. The low aerosol output from the ADAGE prevented a more detailed analysis, which was limited to only a qualitative comparison. Some combinations of support membranes and sound frequencies performed better than others, particularly conductive support membranes and high sound frequencies. In conclusion, a portable SMPS yielded results similar to those obtained by SEM. The sputter coating was the same thickness on the PSLSs studied. The sputter coating thickness must be considered when characterizing nanoparticles using SEM.
Finally, a conductive support membrane and higher frequencies appeared to generate the smallest agglomerates using the ADAGE technique.
Glass frit nebulizer for atomic spectrometry
Layman, L.R.
1982-01-01
The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how this new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 μm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.
A Poisson process approximation for generalized K-S confidence regions
NASA Technical Reports Server (NTRS)
Arsham, H.; Miller, D. R.
1982-01-01
One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
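For contrast with the tail-adaptive generalized Kolmogorov-Smirnov bands studied above, a classical fixed-width one-sided band is easy to compute from the empirical CDF via the one-sided Dvoretzky-Kiefer-Wolfowitz inequality. This is the standard textbook construction, not the paper's Poisson-process approximation; it illustrates the baseline that the generalized bands improve on in the tails.

```python
import numpy as np

def one_sided_band(sample, alpha=0.05):
    """Fixed-width one-sided confidence band for a continuous CDF:
    F(t) <= Fn(t) + eps for all t, with confidence >= 1 - alpha, where
    eps = sqrt(ln(1/alpha) / (2 n)) by the one-sided DKW inequality."""
    n = len(sample)
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    xs = np.sort(sample)
    ecdf = np.arange(1, n + 1) / n          # empirical CDF at the order stats
    return xs, np.minimum(ecdf + eps, 1.0)  # upper envelope, capped at 1
```

Note the band width eps is constant in t; the generalized K-S regions of the abstract instead tighten in one tail, which is what makes their critical values harder to compute and motivates the Poisson approximation.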
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
SGAS 143845.1 + 145407: A Big, Cool Starburst at Redshift 0.816
NASA Technical Reports Server (NTRS)
Gladders, Michael D.; Rigby, Jane R.; Sharon, Keren; Wuyts, Eva; Abramson, Louis E.; Dahle, Hakon; Persson, S. E.; Monson, Andrew J.; Kelson, Daniel D.; Benford, Dominic J.;
2012-01-01
We present the discovery and a detailed multi-wavelength study of a strongly-lensed luminous infrared galaxy at z=0.816. Unlike most known lensed galaxies discovered at optical or near-infrared wavelengths, this lensed source is red, which the data presented here demonstrate is due to ongoing dusty star formation. The overall lensing magnification (a factor of 17) facilitates observations from the blue optical through to 500 micrometers, fully capturing both the stellar photospheric emission as well as the reprocessed thermal dust emission. We also present optical and near-IR spectroscopy. These extensive data show that this lensed galaxy is in many ways typical of IR-detected sources at z ≈ 1, with both a total luminosity and size in accordance with other (albeit much less detailed) measurements in samples of galaxies observed in deep fields with the Spitzer telescope. Its far-infrared spectral energy distribution is well-fit by local templates that are an order of magnitude less luminous than the lensed galaxy; local templates of comparable luminosity are too hot to fit. Its size (D approximately 7 kpc) is much larger than local luminous infrared galaxies, but in line with sizes observed for such galaxies at z ≈ 1. The star formation appears uniform across this spatial scale. Thus, this lensed galaxy, which appears representative of vigorously star-forming z ≈ 1 galaxies, is forming stars in a fundamentally different mode than is seen at z ≈ 0.
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
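The "standard frequentist calculations" used above as the benchmark are the usual normal-approximation sample size formula for a two-sided comparison of two means; a minimal sketch:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of
    means (normal approximation):
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
    where delta is the difference in means and sigma the common SD."""
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)
```

For a medium standardized effect (delta/sigma = 0.5) at 80% power this gives 63 per group; the 30% figure claimed for "magnitude-based inference" would correspond to running only about 19 per group, which is the reduction the review argues is not justifiable.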
Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys
Brookmeyer, Ron
2015-01-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040
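The estimator and design question described above can be sketched as follows. This is a simplified delta-method illustration, not the paper's exact formulas: the snapshot estimator, the coefficient-of-variation target, and all parameter values are assumptions, and complications such as imperfect recency tests are ignored.

```python
import math

def incidence_estimate(n_early, n_uninfected, mean_duration_yrs):
    """Simplified snapshot estimator: incidence ~ (# in early stage) /
    (# uninfected * mean duration of the early stage)."""
    return n_early / (n_uninfected * mean_duration_yrs)

def required_survey_size(incidence, prevalence, mean_duration_yrs,
                         cv_duration=0.0, target_cv=0.25):
    """Total survey size so the incidence estimate reaches a target
    coefficient of variation, propagating uncertainty in the early-stage
    duration (a rough delta-method sketch: CV^2 ~ 1/E[early] +
    1/E[uninfected] + CV(duration)^2)."""
    if cv_duration >= target_cv:
        raise ValueError("duration uncertainty alone exceeds the target CV")

    def cv(total_n):
        n_neg = total_n * (1 - prevalence)
        n_early = n_neg * incidence * mean_duration_yrs
        return math.sqrt(1 / n_early + 1 / n_neg + cv_duration ** 2)

    total_n = 100.0
    while cv(total_n) > target_cv:   # grow the survey until precise enough
        total_n *= 1.05
    return math.ceil(total_n)
```

The guard clause reflects the abstract's central point: if uncertainty in the early-stage duration is ignored (or is too large), no survey size achieves the nominal precision.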
Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W
2009-03-01
This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size (approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.
Samples in applied psychology: over a decade of research in review.
Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S
2011-09-01
This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-09-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
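The "paired samples t-test" comparison mentioned above can be sketched on matched element concentrations measured in a fragment and in a reference basalt suite. The oxide values below are hypothetical, chosen only to show the mechanics of the statistic.

```python
import numpy as np

def paired_t(a, b):
    """Paired-samples t statistic and degrees of freedom for matched
    measurements (e.g. the same set of element concentrations measured
    in a fine-grained fragment and in a candidate reference basalt)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical oxide concentrations (wt%) for a fragment vs. a reference suite.
fragment  = [45.1, 11.2, 9.8, 20.3, 2.9, 10.1]
reference = [44.8, 11.5, 9.6, 20.1, 3.1, 10.4]
t_stat, dof = paired_t(fragment, reference)
```

A |t| well below the critical value for the given degrees of freedom (here |t| ≈ 0.15 with 5 d.f.) would be consistent with the fragment belonging to that suite, which is how such a comparison supports classifying fine-grained samples.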
Dempah, Kassibla Elodie; Lubach, Joseph W; Munson, Eric J
2017-03-06
A variety of particle sizes of a model compound, dicumarol, were prepared and characterized in order to investigate the correlation between particle size and solid-state NMR (SSNMR) proton spin-lattice relaxation (1H T1) times. Conventional laser diffraction and scanning electron microscopy were used as particle size measurement techniques and showed crystalline dicumarol samples with sizes ranging from tens of micrometers to a few micrometers. Dicumarol samples were prepared using both bottom-up and top-down particle size control approaches, via antisolvent microprecipitation and cryogrinding. It was observed that smaller particles of dicumarol generally had shorter 1H T1 times than larger ones. Additionally, cryomilled particles had the shortest 1H T1 times encountered (8 s). SSNMR 1H T1 times of all the samples were measured and showed as-received dicumarol to have a T1 of 1500 s, whereas the 1H T1 times of the precipitated samples ranged from 20 to 80 s, with no apparent change in the physical form of dicumarol. Physical mixtures of different sized particles were also analyzed to determine the effect of sample inhomogeneity on 1H T1 values. Mixtures of cryoground and as-received dicumarol were clearly inhomogeneous as they did not fit well to a one-component relaxation model, but could be fit much better to a two-component model with both fast- and slow-relaxing regimes. Results indicate that samples of crystalline dicumarol containing two significantly different particle size populations could be deconvoluted solely based on their differences in 1H T1 times. Relative populations of each particle size regime could also be approximated using two-component fitting models. Using NMR theory on spin diffusion as a reference, and taking into account the presence of crystal defects, a model for the correlation between the particle size of dicumarol and its 1H T1 time was proposed.
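When the two T1 values are treated as known (here the 8 s and 1500 s extremes reported above), the relative populations in a two-component mixture follow from linear least squares on the recovery curve. The sketch below uses synthetic noiseless saturation-recovery data with an assumed 30/70 split; it illustrates the two-component deconvolution idea, not the authors' actual fitting procedure.

```python
import numpy as np

def component_populations(t, m, t1_fast, t1_slow):
    """Relative populations (a, b) of a two-component saturation-recovery
    curve m(t) = a*(1 - exp(-t/T1_fast)) + b*(1 - exp(-t/T1_slow)),
    solved by linear least squares when both T1 values are known."""
    Phi = np.column_stack([1 - np.exp(-t / t1_fast),
                           1 - np.exp(-t / t1_slow)])
    (a, b), *_ = np.linalg.lstsq(Phi, m, rcond=None)
    return a / (a + b), b / (a + b)

# Synthetic mixture: 30% fast-relaxing material (T1 = 8 s, the cryomilled
# extreme) and 70% slow-relaxing material (T1 = 1500 s, as-received).
t = np.geomspace(1, 6000, 40)
m = 0.3 * (1 - np.exp(-t / 8.0)) + 0.7 * (1 - np.exp(-t / 1500.0))
frac_fast, frac_slow = component_populations(t, m, 8.0, 1500.0)
```

Because the two basis curves are very different over this time range, the amplitudes, and hence the relative populations of the two particle size regimes, are recovered essentially exactly from noiseless data.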
Dunbar, R I M; MacCarron, Padraig; Robertson, Cole
2018-03-01
Group-living offers both benefits (protection against predators, access to resources) and costs (increased ecological competition, the impact of group size on fertility). Here, we use cluster analysis to detect natural patternings in a comprehensive sample of baboon groups, and identify a geometric sequence with peaks at approximately 20, 40, 80 and 160. We suggest (i) that these form a set of demographic oscillators that set habitat-specific limits to group size and (ii) that the oscillator arises from a trade-off between female fertility and predation risk. © 2018 The Authors.
Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu
2016-12-21
A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
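A minimal sketch of the ordinary-least-squares idea in the title, assuming a toy linear "mechanism" in place of a real chemical-kinetic model: even with a deliberately small random sample, standardized regression coefficients recover the sensitivity ordering of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mechanism": output depends strongly on x0, weakly on x2, not on x1.
def model(x):
    return 3.0 * x[:, 0] + 0.0 * x[:, 1] + 0.5 * x[:, 2]

n_samples, n_inputs = 50, 3          # deliberately small sample
X = rng.uniform(-1, 1, size=(n_samples, n_inputs))
y = model(X)

# Ordinary least squares with an intercept column; the standardized
# coefficient magnitudes rank the inputs by sensitivity.
A = np.column_stack([np.ones(n_samples), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sensitivity = np.abs(coef[1:]) * X.std(axis=0) / y.std()
order = np.argsort(sensitivity)[::-1]   # most sensitive input first
print(order)
```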
Passive vs. Parachute System Architecture for Robotic Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; Henning, Allen B.; Samareh, Jamshid A.
2016-01-01
The Multi-Mission Earth Entry Vehicle (MMEEV) is a flexible vehicle concept based on the Mars Sample Return (MSR) EEV design which can be used in the preliminary sample return mission study phase to parametrically investigate any trade space of interest to determine the best entry vehicle design approach for that particular mission concept. In addition to the trade space dimensions often considered (e.g. entry conditions, payload size and mass, vehicle size, etc.), the MMEEV trade space considers whether it might be more beneficial for the vehicle to utilize a parachute system during descent/landing or to be fully passive (i.e. not use a parachute). In order to evaluate this trade space dimension, a simplified parachute system model has been developed based on inputs such as vehicle size/mass, payload size/mass and landing requirements. This model works in conjunction with analytical approximations of a mission trade space dataset provided by the MMEEV System Analysis for Planetary EDL (M-SAPE) tool to help quantify the differences between an active (with parachute) and a passive (no parachute) vehicle concept.
Automated sample exchange and tracking system for neutron research at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Rix, J. E.; Weber, J. K. R.; Santodonato, L. J.; Hill, B.; Walker, L. M.; McPherson, R.; Wenzel, J.; Hammons, S. E.; Hodges, J.; Rennich, M.; Volin, K. J.
2007-01-01
An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW™ program that can be operated locally or over a network. The samples are contained in vanadium cans of 6-10 mm in diameter, equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of ~25 K; after several minutes, it was moved onto a "landing pad" at ~10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and performing the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to ~10 K was approximately 7 min including precooling time. The cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low-pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to ~350 K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to ~10 K after it has been heated to ~240 K was approximately 20 min.
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The differences in response probabilities exhibit similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
NASA Technical Reports Server (NTRS)
Davis, S. H.; Kissinger, L. D.
1978-01-01
The effect of humidity on the CO2 removal efficiency of small beds of anhydrous LiOH has been studied. Experimental data taken in this small bed system clearly show that there is an optimum humidity for beds loaded with LiOH from a single lot. The CO2 efficiency falls rapidly under dry conditions, but this behavior is approximately the same in all samples. The behavior of the bed under wet conditions is quite dependent on material size distribution. The presence of large particles in a sample can lead to rapid fall off in the CO2 efficiency as the humidity increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
2017-08-24
A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742-753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
Laboratory theory and methods for sediment analysis
Guy, Harold P.
1969-01-01
The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve-pipet, the VA tube-pipet, or the BW tube-VA tube, depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help ensure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.
Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.
1988-01-01
If observed oil and gas field size distributions are obtained by random samplings, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions changes with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
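The size-biased discovery effect described above can be illustrated by successive sampling, in which each discovery is drawn with probability proportional to field size. The lognormal parent population below is purely illustrative, not the offshore Texas data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Parent population: lognormal field sizes (illustrative parameters).
sizes = rng.lognormal(mean=3.0, sigma=1.5, size=2000)

# Successive sampling: each discovery picks a remaining field with
# probability proportional to its size, mimicking size-biased exploration.
order = []
remaining = np.arange(sizes.size)
for _ in range(200):
    p = sizes[remaining] / sizes[remaining].sum()
    pick = rng.choice(remaining.size, p=p)
    order.append(remaining[pick])
    remaining = np.delete(remaining, pick)

discovered = sizes[order]
# Early discoveries are much larger on average than the parent mean, so the
# observed size distribution misrepresents the parent distribution.
print(discovered[:50].mean() > sizes.mean())
```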
Effect of particle size of Martian dust on the degradation of photovoltaic cell performance
NASA Technical Reports Server (NTRS)
Gaier, James R.; Perez-Davis, Marla E.
1991-01-01
Glass coverglass and SiO2 covered and uncovered silicon photovoltaic (PV) cells were subjected to conditions simulating a Mars dust storm, using the Martian Surface Wind Tunnel, to assess the effect of particle size on the performance of PV cells in the Martian environment. The dust used was an artificial mineral of the approximate elemental composition of Martian soil, which was sorted into four different size ranges. Samples were tested both initially clean and initially dusted. The samples were exposed to clear and dust laden winds, wind velocities varying from 23 to 116 m/s, and attack angles from 0 to 90 deg. It was found that transmittance through the coverglass approximates the power produced by a dusty PV cell. Occultation by the dust was found to dominate the performance degradation for wind velocities below 50 m/s, whereas abrasion dominates the degradation at wind velocities above 85 m/s. Occultation is most severe at 0 deg (parallel to the wind), is less pronounced from 22.5 to 67.5 deg, and is somewhat larger at 90 deg (perpendicular to the wind). Abrasion is negligible at 0 deg, and increases to a maximum at 90 deg. Occultation is more of a problem with small particles, whereas large particles (unless they are agglomerates) cause more abrasion.
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
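The low-rank idea can be sketched with a Nyström approximation of an RBF kernel matrix, one standard way to scale kernel methods and extend them to out-of-sample points. This is an illustrative stand-in under assumed data and kernel parameters, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between row-sample sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = rng.normal(size=(300, 2))
landmarks = X[rng.choice(300, size=50, replace=False)]

# Nystrom: K ~ C W^+ C.T, built from only a 300x50 and a 50x50 kernel block
# instead of the full 300x300 matrix.
C = rbf(X, landmarks)
W = rbf(landmarks, landmarks)
K_approx = C @ np.linalg.pinv(W) @ C.T

K_exact = rbf(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(K_approx.shape, round(err, 4))
```

Out-of-sample points are handled the same way: compute their kernel values against the landmarks only and reuse `pinv(W)`, which is what makes the out-of-sample generalization natural.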
Reverse-transformation austenite structure control with micro/nanometer size
NASA Astrophysics Data System (ADS)
Wu, Hui-bin; Niu, Gang; Wu, Feng-juan; Tang, Di
2017-05-01
To control the reverse-transformation austenite structure through manipulation of the micro/nanometer grain structure, the influences of cold deformation and annealing parameters on the microstructure evolution and mechanical properties of 316L austenitic stainless steel were investigated. The samples were first cold-rolled, and then samples deformed to different extents were annealed at different temperatures. The microstructure evolutions were analyzed by optical microscopy, scanning electron microscopy (SEM), magnetic measurements, and X-ray diffraction (XRD); the mechanical properties were also determined by tensile tests. The results showed that the fraction of strain-induced martensite was approximately 72% in the 90% cold-rolled steel. The micro/nanometric microstructure was obtained after reversion annealing at 820-870°C for 60 s. Nearly 100% reversed austenite was obtained in samples annealed at 850°C, where grains with a diameter ≤ 500 nm accounted for 30% and those with a diameter > 0.5 μm accounted for 70%. The micro/nanometer-grain steel exhibited not only a high strength level (approximately 959 MPa) but also a desirable elongation of approximately 45%.
Pedersen, S N; Lindholst, C
1999-12-09
Extraction methods were developed for quantification of the xenoestrogens 4-tert.-octylphenol (tOP) and bisphenol A (BPA) in water and in liver and muscle tissue from the rainbow trout (Oncorhynchus mykiss). The extraction of tOP and BPA from tissue samples was carried out using microwave-assisted solvent extraction (MASE) followed by solid-phase extraction (SPE). Water samples were extracted using only SPE. For the quantification of tOP and BPA, liquid chromatography mass spectrometry (LC-MS) equipped with an atmospheric pressure chemical ionisation interface (APCI) was applied. The combined methods for tissue extraction allow the use of small sample amounts of liver or muscle (typically 1 g), low volumes of solvent (20 ml), and short extraction times (25 min). Limits of quantification of tOP in tissue samples were found to be approximately 10 ng/g in muscle and 50 ng/g in liver (both based on 1 g of fresh tissue). The corresponding values for BPA were approximately 50 ng/g in both muscle and liver tissue. In water, the limit of quantification for tOP and BPA was approximately 0.1 microg/l (based on 100 ml sample size).
On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change
ERIC Educational Resources Information Center
Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von
2006-01-01
We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…
76 FR 2442 - Reports, Forms, and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-13
... interviews over a period of approximately 26 months. The survey will ask questions about drinking behavior... wave of telephone interviews with residents in 3 program sites and 2 comparison sites not carrying out... will be 1,200 while sample size for the comparisons sites will be 500, totaling 23,000 interviews...
NASA Astrophysics Data System (ADS)
Vasoya, Manish; Unni, Aparna Beena; Leblond, Jean-Baptiste; Lazarus, Veronique; Ponson, Laurent
2016-04-01
Crack pinning by heterogeneities is a central toughening mechanism in the failure of brittle materials. So far, most analytical explorations of the crack front deformation arising from spatial variations of fracture properties have been restricted to weak toughness contrasts using first order approximation and to defects of small dimensions with respect to the sample size. In this work, we investigate the non-linear effects arising from larger toughness contrasts by extending the approximation to the second order, while taking into account the finite sample thickness. Our calculations predict the evolution of a planar crack lying on the mid-plane of a plate as a function of material parameters and loading conditions, especially in the case of a single infinitely elongated obstacle. Peeling experiments are presented which validate the approach and evidence that the second order term broadens its range of validity in terms of toughness contrast values. The work highlights the non-linear response of the crack front to strong defects and the central role played by the thickness of the specimen on the pinning process.
Prediction of Active-Region CME Productivity from Magnetograms
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Gary, G. A.
2004-01-01
We report results of an expanded evaluation of whole-active-region magnetic measures as predictors of active-region coronal mass ejection (CME) productivity. Previously, in a sample of 17 vector magnetograms of 12 bipolar active regions observed by the Marshall Space Flight Center (MSFC) vector magnetograph, from each magnetogram we extracted a measure of the size of the active region (the active region's total magnetic flux phi) and four measures of the nonpotentiality of the active region: the strong-shear length L(sub SS), the strong-gradient length L(sub SG), the net vertical electric current I(sub N), and the net-current magnetic twist parameter alpha (sub IN). This sample size allowed us to show that each of the four nonpotentiality measures was statistically significantly correlated with active-region CME productivity in time windows of a few days centered on the day of the magnetogram. We have now added a fifth measure of active-region nonpotentiality (the best-constant-alpha magnetic twist parameter alpha (sub BC)), and have expanded the sample to 36 MSFC vector magnetograms of 31 bipolar active regions. This larger sample allows us to demonstrate statistically significant correlations of each of the five nonpotentiality measures with future CME productivity, in time windows of a few days starting from the day of the magnetogram. The two magnetic twist parameters (alpha (sub IN) and alpha (sub BC)) are normalized measures of an active region's nonpotentiality in that they do not depend directly on the size of the active region, while the other three nonpotentiality measures (L(sub SS), L(sub SG), and I(sub N)) are non-normalized measures in that they do depend directly on active-region size. We find (1) Each of the five nonpotentiality measures is statistically significantly correlated (correlation confidence level greater than 95%) with future CME productivity and has a CME prediction success rate of approximately 80%.
(2) None of the nonpotentiality measures is a significantly better CME predictor than the others. (3) The active-region phi shows some correlation with CME productivity, but well below a statistically significant level (correlation confidence level less than approximately 80%; CME prediction success rate less than approximately 65%). (4) In addition to depending on magnetic twist, CME productivity appears to have some direct dependence on active-region size (rather than only an indirect dependence through a correlation of magnetic twist with active-region size), but it will take a still larger sample of active regions (50 or more) to certify this. (5) Of the five nonpotentiality measures, L(sub SG) appears to be the best for operational CME forecasting because it is as good or better a CME predictor than the others and it alone does not require a vector magnetogram; L(sub SG) can be measured from a line-of-sight magnetogram such as from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO).
Jones, Hayley E.; Martin, Richard M.; Lewis, Sarah J.; Higgins, Julian P.T.
2017-01-01
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. PMID:28453179
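The contour construction can be sketched for a correlation effect size under the simple Fisher z approximation: for a given two-sided P value and sample size n, solve for the correlation magnitude that would produce exactly that P. This is an illustrative simplification of the plot's contours, not the authors' implementation:

```python
import math
from scipy import stats

def r_for_p(p_two_sided, n):
    """Correlation magnitude that would yield this two-sided p at sample size n,
    using the Fisher z approximation: atanh(r) * sqrt(n - 3) = Phi^-1(1 - p/2)."""
    z_crit = stats.norm.ppf(1 - p_two_sided / 2)
    z_r = z_crit / math.sqrt(n - 3)
    return math.tanh(z_r)   # invert the Fisher transform

# One contour: the same p = 0.05 implies a large effect in a small study
# and a much smaller effect in a large one.
print(round(r_for_p(0.05, 20), 2), round(r_for_p(0.05, 2000), 2))
```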
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
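The Mh[CHAO] estimator mentioned above can be sketched from per-individual capture frequencies. This is the simple bias-uncorrected Chao form, and the capture counts are invented for illustration:

```python
def chao_mh(capture_counts):
    """Chao's heterogeneity-robust population estimate from per-individual
    capture counts: N_hat = M + f1**2 / (2 * f2), where M is the number of
    distinct individuals seen, f1 those seen exactly once, f2 exactly twice."""
    m = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    if f2 == 0:
        raise ValueError("f2 = 0: use the bias-corrected form instead")
    return m + f1 ** 2 / (2 * f2)

# 60 bears identified: 30 captured once, 20 twice, 10 three times (invented)
counts = [1] * 30 + [2] * 20 + [3] * 10
print(chao_mh(counts))   # 60 + 30**2 / (2 * 20) = 82.5
```

Sub-sampling hair samples shrinks f1 and f2, which is one way to see why the estimator's precision degrades as the proportion of samples analyzed decreases.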
Effect of separate sampling on classification accuracy.
Shahrokh Esfahani, Mohammad; Dougherty, Edward R
2014-01-15
Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this 'separate sampling' scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error owing to a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b.
Thanh Noi, Phan; Kappas, Martin
2017-01-01
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km2 within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
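The comparison design can be sketched with scikit-learn on synthetic data (not the Sentinel-2 imagery): fit RF, kNN, and SVM on increasing training sample sizes and score each on a fixed held-out set. All dataset parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic multi-class "land cover" stand-in for the Sentinel-2 pixels.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

classifiers = {"RF": RandomForestClassifier(random_state=0),
               "kNN": KNeighborsClassifier(),
               "SVM": SVC(kernel="rbf", gamma="scale")}

# Overall accuracy as the training sample grows, per classifier.
for n in (100, 400, 1500):
    scores = {name: clf.fit(X_train[:n], y_train[:n]).score(X_test, y_test)
              for name, clf in classifiers.items()}
    print(n, {k: round(v, 2) for k, v in scores.items()})
```

With real imagery the pattern reported above would be checked the same way: accuracy differences shrink once the per-class training size is large enough.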
Fabrication and properties of (TbxY1-x)3Al5O12 transparent ceramics by hot isostatic pressing
NASA Astrophysics Data System (ADS)
Duan, Pingping; Liu, Peng; Xu, Xiaodong; Wang, Wei; Wan, Zhong; Zhang, Shouyi; Wang, Yinzhen; Zhang, Jian
2017-10-01
(TbxY1-x)3Al5O12 (x = 0.2, 0.5, 0.8) transparent ceramics were synthesized by a solid-state reaction and HIP. All the samples were pre-sintered at 1650 °C for 4 h in a muffle furnace and later HIPed at 1650 °C for 3 h. The (Tb0.2Y0.8)3Al5O12 transparent ceramics exhibited the best microstructure, with an average grain size of approximately 5.22 μm and optical transmittance of over 65% in the region of 500-1600 nm. Additionally, the average grain sizes of all the samples are less than 10 μm. XRD scanning patterns indicate that only the (Tb0.8Y0.2)3Al5O12 samples contain minor secondary phases.
Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud
2018-02-07
With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected, approximal WSL were created with 15% HCl gel, and standardized photographs were taken. The premolars were mounted in plaster models in contact with a neighbouring molar with a Class II/I-II restoration (Sample 1) or an approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster model and photographed. Adobe Photoshop was used to measure the size of the WSL and sealed area, and the degree of match between the areas was determined in Photoshop. Interclass agreement for the WSL, sealed, and matched areas was excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL area (median = 93%) in Sample 1 and 68-100% of the WSL area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL area between groups with and without magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.
Stress dependence of microstructures in experimentally deformed calcite
NASA Astrophysics Data System (ADS)
Platt, John P.; De Bresser, J. H. P.
2017-12-01
Optical measurements of microstructural features in experimentally deformed Carrara marble help define their dependence on stress. These features include dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Lρ), and the maximum scale length for surface-energy-driven grain-boundary migration (Lγ). Taken together with previously published data, Dr defines a paleopiezometer over the stress range 15-291 MPa and the temperature range 500-1000 °C, with a stress exponent of -1.09 (CI -1.27 to -0.95), showing no detectable dependence on temperature. Sg and Dr measured in the same samples are closely similar in size, suggesting that the new grains did not grow significantly after nucleation. Lρ and Lγ measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps define the boundary between a region of dominant strain-energy-driven grain-boundary migration at high stress and a region of dominant surface-energy-driven grain-boundary migration at low stress.
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
Experimental measurement of the plasma conductivity of Z93 and Z93P thermal control paint
NASA Technical Reports Server (NTRS)
Hillard, G. Barry
1993-01-01
Two samples each of Z93 and Z93P thermal control paint were exposed to a simulated space environment in a plasma chamber. The samples were biased through a series of voltages ranging from -200 volts to +300 volts, and electron and ion currents were measured. By comparing the currents to those of pure metal samples of the same size and shape, the conductivity of the samples was calculated. Measured conductivity was dependent on the bias potential in all cases. For Z93P, conductivity was approximately constant over much of the bias range, and we find a value of 0.5 micro-mhos per square meter for both electron and ion current. For Z93, the dependence on bias was much more pronounced, but conductivity can be said to be approximately one order of magnitude larger. In addition to presenting these results, this report documents all of the experimental data as well as the statistical analyses performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as the gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size; the specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased; the minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness.
(3) samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
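As a quick check on the smooth-sphere baseline the abstract compares against, the geometric specific surface area of a sphere of diameter d and density ρ is 6/(ρd). A minimal sketch, assuming a typical glass density of 2500 kg/m³ (a value I am supplying; the study does not report densities):

```python
# Geometric specific surface area of a smooth sphere, the baseline against
# which measured BET areas are compared in the abstract.
# The density below is an assumed typical glass density, not from the study.

def smooth_sphere_ssa(diameter_m, density_kg_m3=2500.0):
    """Specific surface area (m^2/g) of a smooth sphere: 6 / (rho * d)."""
    ssa_m2_per_kg = 6.0 / (density_kg_m3 * diameter_m)
    return ssa_m2_per_kg / 1000.0  # convert m^2/kg -> m^2/g

# A 100-micrometre smooth glass sphere:
ssa = smooth_sphere_ssa(100e-6)
print(f"{ssa:.4f} m^2/g")  # 0.0240 m^2/g
```

For a 100 μm particle the baseline is 0.024 m²/g, so surface areas "two orders of magnitude higher", as in group (1), are a few m²/g, which is the magnitude of the highest values measured.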
“Magnitude-based Inference”: A Statistical Review
Welsh, Alan H.; Knight, Emma J.
2015-01-01
Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
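For context, the standard frequentist baseline that the claimed 30% figure is measured against can be sketched as follows. This is the generic normal-approximation formula for comparing two means, not the spreadsheet's own code:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Standard frequentist per-group sample size for comparing two means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)

# Detecting a half-SD difference at the usual 5%/80% settings:
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

The review's point is that a method claiming to need only ~30% of this n has shifted the error rates being controlled, not improved the statistics.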
Shamey, Renzo; Zubair, Muhammad; Cheema, Hammad
2015-08-01
The aim of this study was twofold: first, to determine the effect of field-of-view size and, second, the effect of illumination conditions on the selection of unique hue samples (UHs: R, Y, G and B) from two rotatable trays, each containing forty highly chromatic Natural Color System (NCS) samples, one tray corresponding to a 1.4° and the other to a 5.7° field of view. UH selections were made by 25 color-normal observers who repeated assessments three times with a gap of at least 24 h between trials. Observers separately assessed UHs under four illumination conditions simulating illuminants D65, A, F2 and F11. An apparent hue shift (statistically significant for UR) was noted for UH selections at the 5.7° field of view compared to those at 1.4°. Observers' overall variability was found to be higher for UH stimuli selections at the larger field of view. Intra-observer variability was approximately 18.7% of inter-observer variability in the selection of samples for both sample sizes. The highest intra-observer variability was under simulated illuminant D65, followed by A, F11, and F2. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yamada, K.; Suzuki, H.; Kitahata, H.; Matsushita, Y.; Nozawa, K.; Komori, F.; Yu, R. S.; Kobayashi, Y.; Ohdaira, T.; Oshima, N.; Suzuki, R.; Takagiwa, Y.; Kimura, K.; Kanazawa, I.
2018-01-01
The size of structural vacancies and the structural vacancy density of 1/1-Al-Re-Si approximant crystals with different Re compositions were evaluated by positron annihilation lifetime and Doppler broadening measurements. Incident positrons were found to be trapped at the monovacancy-size open space surrounded by Al atoms. From a previous analysis using the maximum entropy method and the Rietveld method, such an open space is shown to correspond to the centre of the Al icosahedral clusters, which are located at the vertex and body-centre positions. The structural vacancy density of non-metallic Al73Re17Si10 was larger than that of metallic Al73Re15Si12. The observed difference in structural vacancy density reflects the difference in bonding nature and may explain the difference in the physical properties of the two samples.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
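The construction described for logistic regression can be sketched in code. This is an illustrative implementation of the abstract's recipe (two equal groups whose log-odds differ by the slope times twice the SD of the covariate, with the overall expected event probability held fixed); the function name, the bisection solver, and the choice of the standard two-proportion sample size formula are mine, not the authors':

```python
from math import exp, ceil
from statistics import NormalDist

def expit(z):
    return 1.0 / (1.0 + exp(-z))

def equivalent_two_sample(beta, sd_x, p_bar, alpha=0.05, power=0.80):
    """Approximately equivalent two-sample problem for logistic regression:
    group log-odds are eta -/+ beta*sd_x (difference = beta * 2 * sd_x),
    with the intercept eta chosen so the mean event probability is p_bar.
    Returns (p1, p2, n_per_group) via the standard two-proportion formula."""
    half = beta * sd_x
    lo, hi = -20.0, 20.0           # bisect for eta: mean prob is increasing in eta
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (expit(mid - half) + expit(mid + half)) < p_bar:
            lo = mid
        else:
            hi = mid
    eta = (lo + hi) / 2
    p1, p2 = expit(eta - half), expit(eta + half)
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    n = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return p1, p2, ceil(n)
```

For example, a slope of 0.5 per SD of the covariate with an overall event probability of 0.3 maps to two groups with event probabilities of roughly 0.20 and 0.40.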
Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H
2016-07-30
Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
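The trimmed-sample-size idea (option iii) can be illustrated for a univariate F test. This is a deliberate simplification of the multivariate Hotelling-Lawley setting the authors treat, using Cohen's f² as an assumed effect-size input; it shows only the mechanics of shrinking the planned N by the anticipated missing fraction before computing noncentral-F power:

```python
from scipy.stats import f, ncf

def trimmed_power(f2, n_planned, miss_rate, df1=3, alpha=0.05):
    """Approximate power after reducing the planned sample size by the
    anticipated fraction of missing data (the 'trimmed sample size'
    adjustment), sketched for a univariate F test with effect size
    f2 = Cohen's f^2 and numerator degrees of freedom df1."""
    n = n_planned * (1.0 - miss_rate)   # trimmed sample size
    df2 = n - df1 - 1                   # residual degrees of freedom
    ncp = f2 * n                        # noncentrality parameter
    crit = f.ppf(1 - alpha, df1, df2)
    return 1.0 - ncf.cdf(crit, df1, df2, ncp)

# Anticipating 20% missing data lowers the approximate power:
print(trimmed_power(0.15, 100, 0.0) > trimmed_power(0.15, 100, 0.2))  # True
```

The authors' comparison suggests that adjusting by the expected number of complete cases, rather than this cruder trimming, tracks the simulated power most closely.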
NASA Astrophysics Data System (ADS)
Ding, Chenliang; Wei, Jingsong; Xiao, Mufei
2018-05-01
We herein propose a far-field super-resolution imaging method based on metal thin films and the temperature-dependent electron-phonon collision frequency effect. In the proposed method, neither fluorescence labeling nor any special sample properties are required. The 100 nm lands and 200 nm grooves on Blu-ray disk substrates were clearly resolved and imaged through a laser scanning microscope at a wavelength of 405 nm. The spot size was approximately 0.80 μm, and an imaging resolution of 1/8 of the laser spot size was experimentally obtained. This work can be applied to the far-field super-resolution imaging of samples that have neither fluorescence labeling nor any special properties.
ELIPGRID-PC: A PC program for calculating hot spot probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1994-10-01
ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability-of-hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program.
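For intuition about what ELIPGRID computes, consider the simplest special case: a circular hot spot and a square sampling grid. When the hot-spot radius is at most half the grid spacing, the "catch" disks around the grid nodes do not overlap, and the hit probability for a randomly located hot spot is exactly πr²/G². A sketch of this special case only; the full algorithm also handles elliptical hot spots, larger radii, and other grid geometries:

```python
import math

def hit_probability(radius, grid_spacing):
    """Probability that a randomly located circular hot spot is intersected
    by at least one node of a square sampling grid. A hot spot is hit iff
    its centre lies within `radius` of some node; for radius <= spacing/2
    those disks are disjoint, so P = pi * r^2 / G^2."""
    if radius > grid_spacing / 2:
        raise ValueError("simple formula valid only for radius <= spacing/2")
    return math.pi * radius ** 2 / grid_spacing ** 2

# A hot spot of radius 1 on a grid with spacing 4:
print(hit_probability(1.0, 4.0))  # ~0.196
```

Inverting this relation (choosing G to reach a target hit probability) is the essence of feature (1) above.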
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
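The headline power figure can be reproduced approximately with a normal approximation to the two-sample t test, taking d ≈ 0.41 as an assumed "typical" effect size (my illustrative value, not a number quoted in the abstract) and 52 per group (N = 104):

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison for a
    standardized effect d (normal approximation to the t test)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)   # noncentrality of the test statistic
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# The field-average study: N = 104 total, 52 per group.
print(round(two_sample_power(0.41, 52), 2))  # ~0.55
```

Under these assumptions the typical study has roughly coin-flip power, which is the substance of the N-pact argument.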
Experimental evaluation of the effect of winter feeding on channel catfish growout pond plankton
USDA-ARS?s Scientific Manuscript database
Ten, 0.25 acre ponds at the UAPB Aquaculture Station were sampled weekly from Dec. 7-Feb. 22 (n=90) for phytoplankton and zooplankton. Five of the ponds were randomly assigned to each of two treatments: no feeding and feeding based on recommended rates. Channel catfish sizes and numbers approximated...
Numerical distance effect size is a poor metric of approximate number system acuity.
Chesney, Dana
2018-04-12
Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling size) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
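The 15% criterion implies a simple sample-size rule: since SE/mean = CV/√n, the required number of vessels is (CV/0.15)². A sketch with an assumed coefficient of variation (the abstract does not report the CV of the vessel measurements):

```python
from math import ceil

def vessels_needed(cv, rel_se=0.15):
    """Number of vessels so the standard error of the mean is at most
    rel_se of the mean: cv / sqrt(n) <= rel_se  =>  n >= (cv / rel_se)^2."""
    return ceil((cv / rel_se) ** 2)

# If vessel-to-vessel variability is about 60% of the mean (assumed CV),
# the 15% criterion gives:
print(vessels_needed(0.60))  # 16
```

A CV in this neighbourhood would be consistent with the ~15-vessel cutoff the study reports.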
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
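The powering scheme can be made concrete with the usual normal-approximation formula. Powering at 1.5δ rather than δ cuts the phase II sample size by a factor of 1.5² = 2.25, independent of δ; the δ = 0.4σ below is an arbitrary illustrative value, not one from the paper:

```python
from statistics import NormalDist

def n_per_arm(effect_sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect a standardized
    effect (in units of sigma) in a two-arm comparison."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return 2 * ((za + zb) / effect_sigma) ** 2

delta = 0.4                       # assumed minimum effect of clinical interest
small = n_per_arm(1.5 * delta)    # the recommended small POC trial
conventional = n_per_arm(delta)   # conventional powering at delta itself
print(round(small / conventional, 3))  # 0.444, i.e. 1/1.5^2, independent of delta
```

This halving-plus of the per-trial cost is what lets more hypotheses be screened under a fixed budget, which is the benefit-cost argument of the paper.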
1986-08-01
materials (2.2 w/o and 3.0 w/o MgO). The other two batches (2.8 w/o and 3.1 w/o MgO), of higher purity, were made using E-10 zirconia powder from ... CID) powders. Two methods have been used for the coprecipitation of doped zirconia powders from solutions of chemical precursors. (4) Method I, for ... of powder, approximate sample size 3.2 kg (6.4 kg for zirconia powder); 3. Random selection of sample; 4. Partial drying of sample to reduce caking
Puls, Robert W.; Eychaner, James H.; Powell, Robert M.
1996-01-01
Investigations at Pinal Creek, Arizona, evaluated routine sampling procedures for determining aqueous inorganic geochemistry and assessing contaminant transport by colloidal mobility. Sampling variables included pump type and flow rate, collection under air or nitrogen, and filter pore diameter. During well purging and sample collection, suspended particle size and number as well as dissolved oxygen, temperature, specific conductance, pH, and redox potential were monitored. Laboratory analyses of both unfiltered samples and the filtrates were performed by inductively coupled argon plasma, atomic absorption with graphite furnace, and ion chromatography. Scanning electron microscopy with energy-dispersive X-ray analysis was also used to examine filter particulates. Suspended particle counts consistently required approximately twice as long as the other field-monitored indicators to stabilize. High-flow-rate pumps entrained normally nonmobile particles. Differences in elemental concentrations obtained with different filter pore sizes were generally small, with only two wells showing differences greater than 10 percent. Similar differences (>10%) were observed for some wells when samples were collected under nitrogen rather than in air. Fe2+/Fe3+ ratios for air-collected samples were smaller than for samples collected under a nitrogen atmosphere, reflecting sampling-induced oxidation.
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
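The 1/N^2 scaling for population samples can be illustrated numerically. Both the variance expression var(h²) ≈ 2/(N² var(relatedness)) and the var(relatedness) ≈ 2×10⁻⁵ figure for conventionally unrelated humans are approximations I am supplying as assumed inputs, not values quoted in the abstract:

```python
import math

def h2_standard_error(n, var_relatedness):
    """Approximate sampling SE of marker-based heritability in a population
    sample, using var(h2_hat) ~ 2 / (N^2 * var(relatedness)).
    Illustrates the 1/N^2 scaling; the constant is an approximation."""
    return math.sqrt(2.0 / (n ** 2 * var_relatedness))

# With var(relatedness) ~ 2e-5 for unrelated humans (assumed), this
# reproduces the often-quoted rule of thumb SE ~ 316 / N:
print(round(h2_standard_error(10000, 2e-5), 3))  # 0.032
```

Doubling N quarters the variance here, whereas in a pedigree design it would only halve it, which is why very large N can compensate for the low variation in relatedness among unrelated individuals.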
Stormer, Ame; Tun, Waimar; Guli, Lisa; Harxhi, Arjan; Bodanovskaia, Zinaida; Yakovleva, Anna; Rusakova, Maia; Levina, Olga; Bani, Roland; Rjepaj, Klodian; Bino, Silva
2006-11-01
Injection drug users in Tirana, Albania and St. Petersburg, Russia were recruited into a study assessing HIV-related behaviors and HIV serostatus using Respondent Driven Sampling (RDS), a peer-driven recruitment sampling strategy that results in a probability sample. (Salganik M, Heckathorn DD. Sampling and estimation in hidden populations using respondent-driven sampling. Sociol Method. 2004;34:193-239). This paper presents a comparison of RDS implementation, findings on network and recruitment characteristics, and lessons learned. Initiated with 13 to 15 seeds, approximately 200 IDUs were recruited within 8 weeks. Information resulting from RDS indicates that social network patterns from the two studies differ greatly. Female IDUs in Tirana had smaller network sizes than male IDUs, unlike in St. Petersburg where female IDUs had larger network sizes than male IDUs. Recruitment patterns in each country also differed by demographic categories. Recruitment analyses indicate that IDUs form socially distinct groups by sex in Tirana, whereas there was a greater degree of gender mixing patterns in St. Petersburg. RDS proved to be an effective means of surveying these hard-to-reach populations.
Estimation of the vortex length scale and intensity from two-dimensional samples
NASA Technical Reports Server (NTRS)
Reuss, D. L.; Cheng, W. P.
1992-01-01
A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow, where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
Floating plastic debris in the Central and Western Mediterranean Sea.
Ruiz-Orejón, Luis F; Sardá, Rafael; Ramis-Pujol, Juan
2016-09-01
In two sea voyages throughout the Mediterranean (2011 and 2013) that repeated the historical travels of Archduke Ludwig Salvator of Austria (1847-1915), 71 samples of floating plastic debris were obtained with a Manta trawl. Floating plastic was observed at all the sampled sites, with an average weight concentration of 579.3 g dw km⁻² (maximum value of 9298.2 g dw km⁻²) and an average particle concentration of 147,500 items km⁻² (the maximum concentration was 1,164,403 items km⁻²). The plastic size distribution showed microplastics (<5 mm) in all the samples. The most abundant particles had a surface area of approximately 1 mm² (the mesh size was 333 μm). The general estimate obtained was a total value of 1455 tons dw of floating plastic in the entire Mediterranean region, with various potential spatial accumulation areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Harrison, Sean; Jones, Hayley E; Martin, Richard M; Lewis, Sarah J; Higgins, Julian P T
2017-09-01
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. Copyright © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
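The contour idea can be sketched for the simplest design the plot supports: a two-arm comparison analyzed with a known-variance z test, where each (P value, total N) pair maps back to a standardized mean difference. This is an illustrative reconstruction of the contour calculation, not the authors' code:

```python
from math import sqrt
from statistics import NormalDist

def effect_contour(p_two_sided, n_total):
    """Standardized mean difference d that would yield the given two-sided
    P value at the given total sample size, assuming two equal arms and a
    known-variance z test: SE(d) ~ 2/sqrt(n_total), so d = 2 z / sqrt(n)."""
    z = NormalDist().inv_cdf(1 - p_two_sided / 2)
    return 2 * z / sqrt(n_total)

# The d = f(P, N) curve traced by one albatross-plot contour:
print(round(effect_contour(0.05, 100), 2))  # 0.39
```

Plotting P against N for each study and overlaying these contours lets a reader judge roughly what effect sizes the studies imply, even when no effect sizes were reported.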
Wellek, Stefan
2017-02-28
In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite this equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
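For orientation, the classical nonconditional sample-size approximation for the MWW test is Noether's formula, which the newer U-statistics approach refines. A sketch of that textbook formula (this is the standard approximation, not the formula derived in the paper summarized above; `p_xy` is the assumed relative effect P(X < Y)):

```python
from math import ceil
from statistics import NormalDist

def wmw_total_n(p_xy, alpha=0.05, power=0.80, one_sided=True):
    """Noether's approximate total sample size for the Wilcoxon-Mann-Whitney
    test with equal group sizes: N = (z_alpha + z_beta)^2 / (3 (p_xy - 1/2)^2)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha) if one_sided else nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return ceil((z_a + z_b) ** 2 / (3.0 * (p_xy - 0.5) ** 2))

print(wmw_total_n(0.70))  # total N for a moderate effect, 80% power
```

Smaller assumed effects (p_xy closer to 0.5) drive the required N up quadratically, which is why accurate variance representations matter for tied data.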
Designing Case-Control Studies: Decisions About the Controls
Hodge, Susan E.; Subaran, Ryan L.; Weissman, Myrna M.; Fyer, Abby J.
2014-01-01
The authors quantified, first, the effect of misclassified controls (i.e., individuals who are affected with the disease under study but who are classified as controls) on the ability of a case-control study to detect an association between a disease and a genetic marker, and second, the effect of leaving misclassified controls in the study, as opposed to removing them (thus decreasing sample size). The authors developed an informativeness measure of a study’s ability to identify real differences between cases and controls. They then examined this measure’s behavior when there are no misclassified controls, when there are misclassified controls, and when misclassified controls have been removed from the study. The results show that if, for example, 10% of controls are misclassified, the study’s informativeness is reduced to approximately 81% of what it would have been in a sample with no misclassified controls, whereas if these misclassified controls are removed from the study, the informativeness is only reduced to about 90%, despite the reduced sample size. If 25% are misclassified, those figures become approximately 56% and 75%, respectively. Thus, leaving the misclassified controls in the control sample is worse than removing them altogether. Finally, the authors illustrate how insufficient power is not necessarily circumvented by having an unlimited number of controls. The formulas provided by the authors enable investigators to make rational decisions about removing misclassified controls or leaving them in. PMID:22854929
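The worked figures in the abstract are consistent with a simple scaling: with a fraction m of controls misclassified, informativeness falls to roughly (1-m)² if they are left in and to (1-m) if they are removed. A sketch of that inferred pattern (the scalings are back-fitted to the abstract's numbers, 81% vs. 90% at m = 0.1 and 56% vs. 75% at m = 0.25, not taken from the paper's exact formulas):

```python
def relative_informativeness(m, remove=False):
    """Relative informativeness of a case-control study when a fraction m of
    controls is misclassified: approximately (1 - m) after removing them,
    (1 - m)**2 if they are left in (scalings inferred from the abstract)."""
    return 1.0 - m if remove else (1.0 - m) ** 2

print(round(relative_informativeness(0.10), 2),
      round(relative_informativeness(0.10, remove=True), 2))  # -> 0.81 0.9
```

Since (1-m)² < (1-m) for any m > 0, removal always beats retention under this pattern, matching the authors' conclusion.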
Grain-size-induced weakening of H2O ices I and II and associated anisotropic recrystallization
Stern, L.A.; Durham, W.B.; Kirby, S.H.
1997-01-01
Grain-size-dependent flow mechanisms tend to be favored over dislocation creep at low differential stresses and can potentially influence the rheology of low-stress, low-strain rate environments such as those of planetary interiors. We experimentally investigated the effect of reduced grain size on the solid-state flow of water ice I, a principal component of the asthenospheres of many icy moons of the outer solar system, using techniques new to studies of this deformation regime. We fabricated fully dense ice samples of approximate grain size 2 ± 1 μm by transforming "standard" ice I samples of 250 ± 50 μm grain size to the higher-pressure phase ice II, deforming them in the ice II field, and then rapidly releasing the pressure deep into the ice I stability field. At T ≤ 200 K, slow growth and rapid nucleation of ice I combine to produce a fine grain size. Constant-strain rate deformation tests conducted on these samples show that deformation rates are less stress sensitive than for standard ice and that the fine-grained material is markedly weaker than standard ice, particularly during the transient approach to steady state deformation. Scanning electron microscope examination of the deformed fine-grained ice samples revealed an unusual microstructure dominated by platelike grains that grew normal to the compression direction, with c axes preferentially oriented parallel to compression. In samples tested at T ≥ 220 K the elongation of the grains is so pronounced that the samples appear finely banded, with aspect ratios of grains approaching 50:1. The anisotropic growth of these crystallographically oriented neoblasts likely contributes to progressive work hardening observed during the transient stage of deformation. We have also documented remarkably similar microstructural development and weak mechanical behavior in fine-grained ice samples partially transformed and deformed in the ice II field.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting, when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
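The empirical, distribution-free branch of the method can be sketched as a direct search over candidate thresholds. A minimal sketch covering decision costs only; the sampling-uncertainty term of the paper's cost function, and the function and variable names, are omissions and assumptions of ours:

```python
def empirical_optimal_threshold(nondiseased, diseased, cost_fp=1.0, cost_fn=1.0):
    """Search observed marker values for the threshold minimising total
    expected misclassification cost (values above the threshold are
    classified as diseased)."""
    candidates = sorted(set(nondiseased) | set(diseased))
    best_t, best_cost = None, float("inf")
    for t in candidates:
        fp = sum(x > t for x in nondiseased) / len(nondiseased)   # false positives
        fn = sum(x <= t for x in diseased) / len(diseased)        # false negatives
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Toy marker values: higher values indicate disease
healthy = [0.1, 0.4, 0.5, 0.9, 1.2]
ill = [1.1, 1.6, 1.8, 2.2, 2.5]
t, cost = empirical_optimal_threshold(healthy, ill)
```

Unequal decision costs (e.g. `cost_fn=5.0` when missing a diseased subject is costly) shift the chosen threshold downward, which is the practical point of cost-weighted threshold estimation.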
Selbig, William R.; Bannerman, Roger T.
2011-01-01
A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than a multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.
NASA Astrophysics Data System (ADS)
Presley, Marsha A.; Craddock, Robert A.; Zolotova, Natalya
2009-11-01
A line-heat source apparatus was used to measure thermal conductivities of a lightly cemented fluvial sediment (salinity = 1.1 g·kg⁻¹), and the same sample with the cement bonds almost completely disrupted, under low-pressure carbon dioxide atmospheres. The thermal conductivities of the cemented sample were approximately 3× higher, over the range of atmospheric pressures tested, than the thermal conductivities of the same sample after the cement bonds were broken. A thermal conductivity-derived particle size was determined for each sample by comparing these thermal conductivity measurements to previous data that demonstrated the dependence of thermal conductivity on particle size. Actual particle-size distributions were determined via physical separation through brass sieves. When uncemented, 87% of the particles were less than 125 μm in diameter, with 60% of the sample being less than 63 μm in diameter. As much as 35% of the cemented sample was composed of conglomerate particles with diameters greater than 500 μm. The thermal conductivities of the cemented sample were most similar to those of 500-μm glass beads, whereas the thermal conductivities of the uncemented sample were most similar to those of 75-μm glass beads. This study demonstrates that even a small amount of salt cement can significantly increase the thermal conductivity of particulate materials, as predicted by thermal modeling estimates by previous investigators.
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t tests were performed to determine whether any of the differences (descriptively greater than 7 percentage points or 7 yr) were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (pairwise comparisons of methods by province). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient; it can be recommended for sampling from large databases in which the data are listed without any inherent order bias other than alphabetical listing by surname.
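The two sampling schemes being compared can be sketched in a few lines. A minimal sketch on a stand-in population (the membership database itself is not available; list contents, sizes, and seed are illustrative):

```python
import random

def simple_random_sample(records, n, seed=0):
    """SRS: every subset of size n is equally likely."""
    rng = random.Random(seed)
    return rng.sample(records, n)

def systematic_sample(records, n):
    """SS: take every k-th record from the ordered list; valid when the
    ordering (e.g. alphabetical by surname) carries no periodic bias."""
    k = len(records) // n
    return records[::k][:n]

population = list(range(1000))           # stand-in for a membership database
srs = simple_random_sample(population, 200)
ss = systematic_sample(population, 200)
# Compare each sample mean against the population mean
print(sum(srs) / 200, sum(ss) / 200, sum(population) / 1000)
```

On an unordered attribute, both sample means land near the population mean, mirroring the study's finding that SS is an acceptable, convenient substitute for SRS at n > 200.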
Thermal conductivity measurements of particulate materials under Martian conditions
NASA Technical Reports Server (NTRS)
Presley, M. A.; Christensen, P. R.
1993-01-01
The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas particles are then more likely to collide with the solid particles than they are with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs.
particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to provide a means of measuring the thermal conductivity of particulate samples. In order to concentrate on the dependence of the thermal conductivity on particle size, initial runs will use spherical glass beads that are precision-sieved into relatively narrow size ranges and thoroughly washed.
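The mean-free-path argument above can be made concrete with the kinetic-theory formula. A rough worked number, assuming CO₂ (molecular diameter d ≈ 4.6 Å), T ≈ 200 K, and P ≈ 600 Pa — typical martian surface values supplied here for illustration, not taken from the abstract:

$$\lambda = \frac{k_B T}{\sqrt{2}\,\pi d^2 P} \approx \frac{(1.38\times10^{-23}\ \mathrm{J\,K^{-1}})(200\ \mathrm{K})}{\sqrt{2}\,\pi\,(4.6\times10^{-10}\ \mathrm{m})^2\,(600\ \mathrm{Pa})} \approx 5\ \mu\mathrm{m},$$

comparable to or larger than the gap between contacts of fine grains, which is why gas conduction in this regime depends on particle size rather than on gas-gas collisions.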
NASA Astrophysics Data System (ADS)
Arfaei, Babak
This work examines the nucleation mechanism of Sn in SnAgCu alloys and its effect on the microstructure of those solder joints. The nucleation rate of Sn in a SAC alloy was obtained by simultaneous calorimetric examination of the isothermal solidification of 88 flip chip Sn-Ag-Cu solder joints. Qualitative agreement with classic nucleation theory was observed, although it was concluded that the spherical cap model cannot be applied to explain the structure of the nucleus. It was shown that the solidification temperature significantly affects the microstructure; samples that undercooled less than approximately 40 °C revealed one or three large Sn grains, while interlaced twinning was observed in the samples that solidified at lower temperatures. In order to better understand the effect of microstructure on the thermomechanical properties of solder joints, a study of the dependence of room temperature shear fatigue lifetime on Sn grain number and orientation was conducted. This study examined the correlations of variations in fatigue life of solder balls with the microstructure of Sn-Ag-Cu solder. The mean fatigue lifetime was found to be significantly longer for samples with multiple Sn grains than for samples with single Sn grains. For single grain samples, correlations between Sn grain orientation (with respect to the loading direction) and lifetime were observed, providing insight into early failures in SnAgCu solder joints. Correlations between the lifetimes of single Sn grained, SAC205 solder joints and differences in Ag3Sn and Cu6Sn5 precipitate microstructures were investigated. It was found that Ag3Sn precipitates were highly segregated from Cu6Sn5 precipitates on a length scale of approximately twenty microns. Furthermore, large (factor of two) variations of the Sn dendrite arm size were observed within given samples.
Such variations in values of dendrite arm size within a single sample were much larger than observed variations of this parameter between individual samples. Few significant differences were observed in the average size of precipitates in different samples. While the earliest and latest lifetimes of single Sn grained samples were correlated with Sn grain orientation, effects of precipitate microstructure on lifetimes were not clearly delineated.
NDA issues with RFETS vitrified waste forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.; Veazey, G.
1998-12-31
A study was conducted at Los Alamos National Laboratory (LANL) for the purpose of determining the feasibility of using a segmented gamma scanner (SGS) to accurately perform non-destructive analysis (NDA) on certain Rocky Flats Environmental Technology Site (RFETS) vitrified waste samples. This study was performed on a full-scale vitrified ash sample prepared at LANL according to a procedure similar to that anticipated to be used at RFETS. This sample was composed of a borosilicate-based glass frit, blended with ash to produce a Pu content of approximately 1 wt%. The glass frit was taken to a degree of melting necessary to achieve full encapsulation of the ash material. The NDA study performed on this sample showed that SGSs with either 1/2- or 2-inch collimation can achieve an accuracy better than 6% relative to calorimetry and γ-ray isotopics. This accuracy is achievable, after application of appropriate bias corrections, for transmissions of about 1/2% through the waste form and counting times of less than 30 minutes. These results are valid for ash material and graphite fines with the same degree of plutonium particle size, homogeneity, sample density, and sample geometry as the waste form used to obtain the results in this study. A drum-sized thermal neutron counter (TNC) was also included in the study to provide an alternative in the event the SGS failed to meet the required level of accuracy. The preliminary indications are that this method will also achieve the required accuracy with counting times of approximately 30 minutes and appropriate application of bias corrections. The bias corrections can be avoided in all cases if the instruments are calibrated on standards matching the items.
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle has been addressed in previous studies, but no attempt to critically investigate these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples (from 1 to 15) needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as sample size increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
NASA Astrophysics Data System (ADS)
Han, Chengliang; Zhu, Dejie; Wu, Hanzhao; Li, Yao; Cheng, Lu; Hu, Kunhong
2016-06-01
A fast and controllable synthesis method for superparamagnetic magnetite nanoparticles (Fe3O4 NPs) was developed in Fe(III)-triethanolamine (TEA) solution. The phase structure, morphology and particle size of the as-synthesized samples were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The results showed that the magnetic particles were pure Fe3O4 with mean sizes of approximately 10 nm. TEA plays a key role in the formation of well-dispersed Fe3O4 NPs. Vibrating sample magnetometer (VSM) results indicated that the as-obtained Fe3O4 NPs exhibit superparamagnetic behavior with a saturation magnetization (Ms) of about 70 emu/g, suggesting potential applications in magnetic science and technology.
Song, Gyuho; Kong, Tai; Dusoe, Keith J.; ...
2018-01-24
Mechanical properties of materials are strongly dependent on their atomic arrangement as well as the sample dimension, particularly at the micrometer length scale. Here we investigated the small-scale mechanical properties of single-crystalline YCd6, which is a rational approximant of the icosahedral Y-Cd quasicrystal. In situ microcompression tests revealed that shear localization always occurs on {101} planes, but the shear direction is not constrained to any particular crystallographic direction. Furthermore, the yield strengths show a size dependence with a power-law exponent of 0.4. Shear localization on {101} planes and size-dependent yield strength are explained in terms of a large interplanar spacing between {101} planes and the energetics of the shear localization process, respectively. The mechanical behavior of the icosahedral Y-Cd quasicrystal is also compared to understand the influence of translational symmetry on the shear localization process in both YCd6 and Y-Cd quasicrystal micropillars. Finally, the results of this study provide important insight into a fundamental understanding of the shear localization process in novel complex intermetallic compounds.
Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Grasso, Salvatore; Sakka, Yoshio; Tok, Alfred; Su, Liap Tat; Bosman, Michael; Ma, Jan
2012-02-01
Boron carbide B4C powders were subject to reactive spark plasma sintering (also known as field-assisted sintering, pulsed current sintering or plasma-assisted sintering) under a nitrogen atmosphere. For an optimum hexagonal BN (h-BN) content estimated from X-ray diffraction measurements at approximately 0.4 wt%, the as-prepared B4C-(BxOy/BN) ceramic shows Berkovich and Vickers hardness values of 56.7 ± 3.1 GPa and 39.3 ± 7.6 GPa, respectively. These values are higher than for the vacuum-SPS-processed pristine B4C sample and the samples with mechanically added h-BN. XRD and electron microscopy data suggest that in the samples produced by reactive SPS in an N2 atmosphere, containing an estimated 0.3-1.5% h-BN, the crystallite size of the boron carbide grains decreases with increasing amount of N2, while for the newly formed lamellar h-BN the crystallite size is almost constant (approximately 30-50 nm). BN is located at the grain boundaries between the boron carbide grains, where it is wrapped and intercalated by a thin layer of boron oxide. BxOy/BN forms a fine and continuous 3D mesh-like structure that is a possible reason for the good mechanical properties.
Atkinson, Quentin D; Gray, Russell D; Drummond, Alexei J
2008-02-01
The relative timing and size of regional human population growth following our expansion from Africa remain unknown. Human mitochondrial DNA (mtDNA) diversity carries a legacy of our population history. Given a set of sequences, we can use coalescent theory to estimate past population size through time and draw inferences about human population history. However, recent work has challenged the validity of using mtDNA diversity to infer species population sizes. Here we use Bayesian coalescent inference methods, together with a global data set of 357 human mtDNA coding-region sequences, to infer human population sizes through time across 8 major geographic regions. Our estimates of relative population sizes show remarkable concordance with the contemporary regional distribution of humans across Africa, Eurasia, and the Americas, indicating that mtDNA diversity is a good predictor of population size in humans. Plots of population size through time show slow growth in sub-Saharan Africa beginning 143-193 kya, followed by a rapid expansion into Eurasia after the emergence of the first non-African mtDNA lineages 50-70 kya. Outside Africa, the earliest and fastest growth is inferred in Southern Asia approximately 52 kya, followed by a succession of growth phases in Northern and Central Asia (approximately 49 kya), Australia (approximately 48 kya), Europe (approximately 42 kya), the Middle East and North Africa (approximately 40 kya), New Guinea (approximately 39 kya), the Americas (approximately 18 kya), and a second expansion in Europe (approximately 10-15 kya). Comparisons of relative regional population sizes through time suggest that between approximately 45 and 20 kya most of humanity lived in Southern Asia. These findings not only support the use of mtDNA data for estimating human population size but also provide a unique picture of human prehistory and demonstrate the importance of Southern Asia to our recent evolutionary past.
Costs of Food Safety Investments in the Meat and Poultry Slaughter Industries.
Viator, Catherine L; Muth, Mary K; Brophy, Jenna E; Noyes, Gary
2017-02-01
To develop regulations efficiently, federal agencies need to know the costs of implementing various regulatory alternatives. As the regulatory agency responsible for the safety of meat and poultry products, the U.S. Dept. of Agriculture's Food Safety and Inspection Service is interested in the costs borne by meat and poultry establishments. This study estimated the costs of developing, validating, and reassessing hazard analysis and critical control points (HACCP), sanitary standard operating procedures (SSOP), and sampling plans; food safety training for new employees; antimicrobial equipment and solutions; sanitizing equipment; third-party audits; and microbial tests. Using results from an in-person expert consultation, web searches, and contacts with vendors, we estimated capital equipment, labor, materials, and other costs associated with these investments. Results are presented by establishment size (small and large) and species (beef, pork, chicken, and turkey), when applicable. For example, the cost of developing food safety plans, such as HACCP, SSOP, and sampling plans, can range from approximately $6,000 to $87,000, depending on the type of plan and establishment size. Food safety training costs range from approximately $120 to $2,500 per employee, depending on the course and type of employee. The costs of third-party audits range from approximately $13,000 to $24,000 per audit, and establishments are often subject to multiple audits per year. Knowing the cost of these investments will allow researchers and regulators to better assess the effects of food safety regulations and evaluate cost-effective alternatives. © 2017 Institute of Food Technologists®.
NASA Astrophysics Data System (ADS)
Muller, Wayne; Scheuermann, Alexander
2016-04-01
Measuring the electrical permittivity of civil engineering materials is important for a range of ground penetrating radar (GPR) and pavement moisture measurement applications. Compacted unbound granular (UBG) pavement materials present a number of preparation and measurement challenges for conventional characterisation techniques. As an alternative to these methods, a modified free-space (MFS) characterisation approach has previously been investigated. This paper describes recent work to optimise and validate the MFS technique. The research included finite difference time domain (FDTD) modelling to better understand the nature of wave propagation within material samples and the test apparatus. This modelling led to improvements in the test approach and optimisation of sample sizes. The influence of antenna spacing and sample thickness on the permittivity results was investigated through a series of experiments that varied the antenna separation and measured samples of nylon and water. Permittivity measurements of nylon and water samples approximately 100 mm and 170 mm thick were also compared, showing consistent results. These measurements also agreed well with surface probe measurements of the nylon sample and literature values for water. The results indicate permittivity estimates of acceptable accuracy can be obtained using the proposed approach, apparatus and sample sizes.
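As a rough illustration of how a free-space measurement yields permittivity, the real relative permittivity of a low-loss slab can be estimated from the extra propagation delay it adds between the antennas. This is a generic first-order relation, not the specific MFS processing used in the study; the function name and the nylon sanity check are illustrative:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def permittivity_from_delay(delay_s, thickness_m):
    """First-order estimate of relative permittivity from the extra time a
    plane wave takes to cross a sample versus the same path in air:
    delay = (sqrt(eps_r) - 1) * thickness / c, solved for eps_r."""
    return (1.0 + C * delay_s / thickness_m) ** 2

# Sanity check: a 100 mm slab with eps_r = 3 (roughly nylon) adds ~0.24 ns
delay = (3 ** 0.5 - 1) * 0.1 / C
print(round(permittivity_from_delay(delay, 0.1), 2))  # -> 3.0
```

The quadratic dependence on delay is why consistent results across the 100 mm and 170 mm sample thicknesses are a meaningful validation of the apparatus.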
The sample handling system for the Mars Icebreaker Life mission: from dirt to data.
Davé, Arwen; Thompson, Sarah J; McKay, Christopher P; Stoker, Carol R; Zacny, Kris; Paulsen, Gale; Mellerowicz, Bolek; Glass, Brian J; Willson, David; Bonaccorsi, Rosalba; Rask, Jon
2013-04-01
The Mars Icebreaker Life mission will search for subsurface life on Mars. It consists of three payload elements: a drill to retrieve soil samples from approximately 1 m below the surface, a robotic sample handling system to deliver the sample from the drill to the instruments, and the instruments themselves. This paper will discuss the robotic sample handling system. Collecting samples from ice-rich soils on Mars in search of life presents two challenges: protection of that icy soil--considered a "special region" with respect to planetary protection--from contamination from Earth, and delivery of the icy, sticky soil to spacecraft instruments. We present a sampling device that meets these challenges. We built a prototype system and tested it at martian pressure, drilling into ice-cemented soil, collecting cuttings, and transferring them to the inlet port of the SOLID2 life-detection instrument. The tests successfully demonstrated that the Icebreaker drill, sample handling system, and life-detection instrument can collectively operate in these conditions and produce science data that can be delivered via telemetry--from dirt to data. Our results also demonstrate the feasibility of using an air gap to prevent forward contamination. We define a set of six analog soils for testing over a range of soil cohesion, from loose sand to basalt soil, with angles of repose of 27° and 39°, respectively. Particle size is a key determinant of jamming of mechanical parts by soil particles. Jamming occurs when the clearance between moving parts is equal in size to the most common particle size or equal to three of these particles together. Three particles acting together tend to form bridges and lead to clogging. Our experiments show that rotary-hammer action of the Icebreaker drill influences the particle size, typically reducing particle size by ≈ 100 μm.
Presolar Materials in a Giant Cluster IDP of Probable Cometary Origin
NASA Technical Reports Server (NTRS)
Messenger, S.; Brownlee, D. E.; Joswiak, D. J.; Nguyen, A. N.
2015-01-01
Chondritic porous interplanetary dust particles (CP-IDPs) have been linked to comets by their fragile structure, primitive mineralogy, dynamics, and abundant interstellar materials. But differences have emerged between 'cometary' CP-IDPs and comet 81P/Wild 2 Stardust Mission samples. Particles resembling Ca-Al-rich inclusions (CAIs), chondrules, and amoeboid olivine aggregates (AOAs) in Wild 2 samples are rare in CP-IDPs. Unlike IDPs, presolar materials are scarce in Wild 2 samples. These differences may be due to selection effects, such as destruction of fine-grained (presolar) components during the 6 km/s aerogel impact collection of Wild 2 samples. Large refractory grains observed in Wild 2 samples are also unlikely to be found in most (less than 30 micrometers) IDPs. Presolar materials provide a measure of the primitiveness of meteorites and IDPs. Organic matter in IDPs and chondrites shows H and N isotopic anomalies attributed to low-T interstellar or protosolar disk chemistry, where the largest anomalies occur in the most primitive samples. Presolar silicates are abundant in meteorites with low levels of aqueous alteration (Acfer 094 approximately 200 ppm) and scarce in altered chondrites (e.g. Semarkona approximately 20 ppm). Presolar silicates in minimally altered CP-IDPs range from approximately 400 ppm to 15,000 ppm, possibly reflecting variable levels of destruction in the solar nebula or statistical variations due to small sample sizes. Here we present preliminary isotopic and mineralogical studies of a very large CP-IDP. The goals of this study are to more accurately determine the abundances of presolar components of CP-IDP material for comparison with comet Wild 2 samples and meteorites. The large mass of this IDP presents a unique opportunity to accurately determine the abundance of pre-solar grains in a likely cometary sample.
Imaging inert fluorinated gases in cracks: perhaps in David's ankles.
Kuethe, Dean O; Scholz, Markus D; Fantazzini, Paola
2007-05-01
Inspired by the challenge of determining the nature of cracks on the ankles of Michelangelo's statue David, we discovered that one can image SF6 gas in cracks in marble samples with alacrity. The imaging method produces images of gas with a signal-to-noise ratio (SNR) of 100-250, which is very high for magnetic resonance imaging (MRI) in general, let alone for an image of a gas at thermal equilibrium polarization. To put this unusual SNR in better perspective, we imaged SF6 in a crack in a marble sample and imaged the lung tissue of a live rat (a more familiar variety of sample to many MRI scientists) using the same pulse sequence, the same size coils and the same MRI system. In both cases, we try to image subvoxel thin sheets of material that should appear bright against a darker background. By choosing imaging parameters appropriate for the different relaxation properties of SF6 gas versus lung tissue and by choosing voxel sizes appropriate for the different goals of detecting subvoxel cracks on marble versus resolving subvoxel thin sheets of tissue, the SNR for voxels full of material was 220 and 14 for marble and lung, respectively. A major factor is that we chose large voxels to optimize SNR for detecting small cracks and we chose small voxels for resolving lung features at the expense of SNR. Imaging physics will cooperate to provide detection of small cracks on marble, but David's size poses a challenge for magnet designers. For the modest goal of imaging cracks in the left ankle, we desire a magnet with an approximately 32-cm gap and a flux density of approximately 0.36 T that weighs <500 kg.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
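The median particle sizes reported above (d50 values) are conventionally read off the cumulative mass-fraction curve by linear interpolation. A minimal sketch of that computation, under the assumption of an ascending size grid with cumulative fractions from 0 to 1 (not the survey's actual processing code):

```python
def d50(sizes_um, cum_mass_frac):
    """Median particle size (d50) by linear interpolation on the cumulative
    mass-fraction curve. sizes_um must be ascending; cum_mass_frac runs 0..1."""
    for i in range(1, len(sizes_um)):
        if cum_mass_frac[i] >= 0.5:
            x0, x1 = sizes_um[i - 1], sizes_um[i]
            y0, y1 = cum_mass_frac[i - 1], cum_mass_frac[i]
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return sizes_um[-1]

# Half the mass below 50 µm on this (hypothetical) curve:
print(d50([10.0, 50.0, 100.0], [0.0, 0.5, 1.0]))  # → 50.0
```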
NASA Astrophysics Data System (ADS)
Wilbourn, E.; Thornton, D.; Brooks, S. D.; Graff, J.
2016-12-01
The role of marine aerosols as ice nucleating particles is currently poorly understood. Despite growing interest, there are remarkably few ice nucleation measurements on representative marine samples. Here we present results of heterogeneous ice nucleation from laboratory studies and in-situ air and sea water samples collected during NAAMES (North Atlantic Aerosol and Marine Ecosystems Study). Thalassiosira weissflogii (CCMP 1051) was grown under controlled conditions in batch cultures and the ice nucleating activity depended on the growth phase of the cultures. Immersion freezing temperatures of the lab-grown diatoms were determined daily using a custom ice nucleation apparatus cooled at a set rate. Our results show that the age of the culture had a significant impact on ice nucleation temperature, with samples in stationary phase causing nucleation at -19.9 °C, approximately nine degrees warmer than the freezing temperature during exponential growth phase. Field samples gathered during the NAAMES II cruise in May 2016 were also tested for ice nucleating ability. Two types of samples were gathered. Firstly, whole cells were fractionated by size from surface seawater using a BD Biosciences Influx Cell Sorter (BD BS ISD). Secondly, aerosols were generated using the SeaSweep and subsequently size-selected using a PIXE Cascade Impactor. Samples were tested for the presence of ice nucleating particles (INP) using the technique described above. There were significant differences in the freezing temperature of the different samples; of the three sample types the lab-grown cultures tested during stationary phase froze at the warmest temperatures, followed by the SeaSweep samples (-25.6 °C) and the size-fractionated cell samples (-31.3 °C). Differences in ice nucleation ability may be due to size differences between the INP, differences in chemical composition of the sample, or some combination of these two factors. 
Results will be presented and atmospheric implications discussed.
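Droplet-freezing assays like the one described above are commonly summarized as a cumulative ice-nucleating-particle concentration using the Vali (1971) relation n(T) = −ln(1 − f)/V, where f is the fraction of droplets frozen by temperature T and V is the sample volume per droplet. A minimal sketch (the droplet counts and volume below are hypothetical, and this may not be the exact reduction the authors used):

```python
import math

def inp_concentration(frozen, total, volume_per_droplet_L):
    """Cumulative INP concentration (per litre of sample) at a given
    temperature, from the frozen droplet fraction: n(T) = -ln(1 - f) / V."""
    f = frozen / total
    return -math.log(1.0 - f) / volume_per_droplet_L

# 30 of 60 droplets of 10 microlitres each frozen by temperature T:
print(inp_concentration(30, 60, 1e-5))  # ≈ 6.9e4 INP per litre
```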
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
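The link between conditional power and a threshold on the interim test statistic can be sketched for a one-sided z-test under the common "current trend" assumption (the drift observed so far continues to the end of the trial). This is an illustrative sketch, not the article's derivation; the function names and the α value are placeholders.

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power of a one-sided z-test at information fraction t,
    assuming the observed drift continues ('current trend'):
        CP = Phi((z1/sqrt(t) - z_{1-alpha}) / sqrt(1 - t)).
    """
    z_alpha = N.inv_cdf(1.0 - alpha)
    t = info_frac
    return N.cdf((z_interim / sqrt(t) - z_alpha) / sqrt(1.0 - t))

# CP exceeds 50% exactly when z_interim > z_{1-alpha} * sqrt(t), i.e. the
# conditional-power condition becomes a simple threshold on the interim statistic.
```

Halfway through (t = 0.5) with α = 0.025, an interim statistic of z1 = 1.96·√0.5 ≈ 1.39 sits exactly at CP = 50%; anything above it is "promising" under this convention.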
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
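Generating coalescent waiting times under a piecewise-constant N(t), as the abstract describes, amounts to sampling from a non-homogeneous exponential whose rate is constant on each piece: with k lineages, the rate on a piece of size N is k(k−1)/(2N), so one can spend a unit-rate exponential "budget" piece by piece. This is a generic illustration of that time-rescaling idea, not the authors' implementation; the function and argument names are invented.

```python
import random

def coalescence_time(k, change_times, sizes, rng=random):
    """Waiting time until the next coalescence of k lineages under a
    piecewise-constant population size: sizes[i] applies from change_times[i]
    to change_times[i+1] (the last size extends forever; change_times[0] = 0).
    The hazard on each piece is k(k-1)/(2*N), so a unit-rate exponential
    budget is consumed piece by piece until it runs out."""
    budget = rng.expovariate(1.0)          # unit-rate exponential draw
    rate_coeff = k * (k - 1) / 2.0
    t = 0.0
    for i, start in enumerate(change_times):
        end = change_times[i + 1] if i + 1 < len(change_times) else float("inf")
        rate = rate_coeff / sizes[i]
        span = end - start
        if budget <= rate * span:          # coalescence falls in this piece
            return t + budget / rate
        budget -= rate * span              # otherwise spend the piece and move on
        t += span
    return t
```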
Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A
2013-01-01
The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
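As a concrete illustration of the effect-size computation discussed here, Cohen's d for admission-to-discharge change can be computed with the common pooled-SD convention; the paper's exact formula may differ, and the ratings below are hypothetical.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(admission, discharge):
    """Cohen's d for mean admission-to-discharge change, using the pooled
    standard deviation of the two sets of ratings (one common convention)."""
    sd_pool = sqrt((stdev(admission) ** 2 + stdev(discharge) ** 2) / 2.0)
    return (mean(discharge) - mean(admission)) / sd_pool

# Hypothetical 1-5 Omaha System ratings for one client problem:
print(cohens_d([2, 2, 3, 3], [3, 3, 4, 4]))  # ≈ 1.73, a large effect
```

Unlike a p-value, this quantity does not shrink or grow with how many times the problem was documented, which is the practical point the abstract makes.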
Till, J.L.; Jackson, M.J.; Rosenbaum, J.G.; Solheid, P.
2011-01-01
The Tiva Canyon Tuff contains dispersed nanoscale Fe-Ti-oxide grains with a narrow magnetic grain size distribution, making it an ideal material in which to identify and study grain-size-sensitive magnetic behavior in rocks. A detailed magnetic characterization was performed on samples from the basal 5 m of the tuff. The magnetic materials in this basal section consist primarily of (low-impurity) magnetite in the form of elongated submicron grains exsolved from volcanic glass. Magnetic properties studied include bulk magnetic susceptibility, frequency-dependent and temperature-dependent magnetic susceptibility, anhysteretic remanence acquisition, and hysteresis properties. The combined data constitute a distinct magnetic signature at each stratigraphic level in the section corresponding to different grain size distributions. The inferred magnetic domain state changes progressively upward from superparamagnetic grains near the base to particles with pseudo-single-domain or metastable single-domain characteristics near the top of the sampled section. Direct observations of magnetic grain size confirm that distinct transitions in room temperature magnetic susceptibility and remanence probably denote the limits of stable single-domain behavior in the section. These results provide a unique example of grain-size-dependent magnetic properties in noninteracting particle assemblages over three decades of grain size, including close approximations of ideal Stoner-Wohlfarth assemblages, and may be considered a useful reference for future rock magnetic studies involving grain-size-sensitive properties.
Recovery of glass from the inert fraction refused by MBT plants in a pilot plant.
Dias, Nilmara; Garrinhas, Inés; Maximo, Angela; Belo, Nuno; Roque, Paulo; Carvalho, M Teresa
2015-12-01
Selective collection is a common practice in many countries. However, even in some of those countries there are recyclable materials, like packaging glass, erroneously deposited in the Mixed Municipal Solid Waste (MMSW). In the present paper, a solution is proposed to recover glass from the inert reject of Mechanical and Biological Treatment (MBT) plants treating MMSW, with the aim of recycling it. The inert reject of MBT (MBTr) plants is characterized by its small particle size and high heterogeneity. The study was made with three real samples of diverse characteristics, imposed mainly by the different upstream MBT plants. One of the samples (VN) had a high content in organics (approximately 50%) and a particle size smaller than 16 mm. The other two were coarser and exhibited similar particle size distribution, but one (RE) was rich in glass (almost 70%) while the other (SD) contained about 40% in glass. A flowsheet was developed integrating drying, to eliminate moisture related with organic matter contamination; magnetic separation, to separate remaining small ferrous particles; vacuum suction, to eliminate light materials; screening, to eliminate the finer fraction that has an insignificant glass content, and to classify the >6 mm fraction into 6-16 mm and >16 mm fractions to be processed separately; separation by particle shape, in the RecGlass equipment specifically designed to eliminate stones; and optical sorting, to eliminate opaque materials. A pilot plant was built and the tests were conducted with the three samples separately. With all samples, it was possible to attain approximately 99% content in glass in the glass products, but the recovery of glass was related with the feed particle size. The finer the feed was, the lower the percentage of glass recovered in the glass product. The results show that each one of the separation processes was needed for product enrichment.
The organic matter recovered in the glass product was high, ranging from 0.76% to 1.13%, showing that drying was not sufficient in the tests but that it is a key process for the success of the operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Gottlieb, Esther E.; And Others
Attitudes of Israeli senior faculty concerning research and teaching were evaluated using the Carnegie international questionnaire. Approximately one third of the total faculty population in Israel was randomly sampled, but stratified by institutional size. The questionnaire was sent to 2,225 faculty and 502 returned completed forms (22.56…
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
ERIC Educational Resources Information Center
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
Bhattacharya, K; Tripathi, A K; Dey, G K; Gupta, N M
2005-05-01
Nanosize clusters of titania were dispersed in mesoporous MCM-41 silica matrix with the help of the incipient wet-impregnation route, using an isopropanol solution of titanium isopropoxide as precursor. The clusters thus formed were of pure anatase phase and their size depended upon the titania loading. In the case of low (<15 wt %) loadings, the TiO2 particles were X-ray and laser-Raman amorphous, confirming very high dispersion. These particles were mostly ≤2 nm in size. On the other hand, larger size clusters (2-15 nm) were present in a sample with a higher loading of approximately 21 wt %. These particles of titania, irrespective of their size, exhibited an absorbance behavior similar to that of bulk TiO2. Powder X-ray diffraction, N2-adsorption and transmission electron microscopy results showed that while smaller size particles were confined mostly inside the pore system, the larger size particles occupied the external surface of the host matrix. At the same time, the structural integrity of the host was maintained even though some deformation in the pore system was noticed in the case of the sample having the highest loading. The core level X-ray photoelectron spectroscopy results revealed a +4 valence state of Ti in all the samples. A positive binding energy shift and the increase of the width of Ti 2p peaks were observed, however, with the decrease in the particle size of supported titania crystallites, indicative of a microenvironment for surface sites that is different from that of the bulk.
The diffusion approximation. An application to radiative transfer in clouds
NASA Technical Reports Server (NTRS)
Arduini, R. F.; Barkstrom, B. R.
1976-01-01
It is shown how the radiative transfer equation reduces to the diffusion equation. To keep the mathematics as simple as possible, the approximation is applied to a cylindrical cloud of radius R and height h. The diffusion equation separates in cylindrical coordinates and, in a sample calculation, the solution is evaluated for a range of cloud radii with cloud heights of 0.5 km and 1.0 km. The simplicity of the method and the speed with which solutions are obtained give it potential as a tool with which to study the effects of finite-sized clouds on the albedo of the earth-atmosphere system.
Everett, C.R.; Chin, Y.-P.; Aiken, G.R.
1999-01-01
A 1,000-Dalton tangential-flow ultrafiltration (TFUF) membrane was used to isolate dissolved organic matter (DOM) from several freshwater environments. The TFUF unit used in this study was able to completely retain a polystyrene sulfonate 1,800-Dalton standard. Unaltered and TFUF-fractionated DOM molecular weights were assayed by high-pressure size exclusion chromatography (HPSEC). The weight-averaged molecular weights of the retentates were larger than those of the raw water samples, whereas the filtrates were all significantly smaller and approximately the same size or smaller than the manufacturer-specified pore size of the membrane. Moreover, at 280 nm the molar absorptivity of the DOM retained by the ultrafilter is significantly larger than the material in the filtrate. This observation suggests that most of the chromophoric components are associated with the higher molecular weight fraction of the DOM pool. Multivalent metals in the aqueous matrix also affected the molecular weights of the DOM molecules. Typically, proton-exchanged DOM retentates were smaller than untreated samples. This TFUF system appears to be an effective means of isolating aquatic DOM by size, but the ultimate size of the retentates may be affected by the presence of metals and by configurational properties unique to the DOM phase.
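The HPSEC molecular-weight averages referred to above are conventionally computed from chromatogram slices, weighting each slice's detector response h_i by the molecular weight M_i assigned to that elution time by calibration. A minimal sketch of the standard formulas (generic textbook definitions, not the authors' software):

```python
def weight_average_mw(heights, masses):
    """Weight-averaged molecular weight from HPSEC chromatogram slices:
    Mw = sum(h_i * M_i) / sum(h_i), where h_i is the detector response of
    slice i and M_i its calibration-assigned molecular weight."""
    return sum(h * m for h, m in zip(heights, masses)) / sum(heights)

def number_average_mw(heights, masses):
    """Number-averaged molecular weight: Mn = sum(h_i) / sum(h_i / M_i)."""
    return sum(heights) / sum(h / m for h, m in zip(heights, masses))

# Two equal-response slices at 1,000 and 3,000 Da (hypothetical values):
print(weight_average_mw([1.0, 1.0], [1000.0, 3000.0]))  # → 2000.0
```

Because Mw weights large molecules more heavily than Mn, retentates enriched in high-molecular-weight DOM show larger Mw, consistent with the comparison reported above.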
Gaeuman, David; Andrews, E.D.; Krause, Andreas; Smith, Wes
2009-01-01
Bed load samples from four locations in the Trinity River of northern California are analyzed to evaluate the performance of the Wilcock‐Crowe bed load transport equations for predicting fractional bed load transport rates. Bed surface particles become smaller and the fraction of sand on the bed increases with distance downstream from Lewiston Dam. The dimensionless reference shear stress for the mean bed particle size (τ*rm) is largest near the dam, but varies relatively little between the more downstream locations. The relation between τ*rm and the reference shear stresses for other size fractions is constant across all locations. Total bed load transport rates predicted with the Wilcock‐Crowe equations are within a factor of 2 of sampled transport rates for 68% of all samples. The Wilcock‐Crowe equations nonetheless consistently under‐predict the transport of particles larger than 128 mm, frequently by more than an order of magnitude. Accurate prediction of the transport rates of the largest particles is important for models in which the evolution of the surface grain size distribution determines subsequent bed load transport rates. Values of τ*rm estimated from bed load samples are up to 50% larger than those predicted with the Wilcock‐Crowe equations, and sampled bed load transport approximates equal mobility across a wider range of grain sizes than is implied by the equations. Modifications to the Wilcock‐Crowe equation for determining τ*rm and the hiding function used to scale τ*rm to other grain size fractions are proposed to achieve the best fit to observed bed load transport in the Trinity River.
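For readers unfamiliar with the equations being evaluated, the reference-stress and hiding-function components of the Wilcock and Crowe (2003) model take the form sketched below. This reproduces the published relations as a point of reference, not the modified versions this paper proposes.

```python
from math import exp

def wc_reference_shields(sand_fraction):
    """Dimensionless reference Shields stress for the mean surface grain size
    (Wilcock & Crowe 2003): tau*_rm = 0.021 + 0.015 * exp(-20 * Fs),
    where Fs is the fraction of sand on the bed surface."""
    return 0.021 + 0.015 * exp(-20.0 * sand_fraction)

def wc_hiding_exponent(d_i, d_sm):
    """Hiding-function exponent b = 0.67 / (1 + exp(1.5 - Di/Dsm))."""
    return 0.67 / (1.0 + exp(1.5 - d_i / d_sm))

def wc_reference_shear(tau_rm, d_i, d_sm):
    """Scale the mean-size reference shear stress to size fraction i:
    tau_ri / tau_rm = (Di / Dsm) ** b."""
    return tau_rm * (d_i / d_sm) ** wc_hiding_exponent(d_i, d_sm)
```

More sand on the bed lowers the reference stress (tau*_rm falls from 0.036 at Fs = 0 toward 0.021), and the hiding exponent flattens the mobility differences between size fractions; it is this scaling to the coarsest fractions that the abstract reports as under-predicting observed transport.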
Is it appropriate to composite fish samples for mercury trend monitoring and consumption advisories?
Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Drouillard, Ken G; Arhonditsis, George B; Petro, Steve
2016-03-01
Monitoring mercury levels in fish can be costly because variation by space, time, and fish type/size needs to be captured. Here, we explored whether compositing fish samples to decrease analytical costs would reduce the effectiveness of the monitoring objectives. Six compositing methods were evaluated by applying them to an existing extensive dataset, and examining their performance in reproducing the fish consumption advisories and temporal trends. The methods resulted in varying amounts of sample reduction (average 34-72%), but all (except one) reproduced advisories very well (96-97% of the advisories did not change or were one category more restrictive compared to analysis of individual samples). Similarly, the methods performed reasonably well in recreating temporal trends, especially when longer-term and frequent measurements were considered. The results indicate that compositing samples within 5 cm fish size bins, or retaining the largest/smallest individuals and compositing in-between samples in batches of 5 with decreasing fish size, would be the best approaches. Based on the literature, the findings from this study are applicable to fillet, muscle plug and whole fish mercury monitoring studies. The compositing methods may also be suitable for monitoring Persistent Organic Pollutants (POPs) in fish. Overall, compositing fish samples for mercury monitoring could yield substantial savings (approximately 60% of the analytical cost) and should be considered in fish mercury monitoring, especially in long-term programs or when study cost is a concern. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
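The 5 cm size-bin compositing scheme recommended above can be sketched as a simple grouping step, after which each bin's fish would be pooled into one composite for analysis. This is an illustrative helper with invented names, not the study's code.

```python
def composite_by_size(lengths_cm, bin_cm=5.0):
    """Group fish into fixed-width length bins (default 5 cm) so that each
    composite pools similar-sized individuals. Returns a mapping from the
    bin's lower edge to the lengths assigned to that bin."""
    bins = {}
    for length in lengths_cm:
        start = int(length // bin_cm) * bin_cm
        bins.setdefault(start, []).append(length)
    return bins

# Four fish fall into three 5 cm bins (hypothetical lengths):
print(composite_by_size([31.0, 33.0, 36.0, 41.0]))
# {30.0: [31.0, 33.0], 35.0: [36.0], 40.0: [41.0]}
```

Pooling within narrow size bins matters because fish mercury concentration generally increases with size; wide bins would blur exactly the size signal that consumption advisories depend on.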
Cronin, Matthew A; Amstrup, Steven C; Talbot, Sandra L; Sage, George K; Amstrup, Kristin S
2009-01-01
Polar bears (Ursus maritimus) are unique among bears in that they are adapted to the Arctic sea ice environment. Genetic data are useful for understanding their evolution and can contribute to management. We assessed parentage and relatedness of polar bears in the southern Beaufort Sea, Alaska, with genetic data and field observations of age, sex, and mother-offspring and sibling relationships. Genotypes at 14 microsatellite DNA loci for 226 bears indicate that genetic variation is comparable to other populations of polar bears with mean number of alleles per locus of 7.9 and observed and expected heterozygosity of 0.71. The genetic data verified 60 field-identified mother-offspring pairs and identified 10 additional mother-cub pairs and 48 father-offspring pairs. The entire sample of related and unrelated bears had a mean pairwise relatedness index (r(xy)) of approximately zero, parent-offspring and siblings had r(xy) of approximately 0.5, and 5.2% of the samples had r(xy) values within the range expected for parent-offspring. Effective population size (N(e) = 277) and the ratio of N(e) to total population size (N(e)/N = 0.182) were estimated from the numbers of reproducing males and females. N(e) estimates with genetic methods gave variable results. Our results verify and expand field data on reproduction by females and provide new data on reproduction by males and estimates of relatedness and N(e) in a polar bear population.
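The effective-size estimate from numbers of reproducing males and females presumably follows Wright's standard unequal-sex-ratio formula; a sketch of that formula is below (the counts shown are hypothetical, not the study's data).

```python
def effective_size(breeding_males, breeding_females):
    """Wright's effective population size under an unequal sex ratio:
    Ne = 4 * Nm * Nf / (Nm + Nf)."""
    nm, nf = breeding_males, breeding_females
    return 4.0 * nm * nf / (nm + nf)

# A skewed sex ratio drags Ne well below the census count of breeders:
print(effective_size(50, 150))  # → 150.0, versus 200 breeders in total
```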
Rotondi, Michael A; O'Campo, Patricia; O'Brien, Kristen; Firestone, Michelle; Wolfe, Sara H; Bourgeois, Cheryllee; Smylie, Janet K
2017-12-26
To provide evidence of the magnitude of census undercounts of 'hard-to-reach' subpopulations and to improve estimation of the size of the urban indigenous population in Toronto, Canada, using respondent-driven sampling (RDS). Respondent-driven sampling. The study took place in the urban indigenous community in Toronto, Canada. Three locations within the city were used to recruit study participants. 908 adult participants (aged 15+) who self-identified as indigenous (First Nation, Inuit or Métis) and lived in the city of Toronto. Study participants were generally young, with over 60% of indigenous adults under the age of 45 years. Household income was low, with approximately two-thirds of the sample living in households which earned less than C$20,000 in the previous year. We collected baseline data on demographic characteristics, including indigenous identity, age, gender, income, household type and household size. Our primary outcome asked: 'Did you complete the 2011 Census Canada questionnaire?' Using RDS and our large-scale survey of the urban indigenous population in Toronto, Canada, we have shown that the most recent Canadian census underestimated the size of the indigenous population in Toronto by a factor of 2 to 4. Specifically, under conservative assumptions, there are approximately 55 000 (95% CI 45 000 to 73 000) indigenous people living in Toronto, at least double the current estimate of 19 270. Our indigenous enumeration methods, including RDS and census completion information, will have broad impacts across governmental and health policy, potentially improving healthcare access for this community. These novel applications of RDS may be relevant for the enumeration of other 'hard-to-reach' populations, such as illegal immigrants or homeless individuals in Canada and beyond. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Stormer, Ame; Tun, Waimar; Harxhi, Arjan; Bodanovskaia, Zinaida; Yakovleva, Anna; Rusakova, Maia; Levina, Olga; Bani, Roland; Rjepaj, Klodian; Bino, Silva
2006-01-01
Injection drug users in Tirana, Albania and St. Petersburg, Russia were recruited into a study assessing HIV-related behaviors and HIV serostatus using Respondent Driven Sampling (RDS), a peer-driven recruitment sampling strategy that results in a probability sample. (Salganik M, Heckathorn DD. Sampling and estimation in hidden populations using respondent-driven sampling. Sociol Method. 2004;34:193–239). This paper presents a comparison of RDS implementation, findings on network and recruitment characteristics, and lessons learned. Initiated with 13 to 15 seeds, approximately 200 IDUs were recruited within 8 weeks. Information resulting from RDS indicates that social network patterns from the two studies differ greatly. Female IDUs in Tirana had smaller network sizes than male IDUs, unlike in St. Petersburg where female IDUs had larger network sizes than male IDUs. Recruitment patterns in each country also differed by demographic categories. Recruitment analyses indicate that IDUs form socially distinct groups by sex in Tirana, whereas there was a greater degree of gender mixing patterns in St. Petersburg. RDS proved to be an effective means of surveying these hard-to-reach populations. PMID:17075727
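Both of the preceding studies rest on the Salganik-Heckathorn estimation framework cited above, in which respondents are weighted by the inverse of their reported network size. A minimal sketch of the closely related RDS-II (Volz-Heckathorn) estimator, with entirely hypothetical data:

```python
# RDS-II (Volz-Heckathorn) estimator: each respondent is weighted by the
# inverse of their reported personal network size (degree), compensating
# for the higher inclusion probability of well-connected individuals.
def rds_ii_estimate(degrees, outcomes):
    """Estimate the population proportion of a binary trait from RDS data.

    degrees  -- reported network sizes d_i (must be positive)
    outcomes -- 0/1 trait indicators x_i, aligned with degrees
    """
    if len(degrees) != len(outcomes):
        raise ValueError("degrees and outcomes must align")
    weights = [1.0 / d for d in degrees]
    return sum(w * x for w, x in zip(weights, outcomes)) / sum(weights)

# Hypothetical example: respondents with large networks report the trait
# more often; the naive mean over-counts them, RDS-II down-weights them.
degrees = [2, 2, 5, 10, 10, 20]
outcomes = [0, 0, 1, 1, 1, 1]
naive = sum(outcomes) / len(outcomes)          # = 4/6, about 0.667
adjusted = rds_ii_estimate(degrees, outcomes)  # lower than the naive mean
```

The down-weighting of high-degree respondents is what corrects for the oversampling of well-connected individuals that peer-driven recruitment otherwise produces.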
An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.
Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel
2016-06-01
The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel, discrete approximation for diffusion processes, termed mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported. Copyright © 2016 by the Genetics Society of America.
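The mean transition time approximation itself is specific to the paper, but the discrete chain it is designed to mimic is the standard Wright-Fisher binomial transition. A sketch of one exact transition row under selection (the parameter values below are illustrative, not taken from the paper):

```python
import math

# Standard discrete Wright-Fisher transition probabilities for one locus
# with selection coefficient s in a population of N diploids (2N copies).
def wf_transition_row(i, N, s):
    """Return P(i -> j copies) for j = 0..2N under Wright-Fisher sampling
    with selection acting before binomial resampling."""
    two_n = 2 * N
    p = i / two_n
    p_sel = p * (1 + s) / (1 + p * s)  # allele frequency after selection
    return [math.comb(two_n, j) * p_sel**j * (1 - p_sel)**(two_n - j)
            for j in range(two_n + 1)]

# Illustrative values: 10 copies out of 40, weak positive selection.
row = wf_transition_row(i=10, N=20, s=0.05)
```

Each row of the resulting (2N+1)-state matrix sums to one, the boundary states 0 and 2N are absorbing, and positive s shifts the expected copy number upward.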
Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.
2016-01-01
Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628
A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.
Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês
2015-12-01
Diseases cooccur in individuals more often than expected by chance, and this may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
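The abstract notes that BFs may be approximated from summary statistics. The standard such approximation is Wakefield's approximate Bayes factor, sketched here with an assumed prior variance W on the true effect (the authors' exact calibration may differ); it needs only the effect estimate and its standard error:

```python
import math

# Wakefield-style approximate Bayes factor from summary statistics.
# beta_hat, se come from the association test; W is the prior variance of
# the true effect under the alternative (a modelling choice, illustrative
# here). ABF_01 > 1 favours the null; 1/ABF_01 favours association.
def approx_bf_null(beta_hat, se, W=0.04):
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    return math.sqrt((V + W) / V) * math.exp(-z2 * W / (2 * (V + W)))
```

Because V shrinks as the study grows, the same z-score yields stronger evidence in a larger study, which is the power-adjustment property the abstract contrasts with fixed P-value thresholds.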
NASA Astrophysics Data System (ADS)
Trakumas, S.; Salter, E.
2009-02-01
Adverse health effects due to exposure to airborne particles are associated with particle deposition within the human respiratory tract. Particle size, shape, chemical composition, and the individual physiological characteristics of each person determine to what depth inhaled particles may penetrate and deposit within the respiratory tract. Various particle inertial classification devices are available to fractionate airborne particles according to their aerodynamic size to approximate particle penetration through the human respiratory tract. Cyclones are most often used to sample thoracic or respirable fractions of inhaled particles. Extensive studies of different cyclonic samplers have shown, however, that the sampling characteristics of cyclones do not accurately follow the selected penetration convention over its entire range. In the search for a more accurate way to assess worker exposure to different fractions of inhaled dust, a novel sampler comprising several inertial impactors arranged in parallel was designed and tested. Prototypes of respirable and thoracic samplers, each comprising four impactors arranged in parallel, were manufactured and tested. Results indicated that the prototype samplers followed closely the penetration characteristics for which they were designed. The new samplers were found to perform similarly for liquid and solid test particles; penetration characteristics remained unchanged even after prolonged exposure to coal mine dust at high concentration. The new parallel impactor design can be applied to approximate any monotonically decreasing penetration curve at a selected flow rate. Personal-size samplers that operate at a few L/min as well as area samplers that operate at higher flow rates can be made based on the suggested design. Performance of such samplers can be predicted with high accuracy employing well-established impaction theory.
GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment
Rietveld, Cornelius A.; Medland, Sarah E.; Derringer, Jaime; Yang, Jian; Esko, Tõnu; Martin, Nicolas W.; Westra, Harm-Jan; Shakhbazov, Konstantin; Abdellaoui, Abdel; Agrawal, Arpana; Albrecht, Eva; Alizadeh, Behrooz Z.; Amin, Najaf; Barnard, John; Baumeister, Sebastian E.; Benke, Kelly S.; Bielak, Lawrence F.; Boatman, Jeffrey A.; Boyle, Patricia A.; Davies, Gail; de Leeuw, Christiaan; Eklund, Niina; Evans, Daniel S.; Ferhmann, Rudolf; Fischer, Krista; Gieger, Christian; Gjessing, Håkon K.; Hägg, Sara; Harris, Jennifer R.; Hayward, Caroline; Holzapfel, Christina; Ibrahim-Verbaas, Carla A.; Ingelsson, Erik; Jacobsson, Bo; Joshi, Peter K.; Jugessur, Astanand; Kaakinen, Marika; Kanoni, Stavroula; Karjalainen, Juha; Kolcic, Ivana; Kristiansson, Kati; Kutalik, Zoltán; Lahti, Jari; Lee, Sang H.; Lin, Peng; Lind, Penelope A.; Liu, Yongmei; Lohman, Kurt; Loitfelder, Marisa; McMahon, George; Vidal, Pedro Marques; Meirelles, Osorio; Milani, Lili; Myhre, Ronny; Nuotio, Marja-Liisa; Oldmeadow, Christopher J.; Petrovic, Katja E.; Peyrot, Wouter J.; Polašek, Ozren; Quaye, Lydia; Reinmaa, Eva; Rice, John P.; Rizzi, Thais S.; Schmidt, Helena; Schmidt, Reinhold; Smith, Albert V.; Smith, Jennifer A.; Tanaka, Toshiko; Terracciano, Antonio; van der Loos, Matthijs J.H.M.; Vitart, Veronique; Völzke, Henry; Wellmann, Jürgen; Yu, Lei; Zhao, Wei; Allik, Jüri; Attia, John R.; Bandinelli, Stefania; Bastardot, François; Beauchamp, Jonathan; Bennett, David A.; Berger, Klaus; Bierut, Laura J.; Boomsma, Dorret I.; Bültmann, Ute; Campbell, Harry; Chabris, Christopher F.; Cherkas, Lynn; Chung, Mina K.; Cucca, Francesco; de Andrade, Mariza; De Jager, Philip L.; De Neve, Jan-Emmanuel; Deary, Ian J.; Dedoussis, George V.; Deloukas, Panos; Dimitriou, Maria; Eiriksdottir, Gudny; Elderson, Martin F.; Eriksson, Johan G.; Evans, David M.; Faul, Jessica D.; Ferrucci, Luigi; Garcia, Melissa E.; Grönberg, Henrik; Gudnason, Vilmundur; Hall, Per; Harris, Juliette M.; Harris, Tamara B.; Hastie, Nicholas D.; 
Heath, Andrew C.; Hernandez, Dena G.; Hoffmann, Wolfgang; Hofman, Adriaan; Holle, Rolf; Holliday, Elizabeth G.; Hottenga, Jouke-Jan; Iacono, William G.; Illig, Thomas; Järvelin, Marjo-Riitta; Kähönen, Mika; Kaprio, Jaakko; Kirkpatrick, Robert M.; Kowgier, Matthew; Latvala, Antti; Launer, Lenore J.; Lawlor, Debbie A.; Lehtimäki, Terho; Li, Jingmei; Lichtenstein, Paul; Lichtner, Peter; Liewald, David C.; Madden, Pamela A.; Magnusson, Patrik K. E.; Mäkinen, Tomi E.; Masala, Marco; McGue, Matt; Metspalu, Andres; Mielck, Andreas; Miller, Michael B.; Montgomery, Grant W.; Mukherjee, Sutapa; Nyholt, Dale R.; Oostra, Ben A.; Palmer, Lyle J.; Palotie, Aarno; Penninx, Brenda; Perola, Markus; Peyser, Patricia A.; Preisig, Martin; Räikkönen, Katri; Raitakari, Olli T.; Realo, Anu; Ring, Susan M.; Ripatti, Samuli; Rivadeneira, Fernando; Rudan, Igor; Rustichini, Aldo; Salomaa, Veikko; Sarin, Antti-Pekka; Schlessinger, David; Scott, Rodney J.; Snieder, Harold; Pourcain, Beate St; Starr, John M.; Sul, Jae Hoon; Surakka, Ida; Svento, Rauli; Teumer, Alexander; Tiemeier, Henning; Rooij, Frank JAan; Van Wagoner, David R.; Vartiainen, Erkki; Viikari, Jorma; Vollenweider, Peter; Vonk, Judith M.; Waeber, Gérard; Weir, David R.; Wichmann, H.-Erich; Widen, Elisabeth; Willemsen, Gonneke; Wilson, James F.; Wright, Alan F.; Conley, Dalton; Davey-Smith, George; Franke, Lude; Groenen, Patrick J. F.; Hofman, Albert; Johannesson, Magnus; Kardia, Sharon L.R.; Krueger, Robert F.; Laibson, David; Martin, Nicholas G.; Meyer, Michelle N.; Posthuma, Danielle; Thurik, A. Roy; Timpson, Nicholas J.; Uitterlinden, André G.; van Duijn, Cornelia M.; Visscher, Peter M.; Benjamin, Daniel J.; Cesarini, David; Koellinger, Philipp D.
2013-01-01
A genome-wide association study of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent SNPs are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (R2 ≈ 0.02%), corresponding to approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈ 2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics. PMID:23722424
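As a sketch of how such small effect sizes anchor power analyses (using a textbook normal approximation and the conventional genome-wide significance threshold, not a calculation reported in the paper):

```python
import math
from statistics import NormalDist

_ND = NormalDist()

def gwas_power(n, r2, alpha=5e-8):
    """Approximate power to detect a SNP explaining a fraction r2 of trait
    variance in a sample of n, at two-sided level alpha, via the normal
    approximation z ~ N(sqrt(n * r2), 1)."""
    z_crit = _ND.inv_cdf(1 - alpha / 2)   # about 5.45 at alpha = 5e-8
    ncp = math.sqrt(n * r2)               # expected z-score
    return _ND.cdf(ncp - z_crit) + _ND.cdf(-ncp - z_crit)

# R^2 of 0.02% (the abstract's per-SNP estimate) in a sample of 100,000
# yields well under 50% power at genome-wide significance.
power_100k = gwas_power(100_000, 2e-4)
```

This is why per-SNP effects of this magnitude demand six-figure sample sizes before replication becomes likely.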
Stress Dependence of Microstructures in Experimentally Deformed Calcite
NASA Astrophysics Data System (ADS)
Platt, J. P.; De Bresser, J. H. P.
2017-12-01
Measurements of dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Blg), and the maximum scale length for surface-energy driven grain-boundary migration (γGBM) in experimentally deformed Carrara marble help define the dependence of these microstructural features on stress and temperature. Measurements were made optically on ultra-thin sections in order to allow these features to be defined during measurement on the basis of microstructural setting and geometry. Taken together with previously published data, Dr defines a paleopiezometer with a stress exponent of -1.09. There is no discernible temperature dependence over the 500°C temperature range of the experiments. Recrystallization occurred mainly by bulging and subgrain rotation, and the two processes operated together, so that it is not possible to separate grains nucleated by the two mechanisms. Sg and Dr measured in the same samples are closely similar in size, suggesting that new grains do not grow significantly after nucleation, and that subgrain size is likely to be the primary control on recrystallized grain size. Blg and γGBM measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps define the boundary in stress - grain-size space between a region of dominant strain-energy-driven grain-boundary migration at high stress and a region of dominant surface-energy-driven grain-boundary migration at low stress.
Magnetic Force Microscopy Investigation of Magnetic Domains in Nd2Fe14B
NASA Astrophysics Data System (ADS)
Talari, Mahesh Kumar; Markandeyulu, G.; Rao, K. Prasad
2010-07-01
Remanence and coercivity in Nd2Fe14B materials are strongly dependent on microstructural aspects such as phase morphology and grain size. The coercivity (Hc) of a magnetic material varies inversely with the grain size (D), and there is a critical size below which Hc ∝ D^6. Domain wall pinning by grain boundaries and foreign phases is an important mechanism in explaining the improvement in coercivity and remanence. Nd2Fe14B intermetallic compound with stoichiometric composition was prepared from pure elements (Nd 99.5%, Fe 99.95%, B 99.99%) by arc melting in an argon atmosphere. Magnetic force microscopy (MFM) gives high-resolution magnetic domain structural information on ferromagnetic samples. A DI-3100 Scanning Probe Microscope with MESP probes was used for MFM characterization of the samples. Magnetic domains observed in cast ingots were very long (up to 40 μm) and approximately 1-5 μm wide, owing to the high anisotropy of the compound. Magnetic domains displayed different image contrast and morphologies at different locations in the samples. The domain morphologies and image contrast obtained in this analysis are explained in this paper.
NASA Astrophysics Data System (ADS)
Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.
2012-12-01
Early studies pioneering the method of catchment-wide erosion rates, measured via 10Be in alluvial sediment, sampled at river mouths and used the sand-size grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125 mm - 0.710 mm fraction or the 0.125 mm - 4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates).
We, therefore, hypothesize that the coarser grain sizes we included are preferentially sampling a smaller upstream area, and not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions: 0.125 mm - 0.710 mm, 0.710 mm - 4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in the grain size fraction that yields the highest erosion rate, generally, the coarser grain size fractions have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710 mm - 4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretations of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand-size fraction, so the resulting erosion rates are more pertinent to that sample point than to the entire catchment.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
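Under simple random sampling, the classification errors of an LQAS decision rule follow directly from the binomial distribution; the clustered designs studied here approximate this behavior when intracluster correlation is moderate. A sketch with illustrative thresholds (not the study's actual decision values):

```python
import math

# LQAS decision rule: classify an area as "high prevalence" if more than
# d of n sampled individuals show the condition (here, GAM).
def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def lqas_errors(n, d, p_low, p_high):
    """alpha: P(classify high | true prevalence p_low)
       beta : P(classify low  | true prevalence p_high)"""
    alpha = 1 - binom_cdf(d, n, p_low)
    beta = binom_cdf(d, n, p_high)
    return alpha, beta

# Illustrative: n = 201 (a 67x3 design), decision value d = 25,
# discriminating between 10% and 15% GAM prevalence.
alpha, beta = lqas_errors(201, 25, 0.10, 0.15)
```

Raising the decision value d trades a lower alpha for a higher beta, which is the balance the simulations in the abstract examine under clustering.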
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of confidence intervals on the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
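For the continuous case, a simple moment-based version of the within-subject coefficient of variation can be computed from the one-way random effects layout (a sketch of the index itself, not the paper's maximum likelihood estimator):

```python
import math

# Within-subject coefficient of variation from a one-way random effects
# layout: each subject is measured repeatedly, and
#     WSCV = sqrt(within-subject mean square) / grand mean.
def within_subject_cv(data):
    """data: list of per-subject lists of repeated measurements.
    Assumes every subject has at least two replicates."""
    all_vals = [v for subj in data for v in subj]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_within = sum((v - sum(s) / len(s)) ** 2 for s in data for v in s)
    df_within = sum(len(s) - 1 for s in data)
    msw = ss_within / df_within
    return math.sqrt(msw) / grand_mean
```

With perfectly repeatable measurements the index is zero; larger values signal poorer reliability relative to the scale of the measurement.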
Degradation of radiator performance on Mars due to dust
NASA Technical Reports Server (NTRS)
Gaier, James R.; Perez-Davis, Marla E.; Rutledge, Sharon K.; Forkapa, Mark
1992-01-01
An artificial mineral of the approximate elemental composition of Martian soil was manufactured, crushed, and sorted into four size ranges. Dust particles from three of these size ranges were applied to arc-textured Nb-1 percent Zr and Cu radiator surfaces to assess their effect on radiator performance. Particles larger than 75 microns did not have sufficient adhesive forces to adhere to the samples at angles greater than about 27 deg. Pre-deposited dust layers were largely removed by clear wind velocities greater than 40 m/s, or by dust-laden wind velocities as low as 25 m/s. Smaller dust grains were more difficult to remove. Abrasion was found to be significant only in high-velocity winds (89 m/s or greater). Dust-laden winds were found to be more abrasive than clear wind. Initially dusted samples abraded less than initially clear samples in dust-laden wind. Smaller dust particles of the simulant proved to be more abrasive than larger ones, which probably indicates that the larger particles were in fact agglomerates.
Surface degassing and modifications to vesicle size distributions in active basalt flows
Cashman, K.V.; Mangan, M.T.; Newman, S.
1994-01-01
The character of the vesicle population in lava flows includes several measurable parameters that may provide important constraints on lava flow dynamics and rheology. Interpretation of vesicle size distributions (VSDs), however, requires an understanding of vesiculation processes in feeder conduits, and of post-eruption modifications to VSDs during transport and emplacement. To this end we collected samples from active basalt flows at Kilauea Volcano: (1) near the effusive Kupaianaha vent; (2) through skylights in the approximately isothermal Wahaula and Kamoamoa tube systems transporting lava to the coast; (3) from surface breakouts at different locations along the lava tubes; and (4) from different locations in a single breakout from a lava tube 1 km from the episode 51 vent at Pu'u 'O'o. Near-vent samples are characterized by VSDs that show exponentially decreasing numbers of vesicles with increasing vesicle size. These size distributions suggest that nucleation and growth of bubbles were continuous during ascent in the conduit, with minor associated bubble coalescence resulting from differential bubble rise. The entire vesicle population can be attributed to shallow exsolution of H2O-dominated gases at rates consistent with those predicted by simple diffusion models. Measurements of H2O, CO2 and S in the matrix glass show that the melt equilibrated rapidly at atmospheric pressure. Down-tube samples maintain similar VSD forms but show a progressive decrease in both overall vesicularity and mean vesicle size. We attribute this change to open system, "passive" rise and escape of larger bubbles to the surface. Such gas loss from the tube system results in the output of 1.2 × 10^6 g/day of SO2, an output representing an addition of approximately 1% to overall volatile budget calculations. A steady increase in bubble number density with downstream distance is best explained by continued bubble nucleation at rates of 7-8 cm^-3 s^-1.
Rates are ~25% of those estimated from the vent samples, and thus represent volatile supersaturations considerably less than those of the conduit. We note also that the small total volume represented by this new bubble population does not: (1) measurably deplete the melt in volatiles; or (2) make up for the overall vesicularity decrease resulting from the loss of larger bubbles. Surface breakout samples have distinctive VSDs characterized by an extreme depletion in the small vesicle population. This results in samples with much lower number densities and larger mean vesicle sizes than corresponding tube samples. Similar VSD patterns have been observed in solidified lava flows and are interpreted to result from either static (wall rupture) or dynamic (bubble rise and capture) coalescence. Through comparison with vent and tube vesicle populations, we suggest that, in addition to coalescence, the observed vesicle populations in the breakout samples have experienced a rapid loss of small vesicles consistent with 'ripening' of the VSD resulting from interbubble diffusion of volatiles. Confinement of ripening features to surface flows suggests that the thin skin that forms on surface breakouts may play a role in the observed VSD modification. © 1994.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
Prostatic origin of a zinc binding high molecular weight protein complex in human seminal plasma.
Siciliano, L; De Stefano, C; Petroni, M F; Vivacqua, A; Rago, V; Carpino, A
2000-03-01
The profile of the zinc-ligand high molecular weight proteins was investigated in the seminal plasma of 55 normozoospermic subjects by size exclusion high performance liquid chromatography (HPLC). The proteins were recovered from Sephadex G-75 gel filtration of seminal plasma in three zinc-containing fractions, which were then submitted to HPLC analysis. In all samples, the protein profiles showed two peaks with apparent molecular weights of approximately 660 and approximately 250 kDa. Dialysis experiments revealed that both the approximately 660 and approximately 250 kDa proteins were able to take up zinc against a gradient, indicating their zinc binding capacity. The HPLC analysis of whole seminal plasma showed only the approximately 660 kDa protein complex as a single, well-quantifiable peak; furthermore, a positive correlation between its peak area and the seminal zinc values (P < 0.001) was observed. This suggested a prostatic origin of the approximately 660 kDa protein complex, which was then confirmed by seminal plasma HPLC analysis of a subject with agenesis of the Wolffian ducts. Finally, the study demonstrated the presence of two zinc binding proteins, of approximately 660 and approximately 250 kDa respectively, in human seminal plasma, and the prostatic origin of the approximately 660 kDa complex.
Improving the chi-squared approximation for bivariate normal tolerance regions
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.
1993-01-01
Let X be a two-dimensional random variable distributed according to N2(mu, Sigma), and let bar-X and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability beta and a level of confidence gamma, we seek a number c, depending only on N, beta, and gamma, such that the ellipsoid R = {x : (x - bar-X)' S^(-1) (x - bar-X) <= c} is a tolerance region of content beta and level gamma; i.e., R has probability gamma of containing at least 100 beta percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on beta and gamma. This paper provides values of A for various beta and gamma so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte-Carlo simulation and simple regression modelling can be used to improve an existing approximation.
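The need for an adjustment factor can be seen by estimating the content of the ellipsoid by simulation. A minimal sketch, taking mu = 0 and Sigma = I without loss of generality and using the chi-squared(2) quantile c = -2 ln(1 - beta) as the naive large-N choice:

```python
import math
import random

random.seed(1)  # deterministic sketch

def draw():
    """One observation from the standard bivariate normal N2(0, I)."""
    return (random.gauss(0, 1), random.gauss(0, 1))

def content(N, c, trials=20000):
    """Estimate the content of {x : (x - xbar)' S^(-1) (x - xbar) <= c}
    built from N observations, by Monte Carlo over fresh draws."""
    sample = [draw() for _ in range(N)]
    mx = sum(x for x, _ in sample) / N
    my = sum(y for _, y in sample) / N
    sxx = sum((x - mx) ** 2 for x, _ in sample) / (N - 1)
    syy = sum((y - my) ** 2 for _, y in sample) / (N - 1)
    sxy = sum((x - mx) * (y - my) for x, y in sample) / (N - 1)
    det = sxx * syy - sxy ** 2
    hits = 0
    for _ in range(trials):
        x, y = draw()
        dx, dy = x - mx, y - my
        # Mahalanobis distance via the explicit 2x2 inverse of S
        m2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        hits += m2 <= c
    return hits / trials

beta = 0.95
c = -2 * math.log(1 - beta)   # chi-squared(2) 95% point, about 5.99
est = content(N=500, c=c)     # near 0.95 for large N, biased for small N
```

Repeating this for small N shows the content falling short of beta at level gamma, which is the bias the paper's factor A corrects.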
ERIC Educational Resources Information Center
Harnisch, Delwyn L.; Ryan, Katherine E.
A study was made of cross-cultural patterns of achievement motivation in relationship to the mathematics achievement of Japanese and American boys and girls approximately 16 years of age. Sample sizes were 9,582 for the United States subjects (specifically, from Illinois) and 1,700 for the participants from Japan. Data came from performance on the…
NASA Astrophysics Data System (ADS)
Kim, T. W.; Yarnell, S. M.; Yager, E.; Leidman, S. Z.
2015-12-01
Caspar Creek is a gravel-bedded stream located in the Jackson Demonstration State Forest in the coast range of California. The Caspar Creek Experimental Watershed has been actively monitored and studied by the Pacific Southwest Research Station and California Department of Forestry and Fire Protection for over five decades. Although total annual sediment yield has been monitored through time, sediment transport during individual storm events is less certain. At a study site on North Fork Caspar Creek, cross-section averaged sediment flux was collected throughout two storm events in December 2014 and February 2015 to determine if two commonly used sediment transport equations—Meyer-Peter-Müller and Wilcock—approximated observed bedload transport. Cross-section averaged bedload samples were collected approximately every hour during each storm event using a Helley-Smith bedload sampler. Five-minute composite samples were collected at five equally spaced locations along a cross-section and then sieved to half-phi sizes to determine the grain size distribution. The measured sediment flux values varied widely throughout the storm hydrographs and were consistently lower than the calculated values, by up to two orders of magnitude. Armored bed conditions, changing hydraulic conditions during each storm and variable sediment supply may have contributed to the observed differences.
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
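The abstract does not give the exact quantization rule, but a single-shrinkage-threshold, density-adaptive subsampling can be sketched as follows. This is an illustrative one-dimensional reading of the idea (dense regions get thinned more aggressively than sparse ones), not the authors' DQS implementation:

```python
def quantize(data, threshold):
    """Greedy single-threshold quantization: keep a sample only if it is
    farther than `threshold` from every sample already kept. Dense clusters
    collapse to a single representative; isolated points survive.
    Illustrative sketch only; the DQS paper's exact rule may differ."""
    kept = []
    for x in data:
        if all(abs(x - k) > threshold for k in kept):  # 1-D distance for simplicity
            kept.append(x)
    return kept
```

A reduced subset of this kind is the natural input for Nyström feature approximation, since the cost of that step grows with the number of landmark samples.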
Linking phytoplankton community metabolism to the individual size distribution.
Padfield, Daniel; Buckling, Angus; Warfield, Ruth; Lowe, Chris; Yvon-Durocher, Gabriel
2018-05-25
Quantifying variation in ecosystem metabolism is critical to predicting the impacts of environmental change on the carbon cycle. We used a metabolic scaling framework to investigate how body size and temperature influence phytoplankton community metabolism. We tested this framework using phytoplankton sampled from an outdoor mesocosm experiment, where communities had been either experimentally warmed (+4 °C) for 10 years or left at ambient temperature. Warmed and ambient phytoplankton communities differed substantially in their taxonomic composition and size structure. Despite this, the response of primary production and community respiration to long- and short-term warming could be estimated using a model that accounted for the size- and temperature-dependence of individual metabolism and the community abundance-body size distribution. This work demonstrates that the key metabolic fluxes that determine the carbon balance of planktonic ecosystems can be approximated using metabolic scaling theory, with knowledge of the individual size distribution and environmental temperature. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
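The scaling framework referenced here sums individual metabolic rates over the observed size distribution. A minimal sketch using the standard metabolic-theory form B = b0 · M^α · exp(−E/kT); the constants b0, α, and E below are generic placeholders, not the study's fitted values:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def community_metabolism(masses, temp_k, b0=1.0, alpha=0.75, e_act=0.32):
    """Community metabolic rate as the sum of individual rates
    B_i = b0 * M_i**alpha * exp(-E / (k * T)) over the size distribution.
    b0, alpha, and e_act are illustrative placeholder values."""
    boltz = math.exp(-e_act / (BOLTZMANN_EV * temp_k))
    return sum(b0 * m ** alpha for m in masses) * boltz
```

Under this form, warming raises the community rate through the Boltzmann factor, while shifts in the size distribution act through the mass sum, which is the decomposition the study exploits.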
NASA Astrophysics Data System (ADS)
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS), calculated from the IS draws, to measure the degree of approximation: the larger the ESS, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute-force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
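The effective sample size used to score the proposal has a standard closed form, ESS = (Σw)² / Σw², which the following minimal sketch implements for a list of unnormalized importance weights:

```python
def effective_sample_size(weights):
    """ESS of a set of (unnormalized) importance weights:
    ESS = (sum w)^2 / sum(w^2). Equals N for uniform weights and
    approaches 1 when a single draw dominates."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return s * s / s2
```

A proposal that closely matches the target yields near-uniform weights and hence ESS close to the number of draws, which is why ESS serves as the fitness measure in the tailoring loop described above.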
NASA Technical Reports Server (NTRS)
Goguen, Jay D.
1993-01-01
To test the hypothesis that the independent scattering calculation widely used to model radiative transfer in atmospheres and clouds will give a useful approximation to the intensity and linear polarization of visible light scattered from an optically thick surface of transparent particles, laboratory measurements are compared to the independent scattering calculation for a surface of spherical particles with known optical constants and size distribution. Because the shape, size distribution, and optical constants of the particles are known, the independent scattering calculation is completely determined and the only remaining unknown is the net effect of the close packing of the particles in the laboratory sample surface...
Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples
NASA Astrophysics Data System (ADS)
Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.
2007-03-01
We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high-porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency, as predicted by the theoretical boundary conditions of the equivalent fluid model used under the high-frequency approximation.
NASA Technical Reports Server (NTRS)
Zeigler, Ryan A.
2015-01-01
The Apollo missions collected 382 kg of rock and regolith from the Moon; approximately 1/3 of the sample mass collected was regolith. Lunar regolith consists of well-mixed rocks, minerals, and glasses less than 1 centimeter in size. The majority of most surface regolith samples were sieved into less than 1, 1-2, 2-4, and 4-10-millimeter size fractions; a portion of most samples was reserved unsieved. The initial characterization and classification of most Apollo regolith particles was done primarily by binocular microscopy. Optical classification of regolith is difficult because (1) the finest fraction of the regolith coats and obscures the textures of the larger particles, and (2) not all lithologies or minerals are uniquely identifiable optically. In recent years, we have begun to use more modern X-ray beam techniques [1-3], coupled with high-resolution 3D optical imaging techniques [4], to characterize Apollo and meteorite samples as part of the curation process. These techniques, particularly in concert with SEM imaging of less than 1-millimeter regolith grain mounts, allow for the rapid characterization of the components within a regolith.
Simon, Nancy S.; Ingle, Sarah N.
2011-01-01
This study of phosphorus (P) cycling in eutrophic Upper Klamath Lake (UKL), Oregon, was conducted by the U.S. Geological Survey in cooperation with the U.S. Bureau of Reclamation. Lakebed sediments from the upper 30 centimeters (cm) of cores collected from 26 sites were characterized. Cores were sampled at 0.5, 1.5, 2.5, 3.5, 4.5, 10, 15, 20, 25, and 30 cm. Prior to freezing, water content and sediment pH were determined. After being freeze-dried, all samples were separated into greater than 63-micron (μm) particle-size (coarse) and less than 63-μm particle-size (fine) fractions. In the surface samples (0.5 to 4.5 cm below the sediment-water interface), approximately three-fourths of the particles were larger than 63 μm. The proportions of the coarse particle-size fraction (>63 μm) and the fine particle-size fraction (<63 μm) were approximately equal in samples at depths greater than 10 cm below the sediment-water interface. Chemical analyses of both size fractions of the freeze-dried samples included determination of total concentrations of aluminum (Al), calcium (Ca), carbon (C), iron (Fe), poorly crystalline Fe, nitrogen (N), P, and titanium (Ti). Total Fe concentrations were largest in sediment from the northern portion of UKL, Howard Bay, and the southern portion of the lake. Concentrations of total Al, Ca, and Ti were largest in sediment from the northern, central, and southernmost portions of the lake and in sediment from Howard Bay. Concentrations of total C and N were largest in sediment from the embayments and in sediment from the northern arm and southern portion of the lake in the general region of Buck Island. Concentrations of total C were larger in the greater than 63-μm particle-size fraction than in the less than 63-μm particle-size fraction.
Sediments were sequentially extracted to determine concentrations of inorganic forms of P, including loosely sorbed P, P associated with poorly crystalline Fe oxides, and P associated with mineral phases. The difference between the concentration of total P and sum of the concentrations of inorganic forms of P is referred to as residual P. Residual P was the largest fraction of P in all of the sediment samples. In UKL, the correlation between concentrations of total P and total Fe in sediment is poor (R2<0.1). The correlation between the concentrations of total P and P associated with poorly crystalline Fe oxides is good (R2=0.43) in surface sediment (0.5-4.5 cm below the sediment water interface) but poor (R2<0.1) in sediments at depths between 10 cm and 30 cm. Phosphorus associated with poorly crystalline Fe oxides is considered bioavailable because it is released when sediment conditions change from oxidizing to reducing, which causes dissolution of Fe oxides.
Time-dependent preparation of gelatin-stabilized silver nanoparticles by pulsed Nd:YAG laser
NASA Astrophysics Data System (ADS)
Darroudi, Majid; Ahmad, M. B.; Zamiri, Reza; Abdullah, A. H.; Ibrahim, N. A.; Sadrolhosseini, A. R.
2011-03-01
Colloidal silver nanoparticles (Ag-NPs) were successfully prepared using a nanosecond pulsed Nd:YAG laser, λ = 1064 nm, with a laser fluence of approximately 360 mJ/pulse, in an aqueous gelatin solution. In this work, gelatin was used as a stabilizer, and the size and optical absorption properties of samples were studied as a function of the laser ablation time. The results from UV-vis spectroscopy demonstrated that the mean diameter of the Ag-NPs decreases as the laser ablation time increases. The Ag-NPs have mean diameters ranging from approximately 10 nm to 16 nm. Compared with other preparation methods, this one is clean, rapid, and simple to use.
NASA Astrophysics Data System (ADS)
Negassa, Wakene; Guber, Andrey; Kravchenko, Alexandra; Rivers, Mark
2014-05-01
Soil's potential to sequester carbon (C) depends not only on the quality and quantity of organic inputs to soil but also on the residence time of the applied organic inputs within the soil. Soil pore structure is one of the main factors that influence the residence time of soil organic matter by controlling gas exchange, soil moisture, and microbial activities, and thereby soil C sequestration capacity. Previous attempts to investigate the fate of organic inputs added to soil did not allow examining their decomposition in situ, a drawback that can now be remedied by application of X-ray computed micro-tomography (µ-CT). The non-destructive and non-invasive nature of µ-CT gives an opportunity to investigate the effect of soil pore size distributions on decomposition of plant residues at a new quantitative level. The objective of this study is to examine the influence of pore size distributions on the decomposition of plant residue added to soil. Samples with contrasting pore size distributions were created using aggregate fractions of five different sizes (<0.05, 0.05-0.1, 0.1-0.5, 0.5-1.0 and 1.0-2.0 mm). Weighted average pore diameters ranged from 10 µm (<0.05 mm fraction) to 104 µm (1-2 mm fraction), while maximum pore diameters ranged from 29 µm (<0.05 mm fraction) to 568 µm (1-2 mm fraction) in the created soil samples. Dried pieces of maize leaves, 2.5 mg in mass (equivalent to 1.71 mg C g-1 soil), were added to half of the studied samples. Samples with and without maize leaves were incubated for 120 days. CO2 emission from the samples was measured at regular time intervals. In order to ensure that the observed differences are due to differences in pore structure and not due to differences in inherent properties of the studied aggregate fractions, we repeated the whole experiment using soil from the same aggregate size fractions but ground to <0.05 mm size.
Five to six replicated samples were used for intact and ground samples of all sizes, with and without leaves. Two replications of the intact aggregate fractions of all sizes with leaves were subjected to µ-CT scanning before and after incubation, whereas all the remaining replications of both intact and ground aggregate fractions of <0.05, 0.05-0.1, and 1.0-2.0 mm sizes with leaves were scanned with µ-CT after the incubation. The µ-CT images showed that approximately 80% of the leaves in the intact samples of large aggregate fractions (0.5-1.0 and 1.0-2.0 mm) were decomposed during the incubation, while only 50-60% of the leaves were decomposed in the intact samples of smaller sized fractions. An even lower percentage of the leaves (40-50%) was decomposed in the ground samples, with very similar leaf decomposition observed in all ground samples regardless of the aggregate fraction size. Consistent with the µ-CT results, the proportion of decomposed leaf estimated with the conventional mass-loss method was 48% and 60% for the <0.05 mm and 1.0-2.0 mm size fractions of intact aggregates, respectively, and 40-50% in ground samples. The results of the incubation experiment demonstrated that, while greater C mineralization was observed in samples of all size fractions amended with leaf, the effect of leaf presence was most pronounced in the smaller aggregate fractions (0.05-0.1 mm and <0.05 mm) of intact aggregates. The results of the present study unequivocally demonstrate that differences in pore size distributions have a major effect on the decomposition of plant residues added to soil. Moreover, in the presence of plant residues, differences in pore size distributions appear to also influence the rates of decomposition of the intrinsic soil organic material.
The effect of external forces on discrete motion within holographic optical tweezers.
Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J
2007-12-24
Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear dependence of the change in trap stiffness on step size. For step sizes up to approximately 300 nm, the trap stiffness decreases. Above 300 nm, the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.
Park, Seungshik; Son, Se-Chang
2016-01-01
This study investigates the size distribution and possible sources of humic-like substances (HULIS) in ambient aerosol particles collected at an urban site in Gwangju, Korea, during the winter of 2015. A total of 10 sets of size-segregated aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI), and the samples were analyzed to determine the mass as well as the presence of ionic species (Na(+), NH4(+), K(+), Ca(2+), Mg(2+), Cl(-), NO3(-), and SO4(2-)), water-soluble organic carbon (WSOC) and HULIS. The separation and quantification of the size-resolved HULIS components from the MOUDI samples was accomplished using a Hydrophilic-Lipophilic Balanced (HLB) solid phase extraction method and a total organic carbon analyzer, respectively. The entire sampling period was divided into two periods: non-Asian dust (NAD) and Asian dust (AD) periods. The contributions of water-soluble organic mass (WSOM = 1.9 × WSOC) and HULIS (=1.9 × HULIS-C) to fine particles (PM1.8) were approximately two times higher in the NAD samples (23.2 and 8.0%) than in the AD samples (12.8 and 4.2%). However, the HULIS-C/WSOC ratio in PM1.8 showed little difference between the NAD (0.35 ± 0.07) and AD (0.35 ± 0.05) samples. The HULIS exhibited a unimodal size distribution (mode at 0.55 μm) during NAD and a bimodal distribution (modes at 0.32 and 1.8 μm) during AD, which was quite similar to the mass size distributions of particulate matter, WSOC, NO3(-), SO4(2-), and NH4(+) in both the NAD and AD samples. The size distribution characteristics and the results of the correlation analyses indicate that the sources of HULIS varied according to the particle size.
In the fine mode (≤1.8 μm), the HULIS composition during the NAD period was strongly associated with secondary organic aerosol (SOA) formation processes similar to those of secondary ionic species (cloud processing and/or heterogeneous reactions) and primary emissions during the biomass burning period; during the AD period, it was associated only with SOA formation. In the coarse mode (3.1-10 μm), it was difficult to identify the HULIS sources during the NAD period, while during the AD period, the HULIS was most likely associated with soil-related particles (Ca(NO3)2 and CaSO4) and/or sea-salt particles (NaNO3 and Na2SO4).
Sm-Nd, Rb-Sr, and Mn-Cr Ages of Yamato 74013
NASA Technical Reports Server (NTRS)
Nyquist, L. E.; Shih, C.-Y.; Reese, Y. D.
2009-01-01
Yamato 74013 is one of 29 paired diogenites having granoblastic textures. The Ar-39-Ar-40 age of Y-74097 is approximately 1100 Ma. Rb-Sr and Sm-Nd analyses of Y-74013, -74037, -74097, and -74136 suggested that multiple young metamorphic events disturbed their isotopic systems. Masuda et al. reported that REE abundances were heterogeneous even within the same sample (Y-74010) for sample sizes less than approximately 2 g. Both they and Nyquist et al. reported data for some samples showing significant LREE enrichment. In addition to its granoblastic texture, Y-74013 is characterized by large, isolated clots of chromite up to 5 mm in diameter. Takeda et al. suggested that these diogenites originally represented a single or very small number of coarse orthopyroxene crystals that were recrystallized by shock processes. They further suggested that initial crystallization may have occurred very early within the deep crust of the HED parent body. Here we report the chronology of Y-74013 as recorded in chronometers based on long-lived Rb-87 and Sm-147, intermediate-lived Sm-146, and short-lived Mn-53.
Karyological features of wild and cultivated forms of myrtle (Myrtus communis, Myrtaceae).
Serçe, S; Ekbiç, E; Suda, J; Gündüz, K; Kiyga, Y
2010-03-09
Myrtle is an evergreen shrub or small tree widespread throughout the Mediterranean region. In Turkey, both cultivated and wild forms, differing in plant and fruit size and fruit composition, can be found. These differences may have resulted from the domestication of the cultivated form over a long period of time. We investigated whether wild and cultivated forms of myrtle differ in karyological features (i.e., number of somatic chromosomes and relative genome size). We sampled two wild forms and six cultivated types of myrtle. All the samples had the same chromosome number (2n = 2x = 22). The results were confirmed by 4',6-diamidino-2-phenylindole (DAPI) flow cytometry. Only negligible variation (approximately 3%) in relative fluorescence intensity was observed among the different myrtle accessions, with wild genotypes having the smallest values. We concluded that despite considerable morphological differentiation, cultivated and wild myrtle genotypes in Turkey have similar karyological features.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample: maintenance of at least 80% of the allelic richness of the thematic collection, with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds" for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci.
The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
Composition and Structure of a 1930s-Era Pine-Hardwood Stand in Arkansas
Don C. Bragg
2004-01-01
This paper describes an unmanaged 1930s-era pine-hardwood stand on a minor stream terrace in Ashley County, AR. Probably inventoried as a part of an early growth and yield study, the sample plot was approximately 3.2 ha in size and contained at least 21 tree species. Loblolly pine comprised 39.1% of all stems, followed by willow oak (12.7%), winged elm (9.6%), sweetgum...
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. By the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need of a careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
Insight into Primordial Solar System Oxygen Reservoirs from Returned Cometary Samples
NASA Technical Reports Server (NTRS)
Brownlee, D. E.; Messenger, S.
2004-01-01
The recent successful rendezvous of the Stardust spacecraft with comet Wild-2 will be followed by its return of cometary dust to Earth in January 2006. Results from two separate dust impact detectors suggest that the spacecraft collected approximately the nominal fluence of at least 1,000 particles larger than 15 micrometers in size. While constituting only about one microgram total, these samples will be sufficient to answer many outstanding questions about the nature of cometary materials. More than two decades of laboratory studies of stratospherically collected interplanetary dust particles (IDPs) of similar size have established the microparticle handling and analytical techniques necessary to study them. It is likely that some IDPs are in fact derived from comets, although complex orbital histories of individual particles have made these assignments difficult to prove. Analysis of bona fide cometary samples will be essential for answering some fundamental outstanding questions in cosmochemistry, such as (1) the proportion of interstellar and processed materials that comprise comets and (2) whether the Solar System had an O-16-rich reservoir. Abundant silicate stardust grains have recently been discovered in anhydrous IDPs, in far greater abundances (200-5,500 ppm) than those in meteorites (25 ppm). Insight into the more subtle O isotopic variations among chondrites and refractory phases will require significantly higher precision isotopic measurements on micrometer-sized samples than are currently available.
The Effect of Oat Fibre Powder Particle Size on the Physical Properties of Wheat Bread Rolls
Kurek, Marcin; Wyrwisz, Jarosław; Piwińska, Monika; Wierzbicka, Agnieszka
2016-01-01
In response to the growing interest of modern society in functional food products, this study attempts to develop a bakery product with high dietary fibre content added in the form of an oat fibre powder. Oat fibre powder with particle sizes of 75 µm (OFP1) and 150 µm (OFP2) was used, substituting 4, 8, 12, 16 and 20% of the flour. The physical properties of the dough and the final bakery products were then measured. Results indicated that dough with added fibre had higher elasticity than the control group. The storage modulus values of dough with OFP1 most closely approximated those of the control group. The addition of OFP1 did not significantly affect the colour compared with the other samples. Increasing the proportion of oat fibre powder resulted in increased firmness, which was most prominent in wheat bread rolls with oat fibre powder of smaller particle sizes. The addition of oat fibre powder with smaller particles resulted in a product with rheological and colour parameters that more closely resembled the control sample. PMID:27904392
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal-correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
Chemical Characterization of an Envelope A Sample from Hanford Tank 241-AN-103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, M.S.
2000-08-23
A whole tank composite sample from Hanford waste tank 241-AN-103 was received at the Savannah River Technology Center (SRTC) and chemically characterized. Prior to characterization the sample was diluted to approximately 5 M sodium concentration. The filtered supernatant liquid, the total dried solids of the diluted sample, and the washed insoluble solids obtained from filtration of the diluted sample were analyzed. A mass balance calculation of the three analyzed fractions of the sample indicates that the analytical results are relatively self-consistent for the major components of the sample. However, some inconsistency was observed between results where more than one method of determination was employed and for species present in low concentrations. A direct comparison to previous analyses of material from tank 241-AN-103 was not possible due to the unavailability of data for diluted samples of tank 241-AN-103 whole tank composites. However, the analytical data for other types of samples from 241-AN-103 were mathematically diluted and compare reasonably well with the current results. Although the segments of the core samples used to prepare the sample received at SRTC were combined in an attempt to produce a whole tank composite, determining how well the results of the current analysis represent the actual composition of Hanford waste tank 241-AN-103 remains problematic due to the small sample size and the large size of the non-homogenized waste tank.
Simultaneous small- and wide-angle scattering at high X-ray energies.
Daniels, J E; Pontoni, D; Hoo, Rui Ping; Honkimäki, V
2010-07-01
Combined small- and wide-angle X-ray scattering (SAXS/WAXS) is a powerful technique for the study of materials at length scales ranging from atomic/molecular sizes (a few angstroms) to the mesoscopic regime (approximately 1 nm to approximately 1 μm). A set-up to apply this technique at high X-ray energies (E > 50 keV) has been developed. Hard X-rays permit the execution of at least three classes of investigations that are significantly more difficult to perform at standard X-ray energies (8-20 keV): (i) in situ strain analysis revealing anisotropic strain behaviour both at the atomic (WAXS) and the mesoscopic (SAXS) length scales, (ii) acquisition of WAXS patterns to very large q (>20 Å⁻¹), thus allowing atomic pair distribution function analysis (SAXS/PDF) of micro- and nano-structured materials, and (iii) utilization of complex sample environments involving thick X-ray windows and/or samples that can be penetrated only by high-energy X-rays. Using the reported set-up, a time resolution of approximately two seconds was demonstrated. It is planned to further improve this time resolution in the near future.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
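The anti-conservatism of the t-as-z approach described above is easy to demonstrate outside of a mixed-model setting. Below is a minimal sketch in Python (not R, and using a one-sample test rather than an lme4 model) that simulates test statistics under the null hypothesis and compares Type 1 error rates when the same statistic is referred to a normal versus a t distribution; the setup is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rates(n, n_sims=20000, alpha=0.05):
    """Simulate one-sample tests under the null and compare error rates
    when the t statistic is referred to a z versus a t distribution."""
    data = rng.normal(size=(n_sims, n))
    t_stats = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n))
    z_crit = stats.norm.ppf(1 - alpha / 2)         # 1.96
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # wider for small n
    rate_z = np.mean(np.abs(t_stats) > z_crit)     # "t-as-z" approach
    rate_t = np.mean(np.abs(t_stats) > t_crit)     # proper t reference
    return rate_z, rate_t

# With n = 10, the z reference rejects well above the nominal .05 level,
# while the t reference holds the nominal level.
rate_z, rate_t = rejection_rates(n=10)
```

The effect shrinks as n grows, mirroring the paper's finding that t-as-z is anti-conservative mainly for smaller sample sizes.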
Water and acid soluble trace metals in atmospheric particles
NASA Technical Reports Server (NTRS)
Lindberg, S. E.; Harriss, R. C.
1983-01-01
Continental aerosols are collected above a deciduous forest in eastern Tennessee and subjected to selective extractions to determine the water-soluble and acid-leachable concentrations of Cd, Mn, Pb, and Zn. The combined contribution of these metals to the total aerosol mass is 0.5 percent, with approximately 70 percent of this attributable to Pb alone. A substantial fraction (approximately 50 percent or more) of the acid-leachable metals is soluble in distilled water. In general, this water-soluble fraction increases with decreasing particle size and with increasing frequency of atmospheric water vapor saturation during the sampling period. The pattern of relative solubilities (Zn > Mn ≈ Cd > Pb) is found to be similar to the general order of the thermodynamic solubilities of the most probable salts of these elements in continental aerosols with mixed fossil fuel and soil sources.
Scarnato, B. V.; China, S.; Nielsen, K.; ...
2015-06-25
Field observations show that individual aerosol particles are a complex mixture of a wide variety of species, reflecting different sources and physico-chemical transformations. The impacts of individual aerosol morphology and mixing characteristics on the Earth system are not yet fully understood. Here we present a sensitivity study of climate-relevant aerosol optical properties to various approximations. Based on aerosol samples collected in various geographical locations, we have observationally constrained size, morphology and mixing, and accordingly simulated, using the discrete dipole approximation model (DDSCAT), the optical properties of three aerosol types: (1) bare black carbon (BC) aggregates, (2) bare mineral dust, and (3) an internal mixture of a BC aggregate lying on top of a mineral dust particle, also referred to as polluted dust. DDSCAT predicts optical properties and their spectral dependence consistently with observations for all the studied cases. Predicted values of the mass absorption, scattering and extinction coefficients (MAC, MSC, MEC) for bare BC show a weak dependence on the BC aggregate size, while the asymmetry parameter (g) shows the opposite behavior. The simulated optical properties of bare mineral dust present a large variability depending on the modeled dust shape, confirming the limited range of applicability of spheroids over different types and sizes of mineral dust aerosols, in agreement with previous modeling studies. The polluted dust cases show a strong decrease in MAC values with increasing dust particle size (for the same BC size) and an increase in the single scattering albedo (SSA).
Furthermore, particles with a radius between 180 and 300 nm are characterized by a decrease in SSA values compared to bare dust, in agreement with field observations. This paper demonstrates that observationally constrained DDSCAT simulations allow one to better understand the variability of the measured aerosol optical properties in ambient air and to define benchmark biases due to different approximations in aerosol parametrization.
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed using the t test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because it requires fewer assumptions and is robust to misspecification of the covariance structure. PMID:25345738
On the siting of gases shock-emplaced from internal cavities in basalt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiens, R.C.
1988-12-01
Noble gases were extracted by stepped combustion and crushing from basalts which contained gas-filled cavities of controlled sizes prior to shock at 40 GPa. Analysis of fractions enriched and depleted in shock glass from a single sample gave a factor of 2 higher gas abundances in the glass-rich separate. Release patterns were nearly identical, suggesting similar siting (in glass) in both fractions. Crushing of a sample released approximately 45% of implanted noble gases, but only approximately 17% of N₂, indicating that most or all of the noble gas was trapped in vesicles. Analysis by SEM/EDS confirmed the presence of vesicles in glassy areas, with an average diameter of approximately 10 μm. Samples with relatively large pre-shock cavities were found to consist of up to 70-80% glass locally and generally exhibit greater local shock effects than solid and densely-packed particulate targets at the same shock pressure, though the latter give higher glass emplacement efficiencies. The petrographic results indicate that in situ production of glassy pockets grossly similar to those in the shergottite EETA 79001 is possible from shock reverberations in the vicinity of a vug. However, the siting of the gases points to a more complex scenario, in which SPB gas and melt material were probably injected into EETA 79001.
Bhilocha, Shardul; Amin, Ripal; Pandya, Monika; Yuan, Han; Tank, Mihir; LoBello, Jaclyn; Shytuhina, Anastasia; Wang, Wenlan; Wisniewski, Hans-Georg; de la Motte, Carol; Cowman, Mary K.
2011-01-01
Agarose and polyacrylamide gel electrophoresis systems for the molecular mass-dependent separation of hyaluronan (HA) in the size range of approximately 5-500 kDa have been investigated. For agarose-based systems, the suitability of different agarose types, agarose concentrations, and buffer systems was determined. Using chemoenzymatically synthesized HA standards of low polydispersity, the molecular mass range was determined for each gel composition over which the relationship between HA mobility and the logarithm of the molecular mass was linear. Excellent linear calibration was obtained for HA molecular masses as low as approximately 9 kDa in agarose gels. For higher-resolution separation, and for extension to molecular masses as low as approximately 5 kDa, gradient polyacrylamide gels were superior. Densitometric scanning of stained gels allowed analysis of the range of molecular masses present in a sample, and calculation of weight-average and number-average values. The methods were validated for polydisperse HA samples with viscosity-average molecular masses of 112, 59, 37, and 22 kDa, at sample loads of 0.5 µg (for polyacrylamide) to 2.5 µg (for agarose). Use of the methods for electrophoretic mobility shift assays was demonstrated for binding of the HA-binding region of aggrecan (recombinant human aggrecan G1-IGD-G2 domains) to a 150 kDa HA standard. PMID:21684248
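The linear calibration described above (mobility versus log molecular mass) can be sketched as a simple regression; the standard masses and migration distances below are hypothetical values for illustration, not data from the study.

```python
import numpy as np

# Hypothetical calibration data: migration distance (cm) of HA standards
# of known molecular mass (kDa); values are illustrative only.
std_mass_kda = np.array([9.0, 30.0, 100.0, 300.0, 500.0])
std_mobility_cm = np.array([7.8, 6.2, 4.6, 3.1, 2.4])

# Over the linear range, mobility varies linearly with log10(molecular mass).
slope, intercept = np.polyfit(np.log10(std_mass_kda), std_mobility_cm, 1)

def estimate_mass_kda(mobility_cm):
    """Invert the calibration line to estimate the molecular mass of a band."""
    return 10 ** ((mobility_cm - intercept) / slope)

# A band migrating 5.0 cm falls between the 30 and 100 kDa standards.
band_mass = estimate_mass_kda(5.0)
```

Densitometric scanning then supplies the intensity profile over which such mass estimates are averaged to obtain weight-average and number-average values.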
Chen, Kang-Shin; Wang, Hsin-Kai; Peng, Yen-Ping; Wang, Wen-Cheng; Chen, Chia-Hsiu; Lai, Chia-Hsiang
2008-10-01
The sizes and concentrations of 21 atmospheric polycyclic aromatic hydrocarbons (PAHs) were measured at Jhu-Shan (a rural site) and Sin-Gang (a town site) in central Taiwan in October and December 2005. Air samples were collected using semi-volatile sampling trains (PS-1 sampler) over 16 days covering rice-straw burning and nonburning periods. These samples were then analyzed using a gas chromatograph with a flame-ionization detector (GC/FID). Particle-size distributions in the particulate phase were bimodal, peaking at 0.32-0.56 μm and 3.2-5.6 μm at the two sites during the nonburning period. During the burning period, peaks also appeared at 0.32-0.56 μm and 3.2-5.6 μm at Jhu-Shan, with the accumulation mode (particle size between 0.1 and 3.2 μm) accounting for approximately 74.1% of total particle mass; the peaks at 0.18-0.32 μm and 1.8-3.2 μm at Sin-Gang had an accumulation mode accounting for approximately 70.1% of total particle mass. The mass median diameter (MMD) of 3.99-4.35 μm in the particulate phase suggested that rice-straw burning generated increased numbers of coarse particles. The concentrations of total PAHs (sum of the 21 PAHs in the gas and particle phases) at the Jhu-Shan site (Sin-Gang site) were 522.9 ± 111.4 ng/m3 (572.0 ± 91.0 ng/m3) during the burning period and 330.1 ± 17.0 ng/m3 (427.5 ± 108.0 ng/m3) during the nonburning period, corresponding to a roughly 58% (34%) increase in total PAH concentrations due to rice-straw burning. On average, low-weight PAHs (about 87.0%) represent the largest proportion of total PAHs, followed by medium-weight PAHs (7.1%) and high-weight PAHs (5.9%). Combustion-related PAHs during burning periods were 1.54-2.57 times higher than those during nonburning periods.
The results of principal component analysis (PCA)/absolute principal component scores (APCS) suggest that the primary pollution sources at the two sites are similar and include vehicle exhaust, coal/wood combustion, incense burning, and incineration emissions. Open burning of rice straw was estimated to contribute approximately 5.0-33.5% to the total atmospheric PAHs at the two sites.
Particle concentration in the asteroid belt from Pioneer 10
NASA Technical Reports Server (NTRS)
Soberman, R. K.; Neste, S. L.; Lichtenfeld, K.
1974-01-01
The spatial concentration and size distribution for particles measured by the asteroid/meteoroid detector on Pioneer 10 between 2 and 3.5 AU are presented. The size distribution is from about 35 micrometers to 10 centimeters. The exponent of the size dependence varies from approximately -1.7 for the smallest to approximately -3.0 for the largest size measured.
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
Nudds, Robert L.; Taylor, Graham K.; Thomas, Adrian L. R.
2004-01-01
The wing kinematics of birds vary systematically with body size, but we still, after several decades of research, lack a clear mechanistic understanding of the aerodynamic selection pressures that shape them. Swimming and flying animals have recently been shown to cruise at Strouhal numbers (St) corresponding to a regime of vortex growth and shedding in which the propulsive efficiency of flapping foils peaks (St ≈ fA/U, where f is wingbeat frequency, U is cruising speed and A ≈ b sin(θ/2) is stroke amplitude, in which b is wingspan and θ is stroke angle). We show that St is a simple and accurate predictor of wingbeat frequency in birds. The Strouhal numbers of cruising birds have converged on the lower end of the range 0.2 < St < 0.4 associated with high propulsive efficiency. Stroke angle scales as θ ≈ 67b^(-0.24), so wingbeat frequency can be predicted as f ≈ St·U/[b sin(33.5b^(-0.24))], with St = 0.21 and St = 0.25 for direct and intermittent fliers, respectively. This simple aerodynamic model predicts wingbeat frequency better than any other relationship proposed to date, explaining 90% of the observed variance in a sample of 60 bird species. Avian wing kinematics therefore appear to have been tuned by natural selection for high aerodynamic efficiency: physical and physiological constraints upon wing kinematics must be reconsidered in this light. PMID:15451698
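The predictive formula in the abstract can be sketched directly in code. This sketch assumes, as an interpretation of the abstract, that the stroke-angle scaling is expressed in degrees and that wingspan b is in metres and cruising speed U in m/s; the example values are hypothetical.

```python
import numpy as np

def predicted_wingbeat_frequency(U, b, St=0.21):
    """Predict wingbeat frequency f = St * U / (b * sin(theta/2)),
    with stroke angle theta ~ 67 * b**-0.24 (assumed in degrees), so
    theta/2 ~ 33.5 * b**-0.24 degrees. St = 0.21 for direct fliers,
    0.25 for intermittent fliers."""
    half_stroke_rad = np.deg2rad(33.5 * b ** -0.24)
    return St * U / (b * np.sin(half_stroke_rad))

# Hypothetical example: a direct flier with a 1 m wingspan cruising
# at 10 m/s is predicted to flap at roughly 3.8 Hz.
f = predicted_wingbeat_frequency(U=10.0, b=1.0)
```

Switching St from 0.21 to 0.25 raises the predicted frequency proportionally, reflecting the intermittent-flier calibration.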
X-Ray Diffraction on Mars: Scientific Discoveries Made by the CheMin Instrument
NASA Technical Reports Server (NTRS)
Rampe, E. B.; Blake, D. F.; Ming, D. W.; Bristow, T. F.
2017-01-01
The Mars Science Laboratory Curiosity rover landed in Gale crater in August 2012 with the goal of identifying and characterizing habitable environments on Mars. Curiosity has been studying a series of sedimentary rocks primarily deposited in fluviolacustrine environments approximately 3.5 Ga. Minerals in the rocks and soils on Mars can help place further constraints on these ancient aqueous environments, including pH, salinity, and the relative duration of liquid water. The Chemistry and Mineralogy (CheMin) X-ray diffraction and X-ray fluorescence instrument on Curiosity uses a Co X-ray source and a charge-coupled device detector in transmission geometry to collect 2D Debye-Scherrer ring patterns of the <150 μm size fraction of drilled rock powders or scooped sediments. With an angular range of approximately 2-52° 2θ and a 2θ resolution of approximately 0.3°, mineral abundances can be quantified with a detection limit of approximately 1-2 wt. %. CheMin has returned quantitative mineral abundances from 16 mudstone, sandstone, and aeolian sand samples so far. The mineralogy of these samples is remarkably diverse, suggesting a variety of depositional and diagenetic environments and different source regions for the sediments. Results from CheMin have been essential for reconstructing the geologic history of Gale crater and addressing the question of habitability on ancient Mars.
NASA Astrophysics Data System (ADS)
Long, Yin; Zhang, Xiao-Jun; Wang, Kui
2018-05-01
In this paper, the convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of consecutive terms of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression of the average degree that approximates its analytical solution. Finally, simulations are presented to verify our theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bin, E-mail: bins@ieee.org
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of the stochastic integrals required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS), calculated from the IS draws, to measure the degree of approximation: the bigger the ESS, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute-force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor this number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
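The ESS criterion used above to measure how closely the proposal resembles the posterior can be sketched in a few lines; this is a generic log-space implementation of the standard importance-sampling ESS, not the authors' own code.

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = (sum w)^2 / sum(w^2) for importance weights, computed
    from log-weights for numerical stability (the normalization
    constant cancels, so unnormalized weights are fine)."""
    lw = np.asarray(log_weights, dtype=float)
    lw = lw - lw.max()          # shift to avoid overflow in exp
    w = np.exp(lw)
    return w.sum() ** 2 / np.square(w).sum()

# Equal weights give ESS = N (proposal matches the target perfectly);
# a single dominant weight drives ESS toward 1.
ess_equal = effective_sample_size(np.zeros(100))
ess_degenerate = effective_sample_size(np.array([0.0, -50.0, -50.0]))
```

Maximizing this quantity over the mixture proposal's parameters is what drives the delete/merge/add adaptation described in the abstract.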
Resuspension of soil as a source of airborne lead near industrial facilities and highways.
Young, Thomas M; Heeraman, Deo A; Sirin, Gorkem; Ashbaugh, Lowell L
2002-06-01
Geologic materials are an important source of airborne particulate matter less than 10 μm aerodynamic diameter (PM10), but the contribution of contaminated soil to concentrations of Pb and other trace elements in air has not been documented. To examine the potential significance of this mechanism, surface soil samples with a range of bulk soil Pb concentrations were obtained near five industrial facilities and along roadsides and were resuspended in a specially designed laboratory chamber. The concentration of Pb and other trace elements was measured in the bulk soil, in soil size fractions, and in PM10 generated during resuspension of soils and fractions. Average yields of PM10 from dry soils ranged from 0.169 to 0.869 mg of PM10/g of soil. Yields declined approximately linearly with increasing geometric mean particle size of the bulk soil. The resulting PM10 had average Pb concentrations as high as 2283 mg/kg for samples from a secondary Pb smelter. Pb was enriched in PM10 by 5.36-88.7 times as compared with uncontaminated California soils. Total production of PM10-bound Pb from the soil samples varied between 0.012 and 1.2 mg of Pb/kg of bulk soil. During a relatively large erosion event, a contaminated site might contribute approximately 300 ng/m3 of PM10-bound Pb to air. The contribution of soil from contaminated sites to airborne element balances thus deserves consideration when constructing receptor models for source apportionment or attempting to control airborne Pb emissions.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
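The classification logic behind an LQAS decision rule can be sketched under the simple-random-sampling assumption the abstract mentions; the sample size, threshold, and prevalence cutoffs below are illustrative, not the study's values.

```python
from scipy import stats

# Hypothetical LQAS decision rule: with n = 201 children sampled
# (e.g. 67 clusters of 3), classify GAM prevalence as exceeding the
# 10% threshold when more than d = 15 cases are observed.
n, d = 201, 15

def prob_classified_low(p):
    """Probability of observing d or fewer cases (i.e. classifying
    prevalence as low) when the true prevalence is p, assuming
    independent observations (simple random sampling)."""
    return stats.binom.cdf(d, n, p)

# At p = 0.15 the rule rarely misses the high-prevalence condition;
# at p = 0.05 it almost always classifies correctly as low.
miss_high = prob_classified_low(0.15)
correct_low = prob_classified_low(0.05)
```

Intracluster correlation effectively reduces the independent sample size, which is why the simulations in the study find inflated classification error when correlation within clusters is strong.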
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms: NUC (Non-Uniformity Correction), RMSE (Root Mean Squared Error), RSD (Relative Standard Deviation), S3NUC (Static Scene Statistical Non-Uniformity Correction). The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion
Heavy metals in the finest size fractions of road-deposited sediments.
Lanzerstorfer, Christof
2018-08-01
The concentration of heavy metals in urban road-deposited sediments (RDS) can be used as an indicator of environmental pollution. Thus, their occurrence has been studied in whole road dust samples as well as in size fractions obtained by sieving. Because of the limitations of size separation by sieving, little information is available about heavy metal concentrations in the road dust size fractions <20 μm. In this study, air classification was applied to separate dust size fractions smaller than 20 μm from RDS collected at different times during the year. The results showed only small seasonal variations in the heavy metal concentrations and size distribution. According to the Geoaccumulation Index, the pollution of the road dust samples decreased in the following order: Sb » As > Cu ≈ Zn > Cr > Cd ≈ Pb ≈ Mn > Ni > Co ≈ V. For all heavy metals the concentration was higher in the fine size fractions than in the coarse size fractions, while the concentration of Sr was size-independent. The enrichment of the heavy metals in the finest size fraction compared to the whole RDS <200 μm was up to 4.5-fold. The size dependence of the concentration decreased in the following order: Co ≈ Cd > Sb > (Cu) ≈ Zn ≈ Pb > As ≈ V » Mn. Approximating the size dependence of the concentration as a function of particle size by power functions worked very well, and the correlation between particle size and concentration was high for all heavy metals. The increased heavy metal concentrations in the finest size fractions should be considered when evaluating the contribution of road dust re-suspension to the heavy metal contamination of atmospheric dust; power functions can be used to describe the size dependence of the concentration. Copyright © 2018 Elsevier Ltd. All rights reserved.
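Fitting a power function c = a·d^b to concentration-size data of this kind reduces to linear regression in log-log space. The sketch below uses hypothetical size-fraction data for illustration only, not values from the study.

```python
import numpy as np

# Hypothetical data: mean particle size of each fraction (um) and the
# measured metal concentration (mg/kg) in that fraction.
size_um = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
conc_mg_kg = np.array([450.0, 320.0, 230.0, 140.0, 100.0, 70.0])

# Fit c = a * d**b by least squares on log-transformed data:
# log(c) = log(a) + b * log(d).
b, log_a = np.polyfit(np.log(size_um), np.log(conc_mg_kg), 1)
a = np.exp(log_a)

# b < 0 expresses the enrichment: concentration rises as particle
# size falls, as reported for the heavy metals in the study.
```

The fitted exponent b then summarizes the size dependence in a single number, which is what makes power functions convenient for comparing metals.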
NASA Astrophysics Data System (ADS)
Khattak, Khanzadi Fatima
2012-06-01
Whole-plant material of Fagonia arabica at three different particle sizes (30, 50 and 70 mesh) was exposed to gamma radiation doses of 1-10 kGy from a cobalt-60 source. A series of tests was performed to check the feasibility of irradiation processing of the plant. The applied radiation doses did not significantly affect (P < 0.05) the pH or antimicrobial activities of the plant. The total weight of the dry extracts in methanol as well as in water was found to increase with irradiation. The irradiated samples showed a significant increase in phenolic content and in free radical scavenging activity measured using DPPH. Shortly after irradiation (on the day of radiation treatment) high amounts of free radicals were detected in the irradiated plant samples, and the chemiluminescence measurements were generally found to be dose-dependent. Maximum luminescence intensity was observed for samples with a mesh size of 30 at all the radiation doses applied. After a period of one month, the chemiluminescence signals of the irradiated samples approximated those of the controls. The study suggests that gamma irradiation treatment is effective for quality improvement and enhances certain beneficial biological properties of the treated materials.
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox-transformed parametric methods, and the results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all three methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way to calculate RIs, provided it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
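A minimal sketch of the bootstrapping approach to RI estimation follows: a generic percentile bootstrap of the central 95% interval, which is one common variant and not necessarily the exact procedure used in the study. The simulated marker values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_reference_interval(values, n_boot=2000):
    """Nonparametric bootstrap of the 2.5th and 97.5th percentiles:
    resample with replacement, take each percentile per resample,
    and report the mean of the bootstrap estimates."""
    values = np.asarray(values)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        lows.append(np.percentile(resample, 2.5))
        highs.append(np.percentile(resample, 97.5))
    return np.mean(lows), np.mean(highs)

# Simulated reference group of 120 subjects for a marker with
# mean 100 and SD 10; for normal data the interval should approach
# mean +/- 1.96 * SD as the reference group grows.
sample = rng.normal(loc=100.0, scale=10.0, size=120)
low, high = bootstrap_reference_interval(sample)
```

Unlike the parametric methods discussed above, this estimator needs no distributional assumption, which is why it remains available when normality fails.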
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Containerless processing of undercooled melts
NASA Technical Reports Server (NTRS)
Shong, D. S.; Graves, J. A.; Ujiie, Y.; Perepezko, J. H.
1987-01-01
Containerless drop tube processing allows for significant levels of liquid undercooling through control of parameters such as sample size, surface coating and cooling rate. A laboratory scale (3 m) drop tube has been developed which allows the undercooling and solidification behavior of powder samples to be evaluated under low gravity free-fall conditions. The level of undercooling obtained in an InSb-Sb eutectic alloy has been evaluated by comparing the eutectic spacing in drop tube samples with a spacing/undercooling relationship established using thermal analysis techniques. Undercoolings of 0.17 and 0.23 T(e) were produced by processing under vacuum and He gas conditions respectively. Alternatively, the formation of an amorphous phase in a Ni-Nb eutectic alloy indicates that undercooling levels of approximately 500 C were obtained by drop tube processing. The influence of droplet size and gas environment on undercooling behavior in the Ni-Nb eutectic was evaluated through their effect on the amorphous/crystalline phase ratio. To supplement the structural analysis, heat flow modeling has been developed to describe the undercooling history during drop tube processing, and the model has been tested experimentally.
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training with optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. Fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method was found to result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. Colored noise injection, used to design pseudo-validation sets, proves to be a powerful tool for mitigating finite-sample problems in moderate-dimensional PR tasks.
Hsu, Cheng-Liang; Lin, Yu-Hong; Wang, Liang-Kai; Hsueh, Ting-Jen; Chang, Sheng-Po; Chang, Shoou-Jinn
2017-05-03
UV- and visible-light photoresponse was achieved via p-type K-doped ZnO nanowires and nanosheets that were hydrothermally synthesized on an n-ZnO/glass substrate and decorated with Au nanoparticles. The K content of the p-ZnO nanostructures was 0.36 atom %. The UV- and visible-light photoresponse of the p-ZnO nanostructures/n-ZnO sample was roughly 2 times higher than that of the ZnO nanowires. Au nanoparticles of various densities and diameters were deposited on the p-ZnO nanostructures/n-ZnO samples by a simple UV photochemical reaction method, yielding a tunable and enhanced UV- and visible-light photoresponse. The maximum UV and visible photoresponse of the Au nanoparticle sample was obtained when the Au nanoparticle diameters were approximately 5-35 nm. On the basis of the localized surface plasmon resonance effect, the UV, blue, and green photocurrent/dark current ratios of Au nanoparticle/p-ZnO nanostructures/n-ZnO are ∼1165, ∼94.6, and ∼9.7, respectively.
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
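The abstract's error budget adds its three independent sources in quadrature; a quick arithmetic check (percentages taken directly from the abstract) reproduces the quoted "at least 30 percent":

```python
import math

# Independent error contributions to the surface-density estimate (percent),
# as quoted in the abstract; independent errors add in quadrature.
sample_size_err = 25.0   # size of the velocity sample
rotation_err = 15.0      # rotation curve and solar galactocentric distance
shape_err = 10.0         # velocity-distribution shape above z = 1 kpc

total_err = math.sqrt(sample_size_err**2 + rotation_err**2 + shape_err**2)
# total_err is about 30.8, consistent with "at least 30 percent"
```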
Iachan, Ronaldo; Johnson, Christopher H.; Harding, Richard L.; Kyle, Tonja; Saavedra, Pedro; Frazier, Emma L.; Beer, Linda; Mattson, Christine L.; Skarbinski, Jacek
2016-01-01
Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
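The weighting process described above (inverse selection probability across three stages, then nonresponse and multiplicity adjustments) can be sketched as follows. This is a hedged illustration of standard multistage survey weighting, not MMP's actual code; the function name, argument names, and example values are all hypothetical.

```python
def analysis_weight(p_area, p_facility, p_patient,
                    facility_response_rate, patient_response_rate,
                    n_facilities_visited):
    """Three-stage design weight: the inverse of the product of the
    stage-wise selection probabilities, adjusted upward for nonresponse
    at the facility and patient levels and downward for multiplicity
    (patients who could have been sampled at more than one facility)."""
    base = 1.0 / (p_area * p_facility * p_patient)
    nonresponse_adj = 1.0 / (facility_response_rate * patient_response_rate)
    multiplicity_adj = 1.0 / n_facilities_visited
    return base * nonresponse_adj * multiplicity_adj

# Hypothetical patient: selection probs 0.5 / 0.1 / 0.02 at the three stages,
# 85% facility and 70% patient response, care received at 2 facilities.
w = analysis_weight(0.5, 0.1, 0.02, 0.85, 0.7, 2)
```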
A new look at the Lake Superior biomass size spectrum
Yurista, Peder M.; Yule, Daniel L.; Balge, Matt; VanAlstine, Jon D.; Thompson, Jo A.; Gamble, Allison E.; Hrabik, Thomas R.; Kelly, John R.; Stockwell, Jason D.; Vinson, Mark
2014-01-01
We synthesized data from multiple sampling programs and years to describe the Lake Superior pelagic biomass size structure. Data consisted of Coulter counts for phytoplankton, optical plankton counts for zooplankton, and acoustic surveys for pelagic prey fish. The size spectrum was stable across two time periods separated by 5 years. The primary scaling or overall slope of the normalized biomass size spectra for the combined years was −1.113, consistent with a previous estimate for Lake Superior (−1.10). Periodic dome structures within the overall biomass size structure were fit to polynomial regressions based on the observed sub-domes within the classical taxonomic positions (algae, zooplankton, and fish). This interpretation of periodic dome delineation was aligned more closely with predator–prey size relationships that exist within the zooplankton (herbivorous, predacious) and fish (planktivorous, piscivorous) taxonomic positions. Domes were spaced approximately every 3.78 log10 units along the axis and with a decreasing peak magnitude of −4.1 log10 units. The relative position of the algal and herbivorous zooplankton domes predicted well the subsequent biomass domes for larger predatory zooplankton and planktivorous prey fish.
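The "primary scaling" reported above is the slope of an ordinary least-squares fit of log10 normalized biomass against log10 size class. A minimal stdlib sketch (the data here are synthetic, generated with a slope near the reported -1.1; the real analysis fits observed Coulter, plankton-counter, and acoustic data):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Illustrative log10 size-class midpoints and log10 normalized biomass,
# constructed with a true slope of -1.1 as reported for Lake Superior.
log_size = [i * 0.5 for i in range(10)]
log_biomass = [5.0 - 1.1 * s for s in log_size]
slope = ols_slope(log_size, log_biomass)
```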
Key to enhance thermoelectric performance by controlling crystal size of strontium titanate
NASA Astrophysics Data System (ADS)
Wang, Jun; Ye, Xinxin; Yaer, Xinba; Wu, Yin; Zhang, Boyu; Miao, Lei
2015-09-01
A one-step molten salt synthesis process was introduced to fabricate nano- to micrometer-sized SrTiO3 powders, and the effects of synthesis temperature, oxide-to-flux ratio and raw materials on the formation of the SrTiO3 powders were examined. Pure SrTiO3 particles 100 nm or larger in size were obtained at a relatively low temperature of 900 °C. Micro-sized rhombohedral crystals with a maximum size of approximately 12 μm were obtained from SrCO3 or Sr(NO3)2 strontium sources at a 1:1 O/S ratio. Nb-doped SrTiO3 particles of controlled crystal size and morphology were prepared by this method to assess their thermoelectric performance. The Seebeck coefficient obtained is significantly high compared with reported data, and the high fraction of nanoparticles in the sample also has a positive effect on the Seebeck coefficient, likely owing to the energy filtering effect at the large number of grain boundaries in the broadly size-distributed structure.
Burgess, George H.; Bruce, Barry D.; Cailliet, Gregor M.; Goldman, Kenneth J.; Grubbs, R. Dean; Lowe, Christopher G.; MacNeil, M. Aaron; Mollet, Henry F.; Weng, Kevin C.; O'Sullivan, John B.
2014-01-01
White sharks are highly migratory and segregate by sex, age and size. Unlike marine mammals, they neither surface to breathe nor frequent haul-out sites, hindering generation of abundance data required to estimate population size. A recent tag-recapture study used photographic identifications of white sharks at two aggregation sites to estimate abundance in “central California” at 219 mature and sub-adult individuals. They concluded this represented approximately one-half of the total abundance of mature and sub-adult sharks in the entire eastern North Pacific Ocean (ENP). This low estimate generated great concern within the conservation community, prompting petitions for governmental endangered species designations. We critically examine that study and find violations of model assumptions that, when considered in total, lead to population underestimates. We also use a Bayesian mixture model to demonstrate that the inclusion of transient sharks, characteristic of white shark aggregation sites, would substantially increase abundance estimates for the adults and sub-adults in the surveyed sub-population. Using a dataset obtained from the same sampling locations and widely accepted demographic methodology, our analysis indicates a minimum all-life stages population size of >2000 individuals in the California subpopulation is required to account for the number and size range of individual sharks observed at the two sampled sites. Even accounting for methodological and conceptual biases, an extrapolation of these data to estimate the white shark population size throughout the ENP is inappropriate. The true ENP white shark population size is likely several-fold greater as both our study and the original published estimate exclude non-aggregating sharks and those that independently aggregate at other important ENP sites. 
Accurately estimating the central California and ENP white shark population size requires methodologies that account for biases introduced by sampling a limited number of sites and that account for all life history stages across the species' range of habitats. PMID:24932483
Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M
2010-08-01
The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of the numbers of adults and pupae and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
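The beekeeper-level arithmetic described in this sampling plan (300-bee sample unit, doubling to include pupae, averaging over n = 8 colonies) is simple enough to sketch directly; function names and the example counts are mine, for illustration only:

```python
from statistics import mean

def colony_mite_density(mites_dislodged, bees_sampled=300):
    """Mites per 100 adult bees from a single ~300-bee sample unit."""
    return 100.0 * mites_dislodged / bees_sampled

def colony_total_density(mites_dislodged, bees_sampled=300):
    """Extrapolate to adults and pupae combined by doubling the
    density on adults, as the sampling plan suggests."""
    return 2.0 * colony_mite_density(mites_dislodged, bees_sampled)

def apiary_density(mite_counts, bees_sampled=300):
    """Apiary-level estimate: one sample unit in each of n = 8 colonies."""
    return mean(colony_mite_density(m, bees_sampled) for m in mite_counts)

d = colony_total_density(9)                     # 9 mites on 300 bees
a = apiary_density([3, 9, 6, 0, 12, 6, 3, 9])   # hypothetical 8-colony apiary
```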
Are wildlife detector dogs or people better at finding Desert Tortoises (Gopherus agassizii)?
Nussear, K.E.; Esque, T.C.; Heaton, J.S.; Cablk, Mary E.; Drake, K.K.; Valentin, C.; Yee, J.L.; Medica, P.A.
2008-01-01
Our ability to study threatened and endangered species depends on locating them readily in the field. Recent studies highlight the effectiveness of trained detector dogs at locating wildlife during field surveys, including Desert Tortoises in a semi-natural setting. Desert Tortoises (Gopherus agassizii) are cryptic and difficult to detect during surveys, especially the smaller size classes. We conducted comparative surveys to determine whether human or detector dog teams were more effective at locating Desert Tortoises in the wild. We compared detectability of Desert Tortoises and the costs to deploy human and dog search teams. Detectability of tortoises was not statistically different between the teams and was estimated to be approximately 70% (SE = 5%). Dogs found a greater proportion of tortoises located in vegetation than did humans. The dog teams finished surveys 2.5 hours faster than the humans on average each day. The human team cost approximately $3,000 less per square kilometer sampled. Dog teams provided a quick and effective method for surveying for adult Desert Tortoises; however, we were unable to determine their effectiveness at locating smaller size classes. Detection of smaller size classes during surveys would improve management of the species and should be addressed by future research using Desert Tortoise detector dogs.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
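Since the abstract singles out the Kruskal-Wallis test as the primary tool for distinctly non-normal populations, a minimal pure-Python sketch of its H statistic may be useful. This is a textbook formula, not the paper's code; it omits the tie correction and therefore assumes all observations are distinct:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction; assumes all
    observations across groups are distinct)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N
    n_total = len(pooled)
    stat = 0.0
    for g in groups:
        r_sum = sum(rank[v] for v in g)              # rank sum of the group
        stat += r_sum ** 2 / len(g)
    return 12.0 / (n_total * (n_total + 1)) * stat - 3 * (n_total + 1)

h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

For these three fully separated groups H = 7.2, which exceeds the chi-squared critical value of about 5.99 at 2 degrees of freedom, so the locations would be judged different.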
Cleaning Study of Genesis Sample 60487
NASA Technical Reports Server (NTRS)
Kuhlman, Kim R.; Rodriquez, M. C.; Gonzalez, C. P.; Allton, J. H.; Burnett, D. S.
2013-01-01
The Genesis mission collected solar wind and brought it back to Earth in order to provide precise knowledge of solar isotopic and elemental compositions. The ions in the solar wind were stopped in the collectors at depths on the order of 10 to a few hundred nanometers. This shallow implantation layer is critical for scientific analysis of the composition of the solar wind and must be preserved throughout sample handling, cleaning, processing, distribution, preparation and analysis. Particles of Genesis wafers, brine from the Utah Testing Range, and an organic film have deleterious effects on many of the high-resolution instruments that have been developed to analyze the implanted solar wind. We have conducted a correlative microscopic study of the efficacy of cleaning Genesis samples with megasonically activated ultrapure water and UV/ozone cleaning. Sample 60487, the study sample, is a piece of float-zone silicon from the B/C array approximately 4.995 mm x 4.145 mm in size.
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated against random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling that minimizes the statistical differences between data sets. Of these approaches, DUPLEX provides a benchmark, achieving good model performance with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than the other approaches, especially for non-uniform datasets, with the benefit of scalability for data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
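The Neyman sampling underlying the SOM-based approach allocates the sample across strata (here, SOM clusters) in proportion to stratum size times stratum standard deviation. A minimal sketch of that allocation rule, with illustrative stratum sizes and standard deviations of my own choosing:

```python
def neyman_allocation(n, stratum_sizes, stratum_stdevs):
    """Allocate a total sample of size n across strata in proportion to
    N_h * sigma_h (Neyman allocation). Rounding means the allocated
    counts may differ from n by a unit or two."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_stdevs)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Three hypothetical SOM clusters of unequal size and variability:
# the small but highly variable third cluster receives a large share.
alloc = neyman_allocation(60, [500, 300, 200], [2.0, 1.0, 4.0])
```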
The effect of char structure on burnout during pulverized coal combustion at pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G.; Wu, H.; Benfell, K.E.
An Australian bituminous coal sample was burnt in a drop tube furnace (DTF) at 1 atm and in a pressurized drop tube furnace (PDTF) at 15 atm. Char samples were collected at different burnout levels, and a scanning electron microscope was used to examine the char structures. A model was developed to predict the burnout of char particles with different structures. The model accounts for combustion of the thin-walled structure of cenospheric char and its fragmentation during burnout. The effect of pressure on reaction rate was also considered in the model. Approximately 40% and 70% cenospheric char particles were observed in the char samples collected after coal pyrolysis in the DTF and PDTF, respectively. A large number of fine particles (< 30 μm) were observed in the 1 atm char samples at burnout levels between 30% and 50%, which suggests that significant fragmentation occurred during early combustion. Ash particle size distributions show that a large number of small ash particles formed during burnout at high pressure. The time needed for 70% char burnout at 15 atm is approximately 1.6 times that at 1 atm under the same temperature and gas environment conditions, which is attributed to the different pressures as well as char structures. The overall reaction rate for cenospheric char was predicted to be approximately 2 times that of the dense chars, which is consistent with previous experimental results. The predicted char burnout, including char structures, agrees reasonably well with the experimental measurements obtained at 1 atm and 15 atm.
Ober, Allison J; Sussell, Jesse; Kilmer, Beau; Saunders, Jessica; Heckathorn, Douglas D
2016-04-01
Violent drug markets are not as prominent as they once were in the United States, but they still exist and are associated with significant crime and lower quality of life. The drug market intervention (DMI) is an innovative strategy that uses focused deterrence, community engagement, and incapacitation to reduce the crime and disorder associated with these markets. Although studies show that DMI can reduce crime and overt drug activity, one perspective is prominently missing from these evaluations: that of those who purchase drugs. This study explores the use of respondent-driven sampling (RDS), a statistical sampling method, to approximate a representative sample of drug users who purchased drugs in a targeted DMI market, in order to gain insight into the effect of a DMI on market dynamics. Using RDS, we recruited individuals who reported hard drug use (crack or powder cocaine, heroin, methamphetamine, or illicit use of prescription opioids) in the last month to participate in a survey. The main survey asked about drug use, drug purchasing, and drug market activity before and after DMI; a secondary survey asked about network characteristics and recruitment. Our sample of 212 respondents met key RDS assumptions, suggesting that the characteristics of our weighted sample approximate those of the drug user network. The weighted estimates for market purchasers are generally valid for inferences about the aggregate population of customers, but a larger sample size is needed to make stronger inferences about the effects of a DMI on drug market activity. © The Author(s) 2016.
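RDS weighting is commonly done with the RDS-II (Volz-Heckathorn) estimator, which weights each respondent inversely to his or her reported network degree. The abstract does not say which estimator was used, so this is a generic sketch; the outcome and degree values are invented for illustration:

```python
def rds_ii_proportion(outcomes, degrees):
    """RDS-II estimate of a population proportion: each respondent's
    0/1 outcome is weighted by the inverse of the respondent's
    reported network degree."""
    inv = [1.0 / d for d in degrees]
    weighted_sum = sum(y * w for y, w in zip(outcomes, inv))
    return weighted_sum / sum(inv)

# Hypothetical respondents: 1 = purchased in the targeted market after the DMI
outcomes = [1, 0, 1, 1, 0, 0]
degrees = [2, 10, 4, 5, 10, 20]   # reported network sizes
p_hat = rds_ii_proportion(outcomes, degrees)
```

Because low-degree respondents are harder to reach through referral chains, down-weighting the well-connected ones corrects the over-representation that recruitment through networks otherwise induces.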
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. Tan; J. T. Busby; H. J. M. Chichester
2013-06-01
An optimized thermomechanical treatment (TMT) applied to austenitic alloy 800H (Fe-21Cr-32Ni) had shown significant improvements in corrosion resistance and basic mechanical properties. This study examined its effect on radiation resistance by irradiating both solution-annealed (SA) and TMT samples at 500 °C to 3 dpa. Microstructural characterization using transmission electron microscopy revealed that the radiation-induced Frank loops, voids, and γ′-Ni3(Ti,Al) precipitates had similar sizes in the SA and TMT samples. The amounts of radiation-induced defects, and more significantly of γ′ precipitates, were however reduced in the TMT samples. These reductions would reduce radiation hardening by approximately 40.9% compared to the SA samples. This study indicates that optimized TMT is an economical approach for effective overall property improvement.
Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel
2016-10-01
Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in performance measures estimated from single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation error as well as stability (precision) of the error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set grows, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and it outperforms LDA and RF unless the data variability is too high and/or the effect sizes are too small. RF was found to outperform only kNN, in some instances where the data are more variable and have smaller effect sizes, in which case it also provides more stable error estimates than kNN and LDA. 
Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
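The simulation design described above (repeated simulated datasets, a train/test split per replicate, generalisation error averaged over replicates) can be sketched with a deliberately minimal stdlib kNN classifier. This is a toy version of the study's protocol, not its code: two Gaussian classes separated by a per-feature effect size, with all names and parameter values mine.

```python
import math
import random

def knn_predict(train, k, point):
    """Majority vote among the k nearest training points (Euclidean)."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], point))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

def mean_generalisation_error(n_train, n_test, n_features, effect,
                              reps, seed=1):
    """Average test error over repeated simulated datasets: two Gaussian
    classes whose means differ by `effect` in every feature."""
    rng = random.Random(seed)

    def draw(label):
        return ([rng.gauss(label * effect, 1.0) for _ in range(n_features)],
                label)

    errors = []
    for _ in range(reps):
        train = [draw(i % 2) for i in range(n_train)]
        test = [draw(i % 2) for i in range(n_test)]
        wrong = sum(knn_predict(train, 5, x) != y for x, y in test)
        errors.append(wrong / n_test)
    return sum(errors) / reps

err_large = mean_generalisation_error(40, 40, 5, 2.0, 5)  # separable classes
err_small = mean_generalisation_error(40, 40, 5, 0.1, 5)  # tiny effect size
```

Averaging over replicates is what distinguishes this protocol from single-sample comparisons: the per-replicate errors vary, and the mean (with its spread) is the quantity the paper argues should be compared.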
Bian, Liujiao; Ji, Xu; Hu, Wei
2014-07-01
In this work, a novel method was established to isolate and purify human plasminogen Kringle 5 (HPK5) as a histidine-tagged fusion protein expressed in Escherichia coli BL21 (DE3). The method consisted of sample extraction on a Ni-chelated Sepharose Fast-Flow affinity column, ammonium sulfate salting-out, and a Sephadex G-75 size-exclusion column, in turn. Purity analysis by SDS-PAGE and by high-performance size-exclusion and reversed-phase chromatography showed that the recombinant fusion HPK5 obtained was homogeneous, with a purity higher than 96%; activity analysis using the chorioallantoic membrane model of chicken embryos revealed that the purified recombinant HPK5 exhibited obvious anti-angiogenic activity over the effective range of 5.0-25.0 µg/mL. Through this procedure, about 19 mg of purified recombinant fusion HPK5 can be obtained from 1 L of original fermentation solution. Approximately 32% of the total recombinant fusion HPK5 can be captured, and the total yield was approximately 11%. Copyright © 2013 John Wiley & Sons, Ltd.
Influence of deposition temperature on WTiN coatings tribological performance
NASA Astrophysics Data System (ADS)
Londoño-Menjura, R. F.; Ospina, R.; Escobar, D.; Quintero, J. H.; Olaya, J. J.; Mello, A.; Restrepo-Parra, E.
2018-01-01
WTiN films were grown on silicon and stainless-steel substrates using the DC magnetron sputtering technique. The substrate temperature was varied, taking values of 100 °C, 200 °C, 300 °C, and 400 °C. X-ray diffraction analysis allowed us to identify a rock salt-type face-centered cubic (FCC) structure, with a lattice parameter of approximately 4.2 Å, a relatively low microstrain (deformations at the microscopic level, between 4.7% and 6.7%), and a crystallite size of tens of nanometers (11.6 nm-31.5 nm). The C1s, N1s, O1s, Ti2p, W4s, W4p, W4d and W4f narrow spectra were obtained using X-ray photoelectron spectroscopy (XPS), and the deconvoluted spectra presented different binding energies depending on the substrate temperature. Grain sizes and roughness (approximately 4 nm) of the films were determined using atomic force microscopy. Scratch and pin-on-disc tests were conducted, showing the best performance for the film grown at 200 °C. This sample exhibited lower roughness, coefficient of friction, and wear rate.
The absolute magnitude distribution of cold classical Kuiper belt objects
NASA Astrophysics Data System (ADS)
Petit, Jean-Marc; Bannister, Michele T.; Alexandersen, Mike; Chen, Ying-Tung; Gladman, Brett; Gwyn, Stephen; Kavelaars, JJ; Volk, Kathryn
2016-10-01
We report measurements of the low-inclination component of the main Kuiper Belt showing a size frequency distribution that is very steep for sizes larger than H_r ~ 6.5-7.0 and then flattens to a shallower slope that is still steeper than the collisional equilibrium slope. The Outer Solar System Origins Survey (OSSOS) is ongoing and is expected to detect over 500 TNOs in a precisely calibrated and characterized survey. Combining our current sample with CFEPS and the Alexandersen et al. (2015) survey, we analyse a sample of ~180 low-inclination main classical (cold) TNOs with absolute magnitude H_r (SDSS r'-like filter) in the range 5 to 8.8. We confirm that the H_r distribution can be approximated by an exponential with a very steep slope (>1) at the bright end of the distribution, as has long been recognized. A transition to a shallower slope occurs around H_r ~ 6.5-7.0, an H_r magnitude identified by Fraser et al. (2014). Faintward of this transition, we find a second exponential to be a good approximation at least until H_r ~ 8.5, but with a slope significantly steeper than the one proposed by Fraser et al. (2014) or even the collisional equilibrium value of 0.5. The transition in the cold TNO H_r distribution thus appears to occur at larger sizes than is observed in the high-inclination main classical (hot) belt, an important indicator of a different cosmogony for these two sub-components of the main classical Kuiper belt. Given the steep slope faintward of the transition, the cold population of ~100 km diameter objects may dominate the mass of the Kuiper belt in the 40 au < a < 47 au region.
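As a hedged illustration of the exponential form used above, the sketch below fits a single slope alpha in N(<H) ∝ 10^(alpha·H) by maximum likelihood (the survey actually fits a broken, two-slope form; the sample here is synthetic, not survey data):

```python
import math
import random

def fit_alpha(h, h_max):
    """ML slope for a cumulative distribution N(<H) ∝ 10^(alpha*H):
    treating h_max - H as exponentially distributed (ignoring any
    lower truncation), the rate estimate is 1/mean(h_max - H)."""
    lam = 1.0 / (sum(h_max - hi for hi in h) / len(h))
    return lam / math.log(10)

# Synthetic sample drawn with a known slope of 1.2 up to H_r = 8.8.
rng = random.Random(1)
alpha_true, h_max = 1.2, 8.8
sample = [h_max - rng.expovariate(alpha_true * math.log(10)) for _ in range(5000)]
print(round(fit_alpha(sample, h_max), 2))
```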
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
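A minimal sketch of the discrete haploid Wright-Fisher model with selection that the diffusion methods approximate (the parameter values are illustrative, not from the study):

```python
import random

def wf_expected_freq(p, s):
    """Expected mutant frequency after one generation of selection
    with coefficient s in the haploid Wright-Fisher model."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

def wf_step(p, s, N, rng):
    """One generation: deterministic selection followed by binomial
    resampling (drift) in a population of size N."""
    p_sel = wf_expected_freq(p, s)
    k = sum(1 for _ in range(N) if rng.random() < p_sel)
    return k / N

# Strong selection (s = 0.5), where the weak-selection diffusion breaks down.
rng = random.Random(42)
p = 0.1
traj = [p]
for _ in range(20):
    p = wf_step(p, s=0.5, N=1000, rng=rng)
    traj.append(p)
print(round(traj[-1], 3))
```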
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some applications, the solutions of combinatorial optimization problems on graphs must satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem, and prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. We also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c, and two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph; the second has an approximation ratio of O(n^δ), for some δ < 1/3.
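For context on the unconstrained problem these variants extend, here is a sketch of the classical greedy peeling 2-approximation for the densest subgraph (this is not the authors' size-constrained algorithm; the example graph is invented):

```python
def densest_subgraph_peel(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and
    return the intermediate subgraph of maximum average density m/n.
    A classical 2-approximation for the unconstrained densest subgraph."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # defensive copy
    best_density, best_set = 0.0, set(adj)
    while adj:
        n = len(adj)
        m = sum(len(nbrs) for nbrs in adj.values()) / 2
        if m / n >= best_density:
            best_density, best_set = m / n, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))      # min-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best_set, best_density

# K4 plus a pendant vertex: the densest part is the K4 (density 6/4 = 1.5).
g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
best_set, best_density = densest_subgraph_peel(g)
print(best_set, best_density)
```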
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater model in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations at a number of parameter samples; thus, a robust experimental design strategy is desired to locate informative samples, reducing the computational cost of surrogate construction and consequently improving the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of differing dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud
2008-12-01
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information, but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple, standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios, and compute bias and precision measures for a given scenario and known values of the parameters (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
Emma Vakili; Chad M. Hoffman; Robert E. Keane
2016-01-01
Fuel loading estimates from planar intersect sampling protocols for fine dead down woody surface fuels require an approximation of the mean squared diameter (d²) of the 1-h (0-0.63 cm), 10-h (0.63-2.54 cm), and 100-h (2.54-7.62 cm) timelag size classes. The objective of this study is to determine d² in ponderosa pine (Pinus ponderosa) forests of New Mexico and Colorado,...
Size-amplified acoustofluidic separation of circulating tumor cells with removable microbeads
NASA Astrophysics Data System (ADS)
Liu, Huiqin; Ao, Zheng; Cai, Bo; Shu, Xi; Chen, Keke; Rao, Lang; Luo, Changliang; Wang, Fu-Bin; Liu, Wei; Bondesson, Maria; Guo, Shishang; Guo, Feng
2018-06-01
Isolation and analysis of rare circulating tumor cells (CTCs) is of great interest in cancer diagnosis, prognosis, and treatment efficacy evaluation. Acoustofluidic cell separation is an attractive method due to its contactless, noninvasive, simple, and versatile features. However, the indistinctive physical difference between CTCs and normal blood cells limits the purity of CTCs obtained with current acoustic methods. Herein, we demonstrate a size-amplified acoustic separation and release of CTCs with removable microbeads. CTCs selectively bound to size-amplifiers (40 μm-diameter anti-EpCAM/gelatin-coated SiO2 microbeads) have significant physical differences (size and mechanics) compared to normal blood cells, resulting in an amplification of the acoustic radiation force approximately a hundredfold over that of bare CTCs or normal blood cells. Therefore, CTCs can be efficiently sorted out with size-amplifiers in a traveling surface acoustic wave microfluidic device and released from the size-amplifiers by enzymatic degradation for further purification or downstream analysis. We demonstrate cell separation from blood samples with a total efficiency (E_total) of ∼77%, purity (P) of ∼96%, and viability (V) of ∼83% after releasing cells from the size-amplifiers. Our method substantially improves the emerging application of rare cell purification for translational medicine.
Long term evolution of distant retrograde orbits in the Earth-Moon system
NASA Astrophysics Data System (ADS)
Bezrouk, Collin; Parker, Jeffrey S.
2017-09-01
This work studies the evolution of several Distant Retrograde Orbits (DROs) of varying size in the Earth-Moon system over durations of up to tens of millennia. This analysis is relevant for missions requiring a completely hands-off, long-duration quarantine orbit, such as a Mars Sample Return mission or the Asteroid Redirect Mission. Four DROs are selected from four stable size regions and are propagated for up to 30,000 years with an integrator that uses extended-precision arithmetic techniques and a high-fidelity dynamical model. The evolution of each orbit's size, shape, orientation, period, out-of-plane amplitude, and Jacobi constant is tracked. Small DROs, with minor-axis amplitudes of approximately 45,000 km or less, decay in size and period largely due to the Moon's solid tides. Larger DROs (62,000 km and up) are more influenced by the gravity of bodies external to the Earth-Moon system, and remain bound to the Moon for significantly less time.
De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A
2009-06-01
Fourier transform near-infrared (FT-NIR) spectroscopy was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution, to minimize measurement variations and avoid DON segregation among particles of different sizes, was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 µm, were obtained using partial least squares (PLS) regression with an external validation technique. Values of the root mean square error of prediction (RMSEP, 306-379 µg kg⁻¹) were comparable with, and not too far from, values of the root mean square error of cross-validation (RMSECV, 470-555 µg kg⁻¹). Coefficients of determination (r²) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r² = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r² = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 µg kg⁻¹ DON to separate the two classes. The model correctly classified 69% of the 65 validation samples, with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
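A short sketch of the two figures of merit quoted above, RMSEP and the range error ratio (RER); the DON reference and predicted values below are invented for illustration, not data from the study:

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def range_error_ratio(y_true, y_pred):
    """RER = range of reference values divided by RMSEP; values above
    about 6 are often read as at least 'limited to good' utility."""
    return (max(y_true) - min(y_true)) / rmsep(y_true, y_pred)

# Hypothetical DON reference vs. FT-NIR-predicted values (µg/kg).
ref = [100, 400, 800, 1500, 2600]
pred = [180, 330, 900, 1400, 2750]
print(round(rmsep(ref, pred), 1), round(range_error_ratio(ref, pred), 1))
```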
Zhang, Jinsong; Wang, Huali; Bao, Yongping; Zhang, Lide
2004-05-28
We previously reported that nano red elemental selenium (Nano-Se) in the size range 20-60 nm had bioavailability similar to that of sodium selenite (BioFactors 15 (2001) 27). We recently found that Nano-Se particles of different sizes differed markedly in scavenging an array of free radicals in vitro: the smaller the particle, the better the scavenging activity (Free Radic. Biol. Med. 35 (2003) 805). To examine whether there is a size effect of Nano-Se in the induction of Se-dependent enzymes, a range of Nano-Se particles (5-200 nm) was prepared by controlling elemental Se atom aggregation. The size of the Nano-Se particles was inversely correlated with the protein level in the redox system of selenite and glutathione; different sizes of red elemental Se were prepared by adding varying amounts of bovine serum albumin (BSA). Three sizes of Nano-Se (5-15 nm, 20-60 nm, and 80-200 nm) were chosen for comparison of biological activity in terms of the induction of seleno-enzyme activities. Results showed no significant size effect of Nano-Se from 5 to 200 nm in the induction of glutathione peroxidase (GPx), phospholipid hydroperoxide glutathione peroxidase (PHGPx) or thioredoxin reductase-1 (TrxR-1) in human hepatoma HepG2 cells or the livers of mice.
Petrology of the Crystalline Rocks Hosting the Santa Fe Impact Structure
NASA Technical Reports Server (NTRS)
Schrader, C. M.; Cohen, B. A.
2010-01-01
We collected samples from within the area of shatter cone occurrence and for approximately 8 kilometers (map distance) along the roadway. Our primary goal is to date the impact; our secondary goal is to use the petrology and Ar systematics to provide further insight into the size and scale of the impact. Our approach is to: (1) conduct a detailed petrology study to identify lithologies that share petrologic characteristics and tectonic histories but differ in degree of shock; (2) obtain micro-cores of K-bearing minerals from multiple samples for Ar-40/Ar-39 analysis; and (3) examine the Ar diffusion patterns for multiple minerals in multiple shocked and control samples. This will help us better understand outcrop- and regional-scale relationships among rocks and their responses to the impact event.
Non-Born-Oppenheimer self-consistent field calculations with cubic scaling
NASA Astrophysics Data System (ADS)
Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés
2012-05-01
An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) and a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations on small molecules showed that the energy and geometry errors introduced by the ADFT and LHP approximations are small and comparable to those obtained with electronic ADFT. In addition, sample calculations on (HF)n chains revealed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium-sized molecules the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects in large systems that would otherwise be impractical.
Topological Hall and Spin Hall Effects in Disordered Skyrmionic Textures
NASA Astrophysics Data System (ADS)
Ndiaye, Papa Birame; Akosa, Collins; Manchon, Aurelien; Spintronics Theory Group Team
We carry out a thorough study of the topological Hall and topological spin Hall effects in disordered skyrmionic systems: the dimensionless (spin) Hall angles are evaluated across the energy band structure in the multiprobe Landauer-Büttiker formalism, and their link to the effective magnetic field emerging from the real-space topology of the spin texture is highlighted. We discuss these results for an optimal skyrmion size and for various sample sizes, and find that the adiabatic approximation still holds for large skyrmions as well as for nanoskyrmions a few atoms in size. Finally, we test the robustness of the topological signals against disorder strength and show that the topological Hall effect is highly sensitive to momentum scattering. This work was supported by the King Abdullah University of Science and Technology (KAUST) through Award No. OSR-CRG URF/1/1693-01 from the Office of Sponsored Research (OSR).
Silvestri, Daniele; Wacławek, Stanisław; Gončuková, Zuzanna; Padil, Vinod V T; Grübel, Klaudiusz; Černík, Miroslav
2018-05-24
A novel method for assessing the disintegration degree (DD) of waste activated sludge (WAS) using differential centrifugal sedimentation (DCS) is presented. The method was validated on a WAS sample at four levels of disintegration in the range 14.4-82.6%, corresponding to a median particle size range of 8.5-1.6 µm. Of the several sludge disintegration methods used (microwave, alkalization, ultrasound, and peroxydisulfate activated by ultrasound), activated peroxydisulfate disintegration resulted in the greatest DD (83%) and the smallest median particle size of WAS. The particle size distribution of the pretreated sludge, measured by DCS, was negatively correlated with the DD determined from soluble chemical oxygen demand (SCOD; determination coefficient of 0.995). Based on these results, it may be concluded that DCS analysis can approximate the WAS disintegration degree.
Closure and ratio correlation analysis of lunar chemical and grain size data
NASA Technical Reports Server (NTRS)
Butler, J. C.
1976-01-01
Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, two types of closure effects are evident in these data sets: negative correlations of intermediate size which are solely the result of closure, and correlations of small absolute value that depart significantly from their expected closure correlations, which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets) and, in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.
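The closure effect described above can be reproduced numerically: statistically independent "open" variables acquire a spurious negative correlation once each is divided by the row sum. A minimal sketch with synthetic data (not the lunar data sets):

```python
import math
import random

def pearson(x, y):
    """Pearson product moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
# Three independent open variables...
open_vars = [[rng.uniform(1, 2) for _ in range(500)] for _ in range(3)]
r_open = pearson(open_vars[0], open_vars[1])
# ...closed to proportions summing to 1 (the closure transformation).
closed = []
for i in range(500):
    total = sum(v[i] for v in open_vars)
    closed.append([v[i] / total for v in open_vars])
r_closed = pearson([row[0] for row in closed], [row[1] for row in closed])
print(round(r_open, 2), round(r_closed, 2))  # closure induces a negative r
```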
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Suraj; Cunningham, Ross; Ozturk, Tugce
Aluminum alloys are candidate materials for weight-critical applications because of their excellent strength- and stiffness-to-weight ratios. However, defects such as voids decrease the strength and fatigue life of these alloys, which can limit the application of Selective Laser Melting. In this study, the average volume fraction, average size, and size distribution of pores in AlSi10Mg samples built using Selective Laser Melting were characterized. Synchrotron high-energy X-rays were used to perform computed tomography on volumes of order one cubic millimeter with a resolution of approximately 1.5 μm. Substantial variations in the pore size distributions were found as a function of process conditions. Even under conditions that ensured that all locations were melted at least once, a significant number density of pores above 5 μm in diameter was found.
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Smoleń, Mariusz; Aksamit-Koperska, Monika A.; Łojewski, Tomasz; Łojewska, Joanna
2016-06-01
Size exclusion chromatography (SEC), especially coupled with a multi-angle laser light scattering (MALLS) detector, is a powerful tool for diagnosing the deterioration of historic and art objects and evaluating their condition. In this paper, the SEC-UV-MALLS-DRI technique was applied to study the degradation of silk fibroin samples (Bombyx mori) artificially aged under various conditions: in the presence of oxygen, at different amounts of water vapour, and in volatile organic compounds (VOCs), all at a temperature of 90 °C. The conditions were chosen to mimic the real conditions under which textiles are stored during exhibitions and in showcases. The influence of temperature, moisture and VOC content on the state of the silk textiles was examined using size exclusion chromatography. The pseudo-zero-order Ekenstam equation was applied to estimate the degradation rate of fibroin from the approximated values of its degree of polymerization (DP).
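A minimal sketch of the pseudo-zero-order Ekenstam relation, 1/DP_t - 1/DP_0 = k·t, fitted through the origin by least squares; the DP values and ageing times below are hypothetical, not data from the paper:

```python
def ekenstam_rate(times, dp_values):
    """Estimate the degradation rate constant k from the Ekenstam relation
    1/DP_t - 1/DP_0 = k * t, via a least-squares fit through the origin."""
    dp0 = dp_values[0]
    y = [1.0 / dp - 1.0 / dp0 for dp in dp_values]
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den

# Hypothetical ageing series: DP of fibroin after t days at 90 °C.
times = [0, 7, 14, 28]
dps = [400, 320, 267, 200]
k = ekenstam_rate(times, dps)
print(f"{k:.2e} per day")
```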
NASA Astrophysics Data System (ADS)
Hiroi, T.; Kaiden, H.; Yamaguchi, A.; Kojima, H.; Uemoto, K.; Ohtake, M.; Arai, T.; Sasaki, S.
2016-12-01
Lunar meteorite chip samples recovered by the National Institute of Polar Research (NIPR) have been studied with a UV-visible-near-infrared spectrometer, targeting small areas of about 3 × 2 mm in size. Rock types and approximate mineral compositions of the studied meteorites were identified or obtained through this spectral survey with no sample preparation required. A linear deconvolution method was used to derive end-member mineral spectra from spectra of multiple clasts whenever possible, and the modified Gaussian model was used in an attempt to derive their major pyroxene compositions. This study demonstrates that a visible-near-infrared spectrometer on a lunar rover would be useful for identifying these kinds of unaltered (non-space-weathered) lunar rocks. To prepare for such a future mission, further studies using a smaller spot size are desirable to improve the accuracy of identifying the clasts and mineral phases of the rocks.
Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben
2007-04-01
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
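A toy two-dimensional sketch of the local/nonlocal scatter idea behind UDP, using a distance-threshold neighbourhood and a brute-force scan over unit directions (UDP itself solves a generalized eigenproblem in high dimensions; all data and thresholds here are illustrative):

```python
import math

def scatter(pairs):
    """Sum of outer products of pairwise difference vectors (2-D)."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for (x1, y1), (x2, y2) in pairs:
        dx, dy = x1 - x2, y1 - y2
        s[0][0] += dx * dx; s[0][1] += dx * dy
        s[1][0] += dy * dx; s[1][1] += dy * dy
    return s

def quad(s, w):
    """Quadratic form w^T S w for a 2x2 matrix."""
    return s[0][0] * w[0] * w[0] + (s[0][1] + s[1][0]) * w[0] * w[1] + s[1][1] * w[1] * w[1]

def udp_direction(points, radius):
    """Split pairs into local (within radius) and nonlocal, then scan unit
    directions to maximize the nonlocal/local scatter ratio."""
    local, nonlocal_ = [], []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            p, q = points[i], points[j]
            (local if math.dist(p, q) <= radius else nonlocal_).append((p, q))
    s_l, s_n = scatter(local), scatter(nonlocal_)
    best, best_w = -1.0, None
    for k in range(180):
        a = math.pi * k / 180
        w = (math.cos(a), math.sin(a))
        ratio = quad(s_n, w) / max(quad(s_l, w), 1e-12)
        if ratio > best:
            best, best_w = ratio, w
    return best_w, best

# Two tight clusters separated along x: the best direction is near the x-axis.
pts = [(0, 0), (0.1, 0.1), (0.2, 0), (5, 0), (5.1, 0.1), (5.2, 0)]
w, j = udp_direction(pts, radius=1.0)
print(tuple(round(c, 2) for c in w))
```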
NASA Astrophysics Data System (ADS)
Dai, Jiawei; Pan, Yubai; Xie, Tengfei; Kou, Huamin; Li, Jiang
2018-04-01
Highly transparent terbium aluminum garnet (Tb3Al5O12, TAG) magneto-optical ceramics were fabricated from co-precipitated nanopowders, with tetraethoxysilane (TEOS) as a sintering aid, by vacuum sintering combined with hot isostatic pressing (HIP) post-treatment. The ball-milled TAG powder shows better dispersity than the as-synthesized powder, and its average particle size is about 80 nm. For the ceramic sample pre-sintered at 1720 °C for 20 h and HIP post-treated at 1700 °C for 3 h, the in-line transmittance exceeds 76% in the 400-1580 nm region (except the absorption band), reaching a maximum of 81.8% at a wavelength of 1390 nm. The microstructure of the TAG ceramic is homogeneous and its average grain size is approximately 19.7 μm. The Verdet constant of the sample is calculated to be -182.7 rad·T⁻¹·m⁻¹ at room temperature.
Phase imaging using highly coherent X-rays: radiography, tomography, diffraction topography.
Baruchel, J; Cloetens, P; Härtwig, J; Ludwig, W; Mancini, L; Pernot, P; Schlenker, M
2000-05-01
Several hard X-ray imaging techniques benefit greatly from the coherence of the beams delivered by modern synchrotron radiation sources. This is illustrated with examples recorded on the 'long' (145 m) ID19 'imaging' beamline of the ESRF. Phase imaging is directly related to the small angular size of the source as seen from one point of the sample (an 'effective divergence' of the order of microradians). When using the 'propagation' technique, phase radiography and tomography are instrumentally very simple. They are often used in the 'edge detection' regime, where jumps of density are clearly observed. The in situ damage assessment of micro-heterogeneous materials is one of the many applications. Recently a more quantitative approach has been developed, which provides a three-dimensional density mapping of the sample ('holotomography'). The combination of diffraction topography and phase-contrast imaging constitutes a powerful tool, as shown by the observation of holes of discrete sizes in quasicrystals and the investigation of poled ferroelectric materials.
Image simulation for electron energy loss spectroscopy
Oxley, Mark P.; Pennycook, Stephen J.
2007-10-22
Aberration correction of the probe-forming optics of the scanning transmission electron microscope has allowed the probe-forming aperture to be increased in size, resulting in probes of the order of 1 Å in diameter, and the next generation of correctors promises even smaller probes. Improved spectrometer optics also offers the possibility of larger electron energy loss spectrometry detectors. The localization of images based on core-loss electron energy loss spectroscopy is examined as a function of both probe-forming aperture and detector size. The effective ionization is nonlocal in nature, and two common local approximations are compared to full nonlocal calculations. Finally, the effect of the channelling of the electron probe within the sample is also discussed.
Mn valence, magnetic, and electrical properties of LaMnO3+δ nanofibers by electrospinning.
Zhou, Xianfeng; Xue, Jiang; Zhou, Defeng; Wang, Zhongli; Bai, Yijia; Wu, Xiaojie; Liu, Xiaojuan; Meng, Jian
2010-10-01
LaMnO3+δ nanofibers have been prepared by electrospinning. Nearly 70% of the Mn atoms are Mn4+, a much higher fraction than in nanoparticles. The average grain size of the fibers is approximately 20 nm, which is the critical size for producing the nanoscale effect. The nanofibers exhibit a very broad magnetic transition with Tc ≈ 255 K and a Tc onset around 310 K; the blocking temperature TB is 180 K. The sample shows weak ferromagnetism above TB and below Tc, and superparamagnetism near the Tc onset. Resistivity measurements show a metal-insulator transition near 210 K and an upturn at about 45 K.
NASA Technical Reports Server (NTRS)
Fechtig, H.; Gentner, W.; Hartung, J. B.; Nagel, K.; Neukum, G.; Schneider, E.; Storzer, D.
1977-01-01
The lunar microcrater phenomenology is described. The morphology of lunar microcraters is in almost all respects reproduced in laboratory experiments over the diameter range from less than 1 μm to several millimeters and up to 60 km/s impact velocity. An empirically derived formula is given for the conversion of crater diameters into projectile diameters and masses for given impact velocities and projectile and target densities. The production size frequency distribution for lunar craters in the crater size range from approximately 1 μm to several millimeters in diameter is derived from various microcrater measurements to within a factor of up to 5. Particle track exposure ages have been measured for a variety of lunar samples; they allow the conversion of the lunar crater size frequency production distributions into particle fluxes. The development of crater populations on lunar rocks under self-destruction by subsequent meteoroid impacts and crater overlap is discussed and described theoretically. Erosion rates on lunar rocks of the order of several millimeters per 10^6 yr are calculated. Chemical investigations of the glass linings of lunar craters yield clear evidence of admixture of projectile material in only one case, in which the remnants of an iron-nickel micrometeorite were identified.
Assessment of the influence of field size on maize gene flow using SSR analysis.
Palaudelmàs, M; Melé, E; Monfort, A; Serra, J; Salvia, J; Messeguer, J
2012-06-01
One of the factors that may influence the rate of cross-fertilization is the relative size of the pollen donor and receptor fields. We designed a spatial distribution with four varieties of genetically modified (GM) yellow maize to generate fields of different sizes while maintaining a constant distance to neighbouring fields of conventional white-kernel maize. Samples of cross-fertilized, yellow kernels in white cobs were collected from all of the adjacent fields at different distances. A special series of samples was collected at distances of 0, 2, 5, 10, 20, 40, 80 and 120 m along a transect traced in the dominant down-wind direction in order to identify the origin of the pollen through SSR analysis. The size of the receptor field should be taken into account, especially when it extends in the direction from which the GM pollen flow arrives. From the collected data, we validated a function, based on the gene flow found at the field border, that is very useful for estimating the percentage of GM content at any point of the field; it also serves to predict the total GM content of the field due to cross-fertilization. Using SSR analysis to identify the origin of pollen showed that while changes in the size of the donor field clearly influence the percentage of GMO detected, the effect is moderate: doubling the donor field size resulted in an approximate 7% increase in the GM content of the receptor field. This indicates that variations in the size of the donor field have a smaller influence on GM content than variations in the size of the receptor field.
Structure and texture analysis of PVC foils by neutron diffraction.
Kalvoda, L; Dlouhá, M; Vratislav, S
2010-01-01
Crystalline order of molded and then bi-axially stretched foils prepared from atactic PVC resin is investigated by means of wide-angle neutron diffraction (WAND). The observed high-resolution WAND patterns of all samples are dominated by a sharp maximum corresponding to an inter-planar distance of 0.52 nm. Two weaker maxima are also resolved at 0.62 and 0.78 nm. The intensities and positions of the peaks vary with the deformation ratios of the samples. The average size of the coherently scattering domains is estimated as approximately 4-8 nm. Based on the experimental data, a novel model of crystalline order of atactic PVC is proposed. Copyright 2009 Elsevier Ltd. All rights reserved.
Examination of the Chayes-Kruskal procedure for testing correlations between proportions
Kork, J.O.
1977-01-01
The Chayes-Kruskal procedure for testing correlations between proportions uses a linear approximation to the actual closure transformation to provide a null value, ρij, against which an observed closed correlation coefficient, rij, can be tested. It has been suggested that a significant difference between ρij and rij would indicate a nonzero covariance relationship between the ith and jth open variables. In this paper, the linear approximation to the closure transformation is described in terms of a matrix equation. Examination of the solution set of this equation shows that estimation of, or even the identification of, significant nonzero open correlations is essentially impossible even if the number of variables and the sample size are large. The method of solving the matrix equation is described in the appendix. © 1977 Plenum Publishing Corporation.
Preliminary Results from VOC measurements in the Lower Fraser Valley in July/Aug 2012
NASA Astrophysics Data System (ADS)
Schiller, C. L.; Jones, K.; Vingarzan, R.; Leaitch, R.; Macdonald, A.; Osthoff, H. D.; Reid, K.
2012-12-01
In July/August 2012, a pilot study looking at the effect of ClNO2 production on ozone concentrations in the lower Fraser valley near Abbotsford, BC was conducted. The lower Fraser valley in British Columbia, Canada has some of the highest ozone concentrations and visibility issues in Canada. Abbotsford is located approximately 80 km east of Vancouver, BC and approximately 30 km from the ocean. The site was located in a largely agricultural area with fruit farms (raspberries and blueberries) and poultry barns predominating. During the study, biogenic and anthropogenic VOCs were measured in situ using a GCMS/FID with hourly samples. Particle composition was measured using an ACSM and size distribution using an SMPS. Preliminary results from the study will be discussed.
Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and compare statistical power of the proposed tests under different trend curves data, three statistical tests were proposed. For large sample size with independent normal assumption among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of outcome estimates and the standard errors at each of the study time points for the two strata. For small sample size with independent normal assumption, the F-test statistic was generated, which is a function of sample size of the two strata and estimated parameters across study period. If two trend curves are approximately parallel, the power of Z-test is consistently higher than that of both Chi-square and F-test. If two trend curves cross at low interaction, the power of Z-test is higher than or equal to the power of both Chi-square and F-test; however, at high interaction, the powers of Chi-square and F-test are higher than that of Z-test. The measurement of interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
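The abstract describes the Z and Chi-square statistics only as functions of per-time-point estimates and standard errors for the two strata. A minimal sketch of plausible forms (these exact formulas are assumptions, not taken from the paper):

```python
import math

def z_trend_test(est1, se1, est2, se2):
    # Pooled Z comparing two trend curves: sum the per-time-point
    # differences and pool their standard errors. (Hypothetical form;
    # the paper's exact statistic is not given in the abstract.)
    diff = sum(a - b for a, b in zip(est1, est2))
    pooled = math.sqrt(sum(s1 ** 2 + s2 ** 2 for s1, s2 in zip(se1, se2)))
    return diff / pooled

def chisq_trend_test(est1, se1, est2, se2):
    # Sum of squared standardized per-time-point differences;
    # approximately chi-square with T degrees of freedom under H0.
    return sum((a - b) ** 2 / (s1 ** 2 + s2 ** 2)
               for a, b, s1, s2 in zip(est1, est2, se1, se2))

# Example: two parallel coverage curves, stratum 1 consistently
# 2 percentage points higher at each of 4 survey time points.
est1 = [70.0, 72.0, 74.0, 76.0]
est2 = [68.0, 70.0, 72.0, 74.0]
se = [1.0, 1.0, 1.0, 1.0]
z = z_trend_test(est1, se, est2, se)
x2 = chisq_trend_test(est1, se, est2, se)
```

With parallel curves the pooled Z accumulates the constant difference across time points, which is consistent with the abstract's finding that the Z-test is most powerful when the two trend curves are approximately parallel.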
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than under tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of approximately 50 mg SOC/cm², or 5 Mg SOC/ha, which is approximately 10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is approximately 2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
Shrimpton, J.M.; Zydlewski, Joseph D.; Heath, J.W.
2007-01-01
We examined the effect of temperature oscillation and increased suspended sediment concentration on growth and smolting in juvenile ocean-type chinook salmon (Oncorhynchus tshawytscha). Fish were ponded on February 26; each treatment group had three replicates of 250 fish. Mean temperatures for the entire experiment were 12.3 °C for all tanks, with a total of 1348 and 1341 degree days for the constant temperature and oscillating temperature tanks, respectively. Daily fluctuation in temperature averaged 7.5 °C in the variable temperature groups and less than 1 °C for the constant temperature group. Starting on April 5, bentonite clay was added each day to tanks as a pulse event to achieve a suspended sediment concentration of 200 mg l⁻¹; clay cleared from the tanks within approximately 8 h. Fish were sampled at approximately two-week intervals from ponding until mid-June. On the last sample date, June 12, a single gill arch was removed and fixed for histological examination of gill morphology. By early May, significant differences were seen in size between the groups: control > temperature = sediment > (temperature × sediment). This relationship was consistent throughout the experiment except for the last sample date, when the temperature group had a mean weight significantly greater than the sediment group. Gill Na+,K+-ATPase activity was not affected by daily temperature oscillations, but groups subjected to increased suspended sediment had significantly lower enzyme activities compared to controls. Mean cell size for gill chloride cells did not differ between groups. Plasma cortisol increased significantly during the spring, but there were no significant differences between groups. © 2007 Elsevier B.V. All rights reserved.
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H(sub 0), that the distribution is a pure power law: F(x) varies as x(exp -alpha). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law in which the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H(sub 0) that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log (N (greater than C(sub max))) - log (C(sub max)) distribution.
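A rough illustration of the underlying idea, detecting a slowly varying power-law slope, can be sketched as follows. This is not the paper's statistic B (whose exact form is not given in the abstract): it simply refits the maximum-likelihood index on the upper half of the sample and compares it with the fit to the whole sample, which should agree for a pure power-law parent:

```python
import math
import random

def mle_powerlaw_index(xs, xmin):
    # Maximum-likelihood (Hill-type) estimate of alpha for a density
    # F(x) ~ x^(-alpha) on x >= xmin.
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

def slope_drift(xs):
    # Illustrative bending diagnostic (an assumption, not the paper's
    # statistic B): difference between the index fitted above the
    # sample median and the index fitted to the full sample. For a
    # pure power law the two fits should agree, so the drift is ~0.
    xs = sorted(xs)
    a_all = mle_powerlaw_index(xs, xs[0])
    a_tail = mle_powerlaw_index(xs, xs[len(xs) // 2])
    return a_tail - a_all

random.seed(1)
# paretovariate(1.5) draws from a pure power law whose density falls
# off as x^-2.5 for x >= 1, so both fits should recover alpha ~ 2.5.
sample = [random.paretovariate(1.5) for _ in range(4000)]
drift = slope_drift(sample)
```

A sample with a genuinely bending slope (e.g. a broken power law) would instead give a drift well away from zero.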
Multi-locus analysis of genomic time series data from experimental evolution.
Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S
2015-04-01
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
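A single-locus sketch of the moment-matching idea described above (the paper's method is multi-locus and accounts for linkage; this toy version only compares the mean of simulated Wright-Fisher trajectories with the deterministic mean that a Gaussian approximation would use):

```python
import random

def wf_replicate(p0, N, s, T, rng):
    # One discrete Wright-Fisher trajectory with selection: each
    # generation, 2N gene copies are drawn binomially around the
    # post-selection allele frequency.
    p = p0
    for _ in range(T):
        p_sel = p * (1 + s) / (1 + p * s)
        p = sum(1 for _ in range(2 * N) if rng.random() < p_sel) / (2 * N)
    return p

def deterministic_mean(p0, s, T):
    # Infinite-population recursion: the mean trajectory that a
    # Gaussian (diffusion-style) approximation is built around.
    p = p0
    for _ in range(T):
        p = p * (1 + s) / (1 + p * s)
    return p

rng = random.Random(42)
p0, N, s, T, reps = 0.3, 100, 0.05, 10, 500
finals = [wf_replicate(p0, N, s, T, rng) for _ in range(reps)]
mean_hat = sum(finals) / reps
mean_gp = deterministic_mean(p0, s, T)
```

Over tens of generations the replicate mean tracks the deterministic recursion closely, while the spread across replicates carries the drift variance that the Gaussian process models through its covariance structure.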
Chan, Yvonne L; Schanzenbach, David; Hickerson, Michael J
2014-09-01
Methods that integrate population-level sampling from multiple taxa into a single community-level analysis are an essential addition to the comparative phylogeographic toolkit. Detecting how species within communities have demographically tracked each other in space and time is important for understanding the effects of future climate and landscape changes and the resulting acceleration of extinctions, biological invasions, and potential surges in adaptive evolution. Here, we present a statistical framework for such an analysis based on hierarchical approximate Bayesian computation (hABC) with the goal of detecting concerted demographic histories across an ecological assemblage. Our method combines population genetic data sets from multiple taxa into a single analysis to estimate: 1) the proportion of a community sample that demographically expanded in a temporally clustered pulse and 2) when the pulse occurred. To validate the accuracy and utility of this new approach, we use simulation cross-validation experiments and subsequently analyze an empirical data set of 32 avian populations from Australia that are hypothesized to have expanded from smaller refugia populations in the late Pleistocene. The method can accommodate data set heterogeneity such as variability in effective population size, mutation rates, and sample sizes across species and exploits the statistical strength from the simultaneous analysis of multiple species. This hABC framework used in a multitaxa demographic context can increase our understanding of the impact of historical climate change by determining what proportion of the community responded in concert or independently and can be used with a wide variety of comparative phylogeographic data sets as biota-wide DNA barcoding data sets accumulate. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
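At its core, the hABC machinery described above rests on rejection sampling against observed summary statistics. A deliberately toy sketch of rejection ABC (a Poisson draw stands in for a coalescent simulator; all parameters and names are illustrative, not the paper's model):

```python
import math
import random

def simulate_summary(theta, n, rng):
    # Toy stand-in for a coalescent simulator: draw n "pairwise
    # difference" counts from Poisson(theta) and return their mean.
    # theta plays the role of a scaled expansion-time parameter.
    total = 0
    for _ in range(n):
        # Poisson draw via Knuth's algorithm (the stdlib has no
        # Poisson generator).
        L, k, p = math.exp(-theta), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        total += k
    return total / n

def abc_rejection(s_obs, n, prior_draw, eps, trials, rng):
    # Bare-bones rejection ABC: keep prior draws whose simulated
    # summary lands within eps of the observed summary.
    accepted = []
    for _ in range(trials):
        theta = prior_draw(rng)
        if abs(simulate_summary(theta, n, rng) - s_obs) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(7)
s_obs = 4.0  # observed mean "pairwise differences" (illustrative)
post = abc_rejection(s_obs, n=50, prior_draw=lambda r: r.uniform(0.0, 10.0),
                     eps=0.5, trials=2000, rng=rng)
post_mean = sum(post) / len(post)
```

The hierarchical version layers this: hyperparameters (proportion of co-expanding taxa, pulse time) are drawn first, per-taxon parameters are drawn conditional on them, and acceptance is based on summaries pooled across all taxa.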
NASA Astrophysics Data System (ADS)
Tsuji, T.; Nishizaka, N.; Onishi, K.
2017-12-01
Sedimentation processes during explosive volcanic eruptions can be constrained by detailed analysis of grain-size variation in tephra deposits. In particular, an accurate description of the amount of fine particles also has significant implications for the assessment of specific tephra hazards. Grain-size studies of single short-term eruptions are well suited to elucidating sedimentation processes because such eruptions are simple compared to long-lasting ones. The 2016 Aso Nakadake eruption, Japan represents an ideal case for the study of short-term eruptions because it was accurately documented. We therefore investigate the grain-size variation with distance from the vent and the sedimentological features of the deposit to discuss the sedimentation processes of the tephra fragments. The eruption produced a pyroclastic flow deposit and fallout tephra, which were distributed NE to ENE from the vent. The deposits between 4 and 20 km from the vent consist of fine-coated lapilli to coarse ash, ash pellets and mud droplets in ascending order. The samples are lapilli-bearing within 20 km of the vent, and those beyond 20 km mainly consist of ash particles. Detailed analyses of individual samples highlight a rapid decay of maximum and mean grain size from proximal to distal. The decay trend of maximum grain size is approximated by three segments of exponential curves with two breaks-in-slope at 10 and 40 km from the vent. Most of the sampled deposits are characterized by bimodal grain-size distributions, with the modes of the coarse subpopulation decreasing with distance from the vent and those of the fine subpopulation remaining mostly stable. The fine subpopulation has been interpreted as being mostly associated with size-selective sedimentation processes (e.g., particle aggregation), confirmed by the existence of fine-coated particles, ash pellets and mud droplets.
As the fine-coated particles generally have a higher terminal velocity than the individual constituent particles, they could be related to the rapid decrease of maximum grain size with distance from the vent in the proximal area. Further detailed grain-size analyses and theoretical studies could contribute to a quantitative understanding of the effect of fine ash aggregation on sedimentation processes.
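The three-segment exponential decay of maximum grain size described above can be sketched as a piecewise model in which ln(size) decreases linearly with distance and the slope changes at the two breaks-in-slope (10 and 40 km in the abstract). The initial size and segment slopes below are illustrative placeholders, not fitted values:

```python
import math

def max_grain_size(x_km, d0, slopes=(0.30, 0.08, 0.02), breaks=(10.0, 40.0)):
    # Piecewise-exponential thinning: ln(size) decays linearly with
    # distance, with slope changes at the break distances. Continuity
    # at the breaks is automatic because each segment starts from the
    # accumulated log-decay of the previous ones.
    b1, b2 = breaks
    s1, s2, s3 = slopes
    ln_size = math.log(d0) - s1 * min(x_km, b1)
    if x_km > b1:
        ln_size -= s2 * (min(x_km, b2) - b1)
    if x_km > b2:
        ln_size -= s3 * (x_km - b2)
    return math.exp(ln_size)

d0 = 50.0  # maximum grain size at the vent, mm (illustrative)
proximal, medial, distal = (max_grain_size(x, d0) for x in (5.0, 15.0, 45.0))
```

Fitting the three slopes and two break positions to measured maximum grain sizes (e.g. by least squares on ln(size)) would recover the segmented decay the abstract reports.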
Design and simulation study of the immunization Data Quality Audit (DQA).
Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen
2007-08-01
The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Because this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error of the estimate is unknown. An approximated standard error has been proposed, and the first goal of this simulation is to assess the accuracy of that standard error. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA under both the original and the increased sample sizes.
Buenfil-Rojas, A M; Álvarez-Legorreta, T; Cedeño-Vázquez, J R
2015-02-01
The aim of this study was to determine concentrations of heavy metals (cadmium [Cd] and mercury [Hg]) and metallothioneins (MTs) in blood plasma and caudal scutes of Morelet's crocodile (Crocodylus moreletii) from Rio Hondo, a river and natural border between Mexico and Belize. Three transects of the river (approximately 20 km each) were surveyed in September 2012 and April 2013, and samples were collected from 24 crocodiles from these areas. In blood plasma, Cd (7.6 ± 9.6 ng/ml) was detected in 69% of samples (n = 9); Hg (12.2 ± 9.2 ng/ml) was detected in 46% of samples (n = 6); and MTs (10,900 ± 9,400 ng/ml) were detected in 92% of samples (n = 12). In caudal scute samples, Cd (31.7 ± 39.4 ng/g) was detected in 84% of samples (n = 12) and Hg (374.1 ± 429.4 ng/g) in 83% of samples (n = 20). No MTs were detected in caudal scutes. Hg concentrations in scutes from the Rio Hondo were 2- to 5-fold greater than those previously reported in scutes from other localities in northern Belize. In blood plasma, a significant positive relationship between Hg and body size was observed. Mean concentrations of Cd and MTs across size classes suggest that MTs may be related to Cd exposure. This is the first report of MT presence in crocodile blood.
[Outpatient physiotherapy for patients with knee pain in northern Germany].
Karstens, S; Froböse, I; Weiler, S W
2014-10-01
Physiotherapy, in comparison to occupational therapy and speech therapy, is the most frequently prescribed treatment in Germany. Nationally, there is only scarce information available on indications, treatment approaches and the development of health condition throughout therapy. Existing work encompasses only small sample sizes and insufficient follow-up periods. The aim was to describe how patients with knee complaints are treated based on a physiotherapy referral and how their health condition changes during therapy. A descriptive-exploratory secondary analysis of data from a prospective multicentre observational study was conducted. The Bother Index of the Musculoskeletal Function Assessment Questionnaire (German, 16-item version) and an 11-step box scale for pain intensity were used as outcome criteria. Data from 160 patients, mean age 47.4 (SD 10.8), 51% female, approximately 51% with previous arthroscopy, were analysed. The response rate 6 months after therapy amounted to approximately 50%. The main therapy approaches were strengthening, stretching and manual therapy, combined with home exercises. Impairment in daily life as well as pain improved substantially during therapy (SES range 0.6-1.75). The study provided insights into physiotherapy services for patients with cartilage or meniscal problems, based on a sample size not previously available for Germany. After reviewing the international literature, it may be assumed that the clearly active character of the therapy and the combination of different treatment approaches are of relevance for the presented reduction of impairment in daily life. © Georg Thieme Verlag KG Stuttgart · New York.
Hierarchical complexity and the size limits of life.
Heim, Noel A; Payne, Jonathan L; Finnegan, Seth; Knope, Matthew L; Kowalewski, Michał; Lyons, S Kathleen; McShea, Daniel W; Novack-Gottshall, Philip M; Smith, Felisa A; Wang, Steve C
2017-06-28
Over the past 3.8 billion years, the maximum size of life has increased by approximately 18 orders of magnitude. Much of this increase is associated with two major evolutionary innovations: the evolution of eukaryotes from prokaryotic cells approximately 1.9 billion years ago (Ga), and multicellular life diversifying from unicellular ancestors approximately 0.6 Ga. However, the quantitative relationship between organismal size and structural complexity remains poorly documented. We assessed this relationship using a comprehensive dataset that includes organismal size and level of biological complexity for 11 172 extant genera. We find that the distributions of sizes within complexity levels are unimodal, whereas the aggregate distribution is multimodal. Moreover, both the mean size and the range of size occupied increase with each additional level of complexity. Increases in size range are non-symmetric: the maximum organismal size increases more than the minimum. The majority of the observed increase in organismal size over the history of life on the Earth is accounted for by two discrete jumps in complexity rather than evolutionary trends within levels of complexity. Our results provide quantitative support for an evolutionary expansion away from a minimal size constraint and suggest a fundamental rescaling of the constraints on minimal and maximal size as biological complexity increases. © 2017 The Author(s).
Athrey, Giridhar; Barr, Kelly R; Lance, Richard F; Leberg, Paul L
2012-01-01
Anthropogenic alterations in the natural environment can be a potent evolutionary force. For species that have specific habitat requirements, habitat loss can result in substantial genetic effects, potentially impeding future adaptability and evolution. The endangered black-capped vireo (Vireo atricapilla) suffered a substantial contraction of breeding habitat and population size during much of the 20th century. In a previous study, we reported significant differentiation between remnant populations, but failed to recover a strong genetic signal of bottlenecks. In this study, we used a combination of historical and contemporary sampling from Oklahoma and Texas to (i) determine whether population structure and genetic diversity have changed over time and (ii) evaluate alternate demographic hypotheses using approximate Bayesian computation (ABC). We found lower genetic diversity and increased differentiation in contemporary samples compared to historical samples, indicating nontrivial impacts of fragmentation. ABC analysis suggests a bottleneck having occurred in the early part of the 20th century, resulting in an order-of-magnitude decline in effective population size. Genetic monitoring with temporally spaced samples, such as used in this study, can be highly informative for assessing the genetic impacts of anthropogenic fragmentation on threatened or endangered species, as well as revealing the dynamics of small populations over time. PMID:23028396
Simulation of possible regolith optical alteration effects on carbonaceous chondrite meteorites
NASA Technical Reports Server (NTRS)
Clark, Beth E.; Fanale, Fraser P.; Robinson, Mark S.
1993-01-01
As the spectral reflectance search continues for links between meteorites and their parent body asteroids, the effects of optical surface alteration processes need to be considered. We present the results of an experimental simulation of the melting and recrystallization that occurs to a carbonaceous chondrite meteorite regolith powder upon heating. As done for the ordinary chondrite meteorites, we show the effects of possible parent-body regolith alteration processes on reflectance spectra of carbonaceous chondrites (CC's). For this study, six CC's of different mineralogical classes were obtained from the Antarctic Meteorite Collection: two CM meteorites, two CO meteorites, one CK, and one CV. Each sample was ground with a ceramic mortar and pestle to powders with maximum grain sizes of 180 and 90 microns. The reflectance spectra of these powders were measured at RELAB (Brown University) from 0.3 to 2.5 microns. Following comminution, the 90 micron grain size was melted in a nitrogen controlled-atmosphere fusion furnace at an approximate temperature of 1700 C. The fused sample was immediately held above a flow of nitrogen at 0 C for quenching. Following melting and recrystallization, the samples were reground to powders, and the reflectance spectra were remeasured. The effects on spectral reflectance for a sample of the CM carbonaceous chondrite called Murchison are shown.
Reduction Behavior of Assmang and Comilog ore in the SiMn Process
NASA Astrophysics Data System (ADS)
Kim, Pyunghwa Peace; Holtan, Joakim; Tangstad, Merete
The reduction behavior of raw materials from Assmang- and Comilog-based charges was experimentally investigated with CO gas up to 1600 °C. Quartz, HC FeMn slag or limestone was added to Assmang or Comilog according to the SiMn production charge, and mass loss results were obtained using a TGA furnace. The results showed that particle size, type of manganese ore and mixture composition have a close relationship with the reduction behavior of the raw materials during MnO and SiO2 reduction. The influence of particle size on mass loss was apparent when Assmang or Comilog was mixed with only coke (FeMn), while it became insignificant when quartz and HC FeMn slag (SiMn) were added. This implied that quartz and HC FeMn slag favored incipient slag formation regardless of particle size, which explains the similar mass loss tendencies of SiMn charge samples between 1200 °C and 1500 °C, contrary to FeMn charge samples, where different particle sizes showed significant differences in mass loss. Also, while FeMn charge samples showed progressive mass loss, SiMn charge samples showed diminutive mass loss until 1500 °C. However, rapid mass losses were observed with SiMn charge samples above 1500 °C, and these occurred at different temperatures, implying rapid reduction of MnO and SiO2; the type of ore and the addition of HC FeMn slag have a significant influence in determining these temperatures. The temperatures observed for the rapid mass loss were approximately 1503 °C (quartz and HC FeMn slag addition in Assmang), 1543 °C (quartz addition in Assmang) and 1580-1587 °C (quartz and limestone addition in Comilog), respectively. These temperatures also indicate possible SiMn production at process temperatures lower than 1550 °C.
A universal approximation of grain size from images of noncohesive sediment
NASA Astrophysics Data System (ADS)
Buscombe, D.; Rubin, D. M.; Warrick, J. A.
2010-06-01
The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a "universal approximation" because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.
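A one-dimensional analogue of the spectral idea above (the paper works with two-dimensional image spectra; this sketch only shows how a dominant wavelength, standing in for grain size, can be read off a power spectrum given nothing but the pixel scale):

```python
import math

def dft_power(signal):
    # Naive discrete Fourier transform power spectrum (O(N^2));
    # fine for a short illustrative signal.
    N = len(signal)
    power = []
    for k in range(N // 2 + 1):
        re = sum(signal[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        power.append(re * re + im * im)
    return power

# Synthetic 1-D "sediment transect": intensity varies with a
# characteristic length of 16 pixels, a stand-in for grain size.
N, grain = 256, 16
signal = [math.sin(2 * math.pi * n / grain) for n in range(N)]
power = dft_power(signal)
k_peak = max(range(1, len(power)), key=lambda k: power[k])  # skip DC term
estimated_grain = N / k_peak  # dominant wavelength, in pixels
```

Multiplying the recovered wavelength in pixels by the known spatial resolution of the image converts it to a physical grain size, which is the only operator input the method requires.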
NASA Astrophysics Data System (ADS)
Miura, H.; Kobayashi, T.; Kobayashi, M.
2014-08-01
Cu-18.2Zn-1.5Si-0.25Fe (mass%) alloy was heavily cold rolled. An ultrafine-grained (UFGed) structure, containing a mixture of lamellae and mechanical twins, was easily and homogeneously formed. The average grain size was approximately 100 nm. The as-rolled sample showed quite high ultimate tensile strength (UTS), over 1 GPa, higher than that obtained by multi-directional forging. When the samples were annealed at relatively low temperatures between 553 K and 653 K, they showed slight hardening followed by large softening due to the occurrence of static recrystallization (SRX). Annealing of the UFGed structure at a relatively low temperature of around 0.4 Tm caused extensive SRX that, in turn, induced an ultrafine recrystallized grain structure. The grain size of the recrystallized sample was as fine as 200 nm. Although annealing recovered ductility while UTS gradually decreased, a UTS over 1 GPa with a ductility of 15% was attained. The recrystallized grains mainly contained ultrafine annealing twins. Therefore, a UFGed structure and superior mechanical properties could be achieved by a simple process of cold rolling, i.e., without severe plastic deformation.
Chen, Yumin; Fritz, Ronald D; Kock, Lindsay; Garg, Dinesh; Davis, R Mark; Kasturi, Prabhakar
2018-02-01
A step-wise, 'test-all-positive-gluten' analytical methodology has been developed and verified to assess kernel-based gluten contamination (i.e., wheat, barley and rye kernels) during gluten-free (GF) oat production. It targets GF claim compliance at the serving-size level (a pouch, or approximately 40-50 g). Oat groats are collected from GF oat production following a robust attribute-based sampling plan, then split into 75-g subsamples and ground. The R-Biopharm R5 sandwich ELISA R7001 is used to analyze the first 15-g portion of each ground sample. A >20-ppm result disqualifies the production lot, while a >5 to <20-ppm result triggers complete analysis of the remaining 60 g of ground sample, analyzed in 15-g portions. If all five 15-g test results are <20 ppm, and their average is <10.67 ppm (since a 20-ppm contaminant in 40 g of oats would dilute to 10.67 ppm in 75 g), the lot is passed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
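The 10.67-ppm pass/fail average follows from a simple dilution computation, sketched here:

```python
def serving_equivalent_ppm(limit_ppm, serving_g, sample_g):
    # Gluten mass allowed at the regulatory limit in one serving,
    # re-expressed as a concentration in the larger analytical sample.
    # ppm is equivalent to micrograms of gluten per gram of oats.
    gluten_ug = limit_ppm * serving_g
    return gluten_ug / sample_g

# 20 ppm in a 40-g serving = 800 ug gluten; spread over a 75-g
# ground subsample this is 800 / 75 ~ 10.67 ppm.
threshold = serving_equivalent_ppm(20.0, 40.0, 75.0)
```

This reproduces the <10.67-ppm average criterion quoted in the abstract: a 75-g subsample averaging below this value cannot contain more than one serving's worth of gluten at the 20-ppm limit.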
Shanmuga Doss, Sreeja; Bhatt, Nirav Pravinbhai; Jayaraman, Guhan
2017-08-15
There is an unreasonably high variation in literature reports of the molecular weight of hyaluronic acid (HA) estimated using conventional size exclusion chromatography (SEC). This variation is most likely due to errors in estimation. Working with commercially available HA molecular weight standards, this work examines the extent of error in molecular weight estimation due to two factors: the use of non-HA-based calibration and the concentration of sample injected into the SEC column. We develop a multivariate regression correlation to correct for the concentration effect. Our analysis showed that SEC calibration based on non-HA standards such as polyethylene oxide and pullulan led to approximately 2- and 10-fold overestimation, respectively, when compared to HA-based calibration. Further, we found that injected sample concentration has an effect on molecular weight estimation. Even at 1 g/l injected sample concentration, HA molecular weight standards of 0.7 and 1.64 MDa showed appreciable underestimation of 11-24%. The multivariate correlation developed was found to reduce the error in estimations at 1 g/l to <4%. The correlation was also successfully applied to accurately estimate the molecular weight of HA produced by a recombinant Lactococcus lactis fermentation. Copyright © 2017 Elsevier B.V. All rights reserved.
Organic solvent desorption from two tegafur polymorphs.
Bobrovs, Raitis; Actiņš, Andris
2013-11-30
The desorption behavior of 8 different solvents from α and β tegafur (5-fluoro-1-(tetrahydro-2-furyl)uracil) has been studied in this work. Solvent desorption from samples stored at 95% and 50% relative solvent vapor pressure was studied under isothermal conditions at 30 °C. The results of this study demonstrated that the solvent desorption rate did not differ significantly between the two phases; that solvent desorption in all cases occurred faster from samples with the largest particle size; and that solvent desorption in most cases occurred in two steps. Structural differences between the phases and their surface properties had little influence on the solvent desorption rates, because the main factors affecting desorption rate were sample particle size and sample morphology. Inspection of the structure packing showed that the solvent desorption rate and the amount of solvent adsorbed were mainly affected by the surface molecule arrangement and the ability to form short contacts between solvent molecule electron donor groups and freely accessible tegafur tetrahydrofuran group hydrogens, as well as between solvent molecule proton donor groups and fluorouracil ring carbonyl and fluoro groups. Solvent desorption rates of acetone, acetonitrile, ethyl acetate and tetrahydrofuran multilayers from α and β tegafur were approximately 30 times higher than those of solvent monolayers. Scanning electron micrographs showed that sample storage in a solvent vapor atmosphere promotes recrystallization of small tegafur particles into larger ones. Copyright © 2013 Elsevier B.V. All rights reserved.
VLBI observations of the nucleus of Centaurus A
NASA Technical Reports Server (NTRS)
Preston, R. A.; Wehrle, A. E.; Morabito, D. D.; Jauncey, D. L.; Batty, M. J.; Haynes, R. F.; Wright, A. E.; Nicolson, G. D.
1983-01-01
VLBI observations of the nucleus of Centaurus A made at 2.3 GHz on baselines with minimum fringe spacings of 0.15 and 0.0027 arcsec are presented. Results show that the nuclear component is elongated with a maximum extent of approximately 0.05 arcsec, which is equivalent to a size of approximately 1 pc at the 5 Mpc distance of Centaurus A. The position angle of the nucleus is found to be 30 ± 20 degrees, while the ratio of nuclear jet length to width is less than or approximately equal to 20. The nuclear flux density is determined to be 6.8 Jy, while no core component is found with an extent less than or approximately equal to 0.001 arcsec (less than or approximately equal to 0.02 pc) with a flux density of greater than or approximately equal to 20 mJy. A model of the Centaurus A nucleus composed of at least two components is developed on the basis of these results in conjunction with earlier VLBI and spectral data. The first component is an elongated source of approximately 0.05 arcsec (approximately 1 pc) size which contains most of the 2.3 GHz nuclear flux, while the second component is a source of approximately 0.0005 arcsec (approximately 0.01 pc) size which is nearly completely self-absorbed at 2.3 GHz but strengthens at higher frequencies.
Effects of salt loading and flow blockage on the WIPP shrouded probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, S.; Ortiz, C.A.; McFarland, A.R.
1993-08-01
The shrouded probes at the WIPP site operate in a salt aerosol environment that can cause a buildup of salt deposits on exposed surfaces of the probes that, in turn, could produce changes in the sampling performance of the probes. At Station A, three probes had been operated for a period of approximately 2 1/2 years when they were inspected with a remote television camera. There were visible deposits of unknown thickness on the probes, so WIPP removed the probes for inspection and cleanup. Measurements were made on the probes and they showed the buildups to be approximately 2.5 mm thick on the most critical dimension of a shrouded probe, which is the inside diameter of the inner probe. For reference, the diameter of a clean probe is 30 mm. The sampling performance of this particular shrouded probe had been previously evaluated in a wind tunnel at the Aerosol Technology Laboratory (ATL) of Texas A&M University for two free stream velocities (14 and 21 m/s) and three particle sizes (5, 10 and 15 µm AED).
Laser Time-of-Flight Mass Spectrometry for Space
NASA Technical Reports Server (NTRS)
Brinckerhoff, W. B.; Managadze, G. G.; McEntire, R. W.; Cheng, A. F.; Green, W. J.
2000-01-01
A miniature reflectron time-of-flight mass spectrometer for in situ planetary surface analysis is described. The laser ablation mass spectrometer (LAMS) measures the regolith's elemental and isotopic composition without high-voltage source extraction or sample preparation. The compact size (< 2 x 10(exp 3) cubic cm) and low mass (approximately 2 kg) of LAMS, due to its fully coaxial design and two-stage reflectron, fall within the strict resource limitations of landed science missions to solar system bodies. A short-pulse laser focused to a spot with a diameter of approximately 30-50 micrometers is used to obtain microscopic surface samples. Assisted by a microimager, LAMS can interactively select and analyze a range of compositional regions (with lateral motion) and with repeated pulses can access unweathered, subsurface materials. The mass resolution is calibrated to distinguish isotopic peaks at unit masses, and detection limits reach a few ppm for resolved species. The design and calibration of a prototype LAMS device are described, including the development of preliminary relative sensitivity coefficients for major element bulk abundance measurements.
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the system (the size of the parameter space). Hybrid Monte Carlo (HMC) offers an important MCMC approach for dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified (shadow) Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
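The HMC move that SHMC builds on can be illustrated with a minimal sketch on a toy 1-D standard normal target. This is plain HMC only; the SHMC modification (accepting with a shadow Hamiltonian and reweighting) is not reproduced here, and the step size, trajectory length, and function names are all illustrative:

```python
import math
import random

def hmc_sample(n_samples, eps=0.2, n_leap=10, seed=1):
    # HMC for a 1-D standard normal target: potential U(q) = q^2/2,
    # grad U = q, kinetic energy K(p) = p^2/2.
    rng = random.Random(seed)
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)          # fresh momentum each move
        q_new, p_new = q, p
        # leapfrog integration of Hamilton's equations (the "MD step")
        p_new -= 0.5 * eps * q_new
        for step in range(n_leap):
            q_new += eps * p_new
            if step != n_leap - 1:
                p_new -= eps * q_new
        p_new -= 0.5 * eps * q_new
        # Metropolis accept/reject on the change in total energy
        h_old = 0.5 * q * q + 0.5 * p * p
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples
```

The acceptance test is where the time-step sensitivity enters: a larger `eps` makes the leapfrog energy error `h_new - h_old` grow, driving acceptance down, which is the limitation SHMC addresses by evaluating acceptance against a shadow Hamiltonian that the leapfrog integrator conserves more accurately.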
Srivastava, Arun; Jain, V K
2007-06-01
A study of the atmospheric particulate size distribution of total suspended particulate matter (TSPM) and associated heavy metal concentrations has been carried out for the city of Delhi. Urban particles were collected using a five-stage impactor at six sites in three different seasons, viz. winter, summer and monsoon, in the year 2001. Five samples from each site in each season were collected. Each sample (filter paper) was extracted with a mixture of nitric acid, hydrochloric acid and hydrofluoric acid. The acid solutions of the samples were analysed in five particle fractions by atomic absorption spectrometry (AAS). The impactor stage fractionation of particles shows that a major portion of the TSPM concentration is in the form of PM0.7 (i.e. <0.7 microm). Similarly, most of the metal mass, viz. Mn, Cr, Cd, Pb, Ni and Fe, is also concentrated in the PM0.7 mode. The only exceptions are the size distributions pertaining to Cu and Ca. Although Cu is most abundant in the PM0.7 mode, its presence in the size intervals 5.4-1.6 microm and 1.6-0.7 microm is also significant, whilst in the case of Ca there is no definite pattern in its distribution with particle size. The average PM10.9 (i.e. <10.9 microm) concentrations are approximately 90.2%+/-4.5%, 81.4%+/-1.4% and 86.4%+/-9.6% of TSPM for the winter, summer and monsoon seasons, respectively. Source apportionment reveals that there are two sources of TSPM and PM10.9, while three and four sources were observed for PM1.6 (i.e. <1.6 microm) and PM0.7, respectively. Results of regression analyses show definite correlations between PM10.9 and the other fine size fractions, suggesting PM10.9 may adequately act as a surrogate for both PM1.6 and PM0.7, while PM1.6 may adequately act as a surrogate for PM0.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
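Ingredient (3), shifting work from a high-fidelity model to a correlated cheap surrogate, can be illustrated with a toy two-level Monte Carlo estimator. The functions here are stand-ins chosen for the sketch (exp(x) as the "high-fidelity" output and its quadratic Taylor polynomial as the "reduced basis" surrogate), not the paper's HDG/RB models:

```python
import math
import random

def two_level_estimate(n_coarse=200_000, n_fine=2_000, seed=7):
    # Telescoping identity E[f] = E[g] + E[f - g]: sample the cheap
    # surrogate g heavily, and the expensive correction f - g only a
    # few times, relying on the strong correlation between f and g to
    # keep the correction's variance (and hence n_fine) small.
    rng = random.Random(seed)
    f = math.exp                           # stand-in high-fidelity model
    g = lambda x: 1.0 + x + 0.5 * x * x    # stand-in coarse surrogate
    coarse = sum(g(rng.gauss(0.0, 1.0)) for _ in range(n_coarse)) / n_coarse
    corr = 0.0
    for _ in range(n_fine):
        x = rng.gauss(0.0, 1.0)
        corr += f(x) - g(x)
    return coarse + corr / n_fine
```

For X ~ N(0,1) the true value is E[exp(X)] = exp(1/2) ≈ 1.649; the estimator reaches it with only 2,000 evaluations of the "expensive" model, which is the cost-shifting effect the abstract describes.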
Oxychlorine Species in Gale Crater and Broader Implications for Mars
NASA Technical Reports Server (NTRS)
Ming, Douglas W.; Sutter, Brad; Morris, Richard V.; Clark, B. C.; Mahaffy, P. H.; Archilles, C.; Wray, J. J.; Fairen, A. G.; Gellert, Ralf; Yen, Albert;
2017-01-01
Of 15 samples analyzed to date, the Sample Analysis at Mars (SAM) instrument on the Mars Science Laboratory (MSL) has detected oxychlorine compounds (perchlorate or chlorate) in 12 samples. The presence of oxychlorine species is inferred from the release of oxygen at temperatures less than 600 degC and of HCl between 350 and 850 degC when a sample is heated to 850 degC. The O2 release temperature varies with sample, likely caused by different cations, grain size differences, or catalytic effects of other minerals. In the oxychlorine-containing samples, perchlorate abundances range from 0.06 +/- 0.03 to 1.15 +/- 0.5 wt% Cl2O7 equivalent. Comparing these results to the elemental Cl concentration measured by the Alpha Particle X-ray Spectrometer (APXS) instrument, oxychlorine species account for 5-40% of the total Cl present. The variation in oxychlorine abundance has implications for their production and preservation over time. For example, the John Klein (JK) and Cumberland (CB) samples were acquired within a few meters of each other, yet CB contained approximately 1.2 wt% Cl2O7 equivalent while JK had approximately 0.1 wt%. One difference between the two samples is that JK has a large number of veins visible in the drill hole wall, indicating more post-deposition alteration and removal. Finally, despite Cl concentrations similar to previous samples, the last three Murray formation samples (Oudam, Marimba, and Quela) had no detectable oxygen released during pyrolysis. This could be a result of oxygen reacting with other species in the sample during pyrolysis. Lab work has shown this is likely to have occurred in SAM, but it is unlikely to have consumed all the O2 released. Another explanation is that the Cl is present as chlorides, which is consistent with data from the ChemCam (Chemical Camera) and CheMin (Chemistry and Mineralogy) instruments on MSL.
For example, the Quela sample has approximately 1 wt% elemental Cl detected by APXS, had no detectable O2 released, and halite (NaCl) has been tentatively identified in CheMin X-ray diffraction data. These data show that oxychlorines are likely globally distributed on Mars, but the distribution is heterogeneous depending on the perchlorate formation mechanism (production rate), burial, and subsequent diagenesis.
A new evaluation method of electron optical performance of high beam current probe forming systems.
Fujita, Shin; Shimoyama, Hiroshi
2005-10-01
A new numerical simulation method is presented for the electron optical property analysis of probe forming systems with point cathode guns, such as cold field emitters and Schottky emitters. It has long been recognized that the gun aberrations are important parameters to be considered, since the intrinsically high brightness of the point cathode gun is reduced by its spherical aberration. The simulation method can evaluate the 'threshold beam current I(th)' above which the apparent brightness starts to decrease from the intrinsic value. It is found that the threshold depends on the 'electron gun focal length' as well as on the spherical aberration of the gun. Formulas are presented to estimate the brightness reduction as a function of the beam current. The gun brightness reduction must be included when the probe property (the relation between the beam current I(b) and the probe size on the sample, d) of the entire electron optical column is evaluated. Formulas that explicitly take the gun aberrations into account are presented. It is shown that the probe property curve consists of three segments in order of increasing beam current: (i) the constant probe size region; (ii) the brightness limited region, where the probe size increases as d approximately I(b)(3/8); and (iii) the angular current intensity limited region, in which the beam size increases rapidly as d approximately I(b)(3/2). Some strategies are suggested to increase the threshold beam current and to extend the effective beam current range of the point cathode gun into the microampere regime.
3D-HST + CANDELS: the Evolution of the Galaxy Size-mass Distribution Since Z=3
NASA Technical Reports Server (NTRS)
VanDerWel, A.; Franx, M.; vanDokkum, P. G.; Skelton, R. E.; Momcheva, I. G.; Whitaker, K. E.; Brammer, G. B.; Bell, E. F.; Rix, H.-W.; Wuyts, S.;
2014-01-01
Spectroscopic and photometric redshifts, stellar mass estimates, and rest-frame colors from the 3D-HST survey are combined with structural parameter measurements from CANDELS imaging to determine the galaxy size-mass distribution over the redshift (z) range 0 < z < 3. Separating early- and late-type galaxies on the basis of star-formation activity, we confirm that early-type galaxies are on average smaller than late-type galaxies at all redshifts, and find a significantly different rate of average size evolution at fixed galaxy mass, with fast evolution for the early-type population, effective radius in proportion to (1 + z) (sup -1.48), and moderate evolution for the late-type population, effective radius in proportion to (1 + z) (sup -0.75). The large sample size and dynamic range in both galaxy mass and redshift, in combination with the high fidelity of our measurements due to the extensive use of spectroscopic data, not only fortify previous results, but also enable us to probe beyond simple average galaxy size measurements. At all redshifts the slope of the size-mass relation is shallow, effective radius in proportion to stellar mass (sup 0.22), for late-type galaxies with stellar mass > 3 x 10 (sup 9) solar masses, and steep, effective radius in proportion to stellar mass (sup 0.75), for early-type galaxies with stellar mass > 2 x 10 (sup 10) solar masses. The intrinsic scatter is approximately or less than 0.2 dex for all galaxy types and redshifts. For late-type galaxies, the logarithmic size distribution is not symmetric, but skewed toward small sizes: at all redshifts and masses a tail of small late-type galaxies exists that overlaps in size with the early-type galaxy population. The number density of massive (approximately 10 (sup 11) solar masses), compact (effective radius less than 2 kiloparsecs) early-type galaxies increases from z = 3 to z = 1.5-2 and then strongly decreases at later cosmic times.
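The quoted evolution rates translate directly into size ratios; a minimal sketch (the redshift value below is chosen for illustration):

```python
def size_evolution_factor(z, alpha):
    # Effective radius scales as R_eff ∝ (1 + z)^alpha at fixed
    # stellar mass, so this returns the mean size at redshift z
    # relative to z = 0.
    return (1.0 + z) ** alpha

# At z = 2, early types (alpha = -1.48) are about 20% of their z = 0
# size, while late types (alpha = -0.75) are about 44%.
early_factor = size_evolution_factor(2.0, -1.48)
late_factor = size_evolution_factor(2.0, -0.75)
```

The steeper early-type exponent is what drives the factor-of-two difference in relative growth between the two populations since z = 2.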
Controlling the size of alginate gel beads by use of a high electrostatic potential.
Klokk, T I; Melvik, J E
2002-01-01
The effect of several parameters on the size of alginate beads produced by use of an electrostatic potential bead generator was examined. Parameters studied included needle diameter, electrostatic potential, alginate solution flow rate, gelling ion concentration, and alginate concentration and viscosity, as well as alginate composition. Bead size was found to decrease with increasing electrostatic potential, but only down to a certain level. Minimum bead size was reached between 2 and 4 kV/cm for the needles tested. The smallest alginate beads produced (using a needle with an inner diameter of 0.18 mm) had a mean diameter of approximately 300 microm. Bead size was also found to be dependent upon the flow rate of the fed alginate solution. Increasing the gelling ion concentration resulted in a moderate decrease in bead size. The concentration and viscosity of the alginate solution also had an effect on bead size, as demonstrated by an increased bead diameter when the concentration or viscosity was increased. This effect was primarily an effect of the viscosity properties of the solution, which led to changes in the rate of droplet formation in the bead generator. Lowering the flow rate of the alginate solution could partly compensate for the increase in bead size with increased viscosity. For a constant droplet size, alginates with a low G block content (F(GG) approximately 0.20) resulted in approximately 30% smaller beads than alginates with a high G block content (F(GG) approximately 0.60). This is explained as a result of differences in the shrinking properties of the beads.
NASA Astrophysics Data System (ADS)
Wang, Xun; Ghidaoui, Mohamed S.
2018-07-01
This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power of two and are thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approximates the CRLB, proving that the ML scheme belongs to the class of best unbiased estimators of leak localization methods.
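The resolution claim can be made concrete: with λmin = a / f_max (wave speed divided by the highest probing frequency), the two limits follow directly. The wave speed and bandwidth values below are illustrative assumptions, not figures from the paper:

```python
def leak_resolution_limits(wave_speed, max_freq):
    # Minimum probing wavelength and the two resolvable-distance
    # limits: ~0.15*lambda_min for the ML scheme (super-resolution)
    # versus the classical 0.5*lambda_min sampling-theorem limit.
    lam_min = wave_speed / max_freq
    return 0.15 * lam_min, 0.5 * lam_min

# e.g. assumed a = 1200 m/s, f_max = 100 Hz gives lambda_min = 12 m,
# so the ML scheme resolves leaks ~1.8 m apart versus ~6 m classically.
ml_limit, classical_limit = leak_resolution_limits(1200.0, 100.0)
```

The factor-of-three-plus gap between the two limits is what qualifies the ML scheme as super-resolution in the abstract's terms.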
NASA Astrophysics Data System (ADS)
Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.
2015-08-01
In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.
Elemental mapping and microimaging by x-ray capillary optics.
Hampai, D; Dabagov, S B; Cappuccio, G; Longoni, A; Frizzi, T; Cibin, G; Guglielmotti, V; Sala, M
2008-12-01
Recently, many experiments have highlighted the advantage of using polycapillary optics for x-ray fluorescence studies. We have developed a special confocal scheme for micro x-ray fluorescence measurements that enables us to obtain not only elemental mapping of the sample but also simultaneously its own x-ray imaging. We have designed the prototype of a compact x-ray spectrometer characterized by a spatial resolution of less than 100 microm for fluorescence and less than 10 microm for imaging. A couple of polycapillary lenses in a confocal configuration together with a silicon drift detector allow elemental studies of extended samples (approximately 3 mm) to be performed, while a CCD camera makes it possible to record an image of the same samples with 6 microm spatial resolution, which is limited only by the pixel size of the camera. By inserting a compound refractive lens between the sample and the CCD camera, we hope to develop an x-ray microscope for more enlarged images of the samples under test.
Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.
Kubitzki, Marcus B; de Groot, Bert L
2007-06-15
Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
Geng, Fengxia; Matsushita, Yoshitaka; Ma, Renzhi; Xin, Hao; Tanaka, Masahiko; Izumi, Fujio; Iyi, Nobuo; Sasaki, Takayoshi
2008-12-03
The synthesis process and crystal structure evolution for a family of stoichiometric layered rare-earth hydroxides with general formula Ln(8)(OH)(20)Cl(4) x nH(2)O (Ln = Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, and Y; n approximately 6-7) are described. Synthesis was accomplished through homogeneous precipitation of LnCl(3) x xH(2)O with hexamethylenetetramine to yield a single-phase product for Sm-Er and Y. Some minor coexisting phases were observed for Nd(3+) and Tm(3+), indicating a size limit for this layered series. Light lanthanides (Nd, Sm, Eu) crystallized into rectangular platelets, whereas platelets of heavy lanthanides from Gd tended to be of quasi-hexagonal morphology. Rietveld profile analysis revealed that all phases were isostructural in an orthorhombic layered structure featuring a positively charged layer, [Ln(8)(OH)(20)(H(2)O)(n)](4+), and interlayer charge-balancing Cl(-) ions. In-plane lattice parameters a and b decreased nearly linearly with a decrease in the rare-earth cation size. The interlamellar distance, c, was almost constant (approximately 8.70 A) for rare-earth elements Nd(3+), Sm(3+), and Eu(3+), but it suddenly decreased to approximately 8.45 A for Tb(3+), Dy(3+), Ho(3+), and Er(3+), which can be ascribed to two different degrees of hydration. Nd(3+) typically adopted a phase with high hydration, whereas a low-hydration phase was preferred for Tb(3+), Dy(3+), Ho(3+), Er(3+), and Tm(3+). Sm(3+), Eu(3+), and Gd(3+) samples were sensitive to humidity conditions because high- and low-hydration phases were interconvertible at a critical humidity of 10%, 20%, and 50%, respectively, as supported by both X-ray diffraction and gravimetry as a function of the relative humidity. In the phase conversion process, interlayer expansion or contraction of approximately 0.2 A also occurred as a possible consequence of absorption/desorption of H(2)O molecules. The hydration difference was also evidenced by refinement results. 
The number of coordinated water molecules per formula weight, n, changed from 6.6 for the high-hydration Gd sample to 6.0 for the low-hydration Gd sample. Also, the hydration number usually decreased with increasing atomic number; e.g., n = 7.4, 6.3, 7.2, and 6.6 for high-hydration Nd, Sm, Eu, and Gd, and n = 6.0, 5.8, 5.6, 5.4, and 4.9 for low-hydration Gd, Tb, Dy, Ho, and Er. The variation in the average Ln-O bond length with decreasing size of the lanthanide ions is also discussed. This family of layered lanthanide compounds highlights a novel chemistry of interplay between crystal structure stability and coordination geometry with water molecules.
Design and test of porous-tungsten mercury vaporizers
NASA Technical Reports Server (NTRS)
Kerslake, W. R.
1972-01-01
Future use of large size Kaufman thrusters and thruster arrays will impose new design requirements for porous plug type vaporizers. Larger flow rate coupled with smaller pores to prevent liquid intrusion will be desired. The results of testing samples of porous tungsten for flow rate, liquid intrusion pressure level, and mechanical strength are presented. Nitrogen gas was used in addition to mercury flow for approximate calibration. Liquid intrusion pressure levels will require that flight thruster systems with long feed lines have some way (a valve) to restrict dynamic line pressures during launch.
Aggregates and Superaggregates of Soot with Four Distinct Fractal Morphologies
NASA Technical Reports Server (NTRS)
Sorensen, C. M.; Kim, W.; Fry, D.; Chakrabarti, A.
2004-01-01
Soot formed in laminar diffusion flames of heavily sooting fuels evolves through four distinct growth stages which give rise to four distinct aggregate fractal morphologies. These results were inferred from large and small angle static light scattering from the flames, microphotography of the flames, and analysis of soot sampled from the flames. The growth stages occur approximately over four successive orders of magnitude in aggregate size. Comparison to computer simulations suggests that these four growth stages involve either diffusion limited cluster aggregation or percolation in either three or two dimensions.
Oxygen diffusion in nanocrystalline yttria-stabilized zirconia: the effect of grain boundaries.
De Souza, Roger A; Pietrowski, Martha J; Anselmi-Tamburini, Umberto; Kim, Sangtae; Munir, Zuhair A; Martin, Manfred
2008-04-21
The transport of oxygen in dense samples of yttria-stabilized zirconia (YSZ), of average grain size d approximately 50 nm, has been studied by means of 18O/16O exchange annealing and secondary ion mass spectrometry (SIMS). Oxygen diffusion coefficients (D*) and oxygen surface exchange coefficients (k*) were measured for temperatures 673
NASA Technical Reports Server (NTRS)
Okoro, Chika L.
2004-01-01
GRCop-84 was developed to meet the mechanical and thermal property requirements for advanced regeneratively cooled rocket engine main combustion chamber liners. It is a ternary Cu-Cr-Nb alloy having approximately 8 at% Cr and 4 at% Nb. The chromium and niobium constituents combine to form 14 vol% Cr2Nb, the strengthening phase. The alloy is made by producing GRCop-84 powder through gas atomization and consolidating the powder using extrusion, hot isostatic pressing (HIP) or vacuum plasma spraying (VPS). GRCop-84 has been selected by Rocketdyne, Pratt & Whitney and Aerojet for use in their next generation of rocket engines. GRCop-84 demonstrates favorable mechanical and thermal properties at elevated temperatures. Compared to NARloy-Z, the material currently used in the Space Shuttle, GRCop-84 has approximately twice the yield strength, 10-1000 times the creep life, and 1.5-2.5 times the low cycle fatigue life. The thermal expansion of GRCop-84 is 7.5-15% less than that of NARloy-Z, which minimizes thermally induced stresses. The thermal conductivity of the two alloys is comparable at low temperature, but NARloy-Z has a 20-50 W/mK thermal conductivity advantage at typical rocket engine hot wall temperatures. GRCop-84 is also much more microstructurally stable than NARloy-Z, which translates into better long-term stability of mechanical properties. Previous research into metal alloys fabricated by means of powder metallurgy (PM) has demonstrated that initial powder size can affect the microstructural development and mechanical properties of such materials. Grain size, strength, ductility, size of second phases, etc., have all been shown to vary with starting powder size in PM alloys. This work focuses on characterizing the effect of varying starting powder size on the microstructural evolution and mechanical properties of as-extruded GRCop-84.
Tensile tests and constant load creep tests were performed on extrusions of four powder meshes: +140 mesh (greater than 105 micron powder size), -140 mesh (less than or equal to 105 microns), -140/+270 mesh (53-105 microns), and -270 mesh (less than or equal to 53 microns). Samples were tested in tension at room temperature and at 500 C (932 F). Creep tests were performed under vacuum at 500 C using a stress of 111 MPa (16.1 ksi). The fracture surfaces of selected samples from both tests were studied using a Scanning Electron Microscope (SEM). The as-extruded materials were also studied, using both optical microscopy and SEM analysis, to characterize changes within the microstructure.
An asymptotic analysis of the logrank test.
Strawderman, R L
1997-01-01
Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
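A quick numerical check of point (i) is straightforward. The sketch below is a minimal numpy illustration, not the authors' asymptotic expansion: it computes the uncensored two-sample logrank statistic and simulates its null rejection rate under balanced and imbalanced allocation. The sample sizes and exponential survival times are hypothetical choices for the demonstration.

```python
import numpy as np

def logrank_z(t1, t2):
    """Two-sample logrank statistic for uncensored data; ~N(0,1) under H0."""
    times = np.unique(np.concatenate([t1, t2]))
    o_minus_e, var = 0.0, 0.0
    for t in times:
        n1 = np.sum(t1 >= t)                 # at risk in group 1
        n = n1 + np.sum(t2 >= t)             # total at risk
        d1 = np.sum(t1 == t)                 # events in group 1 at time t
        d = d1 + np.sum(t2 == t)             # total events at time t
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

rng = np.random.default_rng(0)
for n1, n2 in [(50, 50), (10, 90)]:          # balanced vs. imbalanced arms
    z = [logrank_z(rng.exponential(1, n1), rng.exponential(1, n2))
         for _ in range(500)]
    # Under H0 the rejection rate at |z| > 1.96 should be near 0.05; treatment
    # group imbalance is what degrades the normal approximation.
    print(n1, n2, np.mean(np.abs(z) > 1.96))
```

With larger simulation sizes, the imbalanced configuration shows the drift away from the nominal level that the paper analyzes.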
NASA Astrophysics Data System (ADS)
Alpers, C. N.; Marvin-DiPasquale, M. C.; Fleck, J.; Ackerman, J. T.; Eagles-Smith, C.; Stewart, A. R.; Windham-Myers, L.
2016-12-01
Many watersheds in the western U.S. have mercury (Hg) contamination from historical mining of Hg and precious metals (gold and silver), which were concentrated using Hg amalgamation (mid 1800's to early 1900's). Today, specialized sampling and analytical protocols for characterizing Hg and methylmercury (MeHg) in water, sediment, and biota generate high-quality data to inform management of land, water, and biological resources. Collection of vertically and horizontally integrated water samples in flowing streams and use of a Teflon churn splitter or cone splitter ensure that samples and subsamples are representative. Both dissolved and particulate components of Hg species in water are quantified because each responds to different hydrobiogeochemical processes. Suspended particles trapped on pre-combusted (Hg-free) glass- or quartz-fiber filters are analyzed for total mercury (THg), MeHg, and reactive divalent mercury. Filtrates are analyzed for THg and MeHg to approximate the dissolved fraction. The sum of concentrations in particulate and filtrate fractions represents whole water, equivalent to an unfiltered sample. This approach improves upon analysis of filtered and unfiltered samples and computation of particulate concentration by difference; volume filtered is adjusted based on suspended-sediment concentration to minimize particulate non-detects. Information from bed-sediment sampling is enhanced by sieving into multiple size fractions and determining detailed grain-size distribution. Wet sieving ensures particle disaggregation; sieve water is retained and fines are recovered by centrifugation. Speciation analysis by sequential extraction and examination of heavy mineral concentrates by scanning electron microscopy provide additional information regarding Hg mineralogy and geochemistry. Biomagnification of MeHg in food webs is tracked using phytoplankton, zooplankton, aquatic and emergent vegetation, invertebrates, fish, and birds. 
Analysis of zooplankton in multiple size fractions from multiple depths in reservoirs can provide insight into food-web dynamics. The presentation will highlight application of these methods in several Hg-contaminated watersheds, with emphasis on understanding seasonal variability in designing effective sampling strategies.
Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.
Gupte, M D; Narasimhamurthy, B
1999-06-01
In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs therefore need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to the near-desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will shift to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels of leprosy prevalence are created using this population. The distribution of the number of cases in the population was expected to follow a Poisson process, and this was confirmed by examination. Sample sizes and corresponding critical values were computed using the Poisson approximation. Initially, villages/towns are selected from the population, and from each selected village/town households are selected using systematic sampling; households, rather than individuals, are used as sampling units. This sampling procedure was simulated 1000 times from the base population. The results in four different prevalence situations meet the required limits of a 5% Type I error and 90% power.
It is concluded that after validation under field conditions, this method can be considered for a rapid assessment of the leprosy situation.
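The Poisson-based design step can be sketched as a direct search for the smallest sample size and critical value satisfying the stated error limits. The `lqas_design` helper, the prevalence pair, and the search grid below are hypothetical illustrations of the idea, not the paper's exact procedure.

```python
from scipy.stats import poisson

def lqas_design(p_low, p_high, alpha=0.05, power=0.90, n_max=2_000_000):
    """Smallest n and rule 'classify as high if cases >= c' (Poisson approx.)."""
    for n in range(1000, n_max, 1000):
        # smallest c keeping the false-alarm rate at p_low at or below alpha
        c = int(poisson.ppf(1 - alpha, n * p_low)) + 1
        # accept the design once the detection probability at p_high reaches power
        if poisson.sf(c - 1, n * p_high) >= power:   # P(X >= c | mean n*p_high)
            return n, c
    raise ValueError("no design found within n_max")

# hypothetical illustration: elimination level 0.5/10,000 vs. target 1/10,000
n, c = lqas_design(0.5e-4, 1e-4)
print(n, "persons; classify as at/above target if cases >=", c)
```

The returned n falls in the hundreds of thousands for this prevalence pair, which is why the paper samples households systematically rather than surveying individuals exhaustively.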
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, H.
1999-03-31
The purpose of this research is to develop a multiplexed sample processing system in conjunction with multiplexed capillary electrophoresis for high-throughput DNA sequencing. The concept from DNA template to called bases was first demonstrated with a manually operated single capillary system. Later, an automated microfluidic system with 8 channels based on the same principle was successfully constructed. The instrument automatically processes 8 templates through reaction, purification, denaturation, pre-concentration, injection, separation and detection in a parallel fashion. A multiplexed freeze/thaw switching principle and a distribution network were implemented to manage flow direction and sample transportation. Dye-labeled terminator cycle-sequencing reactions are performed in an 8-capillary array in a hot air thermal cycler. Subsequently, the sequencing ladders are directly loaded into a corresponding size-exclusion chromatographic column operated at approximately 60 C for purification. On-line denaturation and stacking injection for capillary electrophoresis is simultaneously accomplished at a cross assembly set at approximately 70 C. Not only the separation capillary array but also the reaction capillary array and purification columns can be regenerated after every run. DNA sequencing data from this system allow base calling up to 460 bases with an accuracy of 98%.
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A.; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud
2008-01-01
Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. Availability: The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc. Contact: j.cornuet@imperial.ac.uk Supplementary information: Supplementary data are also available at http://www.montpellier.inra.fr/CBGP/diyabc PMID:18842597
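The rejection idea at the core of ABC can be sketched in a few lines. The normal toy model, flat prior, and tolerance below are hypothetical and far simpler than the microsatellite scenarios DIY ABC handles; they only illustrate the simulate-compare-accept loop.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(3.0, 1.0, 100)        # stand-in "data"; true mean is 3
s_obs = observed.mean()                     # summary statistic

# ABC rejection: draw parameters from the prior, simulate data under each,
# and keep the parameters whose simulated summary lands near the observed one.
prior = rng.uniform(0, 10, 50_000)          # flat prior on the unknown mean
sims = rng.normal(prior, 1.0, (100, 50_000)).mean(axis=0)
accepted = prior[np.abs(sims - s_obs) < 0.05]

print(accepted.size, "accepted; posterior mean ~", round(accepted.mean(), 2))
```

Shrinking the tolerance trades acceptance rate for accuracy; DIY ABC layers model choice, regression adjustment, and scenario comparison on top of this basic mechanism.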
Medication Adherence and Health Insurance/Health Benefit in Adult Diabetics in Kingston, Jamaica.
Bridgelal-Nagassar, R J; James, K; Nagassar, R P; Maharaj, S
2015-05-15
To determine the association between health insurance/health benefit and medication adherence amongst adult diabetic patients in Kingston, Jamaica. This was a cross-sectional study. The target population was diabetics who attended the diabetic outpatient clinics in health centres in Kingston. Two health centres were selectively chosen in Kingston. All diabetic patients attending the diabetic clinics and over the age of 18 years were conveniently sampled. The sample size was 260. An interviewer-administered questionnaire was utilized which assessed health insurance/health benefit. Adherence was measured by patients' self-reports of medication usage in the previous week. The Chi-squared test was used to determine the significance of associations. The sample population was 76% female and 24% male. Type 2 diabetics comprised 93.8%. More than 95% of patients were over the age of 40 years. Approximately 32% of participants were employed. Approximately 75% of patients had health insurance/health benefit. Among those who had health insurance or health benefit, 71.5% were adherent and 28.5% were non-adherent. This difference was statistically significant (χ2 = 6.553, p = 0.01). The prevalence of medication non-adherence was 33%. In Kingston, diabetic patients who are adherent are more likely to have health insurance/health benefit (p = 0.01).
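The reported association can be approximately reproduced from the percentages given. The 2x2 counts below are reconstructed (with rounding) from the abstract's figures, so the statistic will only be close to the reported χ2 = 6.553, not exact.

```python
from scipy.stats import chi2_contingency

# Approximate counts reconstructed from the reported percentages
# (n = 260, ~75% insured, 71.5% of insured adherent, ~67% adherent overall);
# rounding makes these illustrative, not the study's exact table.
table = [[139, 56],   # insured:   adherent, non-adherent
         [ 35, 30]]   # uninsured: adherent, non-adherent
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 3), round(p, 4))   # close to the reported chi2 = 6.553, p = 0.01
```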
NASA Astrophysics Data System (ADS)
Siddique, M. Naseem; Ahmed, Ateeq; Ali, T.; Tripathi, P.
2018-05-01
Nickel oxide (NiO) nanoparticles with a crystal size of around 16.26 nm have been synthesized via the sol-gel method. The synthesized precursor was calcined at 600 °C for 4 hours to obtain the nickel oxide nanoparticles. The XRD analysis indicated that the calcined sample has a cubic structure without any impurity phases. The FTIR analysis confirmed the formation of NiO. The NiO nanoparticles exhibited an absorption band edge at 277.27 nm, and the optical band gap was estimated at approximately 4.47 eV using diffuse reflectance spectroscopy; the photoluminescence emission spectrum of the as-synthesized sample showed a strong peak at 3.65 eV, attributed to the band-edge transition.
GMR effect in CuCo annealed melt-spun ribbons.
Murillo, N; Grande, H; Etxeberria, I; Del Val, J J; González, J; Arana, S; Gracia, F J
2004-11-01
A thorough microstructural and magnetic analysis has been performed on as-quenched and annealed (475 and 525 degrees C, 1 hour) melt-spun Cu100-xCox (x = 10 and 15) granular alloys presenting a giant magnetoresistance (GMR) effect. The annealed samples are inhomogeneous with respect to the Co-particle sizes and interparticle distances; these particles therefore present superparamagnetic and ferromagnetic behaviours, which determine the GMR response. The x = 15 samples treated at 525 degrees C for 1 hour presented the best GMR ratio (approximately 5% at room temperature, the highest value observed, approaching saturation under an applied magnetic field of 15 kOe), with the coexistence of Co particles exhibiting both kinds of magnetic behaviour.
Pfefferkorn, Frank E; Bello, Dhimiter; Haddad, Gilbert; Park, Ji-Young; Powell, Maria; McCarthy, Jon; Bunker, Kristin Lee; Fehrenbacher, Axel; Jeon, Yongho; Virji, M Abbas; Gruetzmacher, George; Hoover, Mark D
2010-07-01
Friction stir welding (FSW) is considered one of the most significant developments in joining technology over the last half century. Its industrial applications are growing steadily and so are the number of workers using this technology. To date, there are no reports on airborne exposures during FSW. The objective of this study was to investigate possible emissions of nanoscale (<100 nm) and fine (<1 microm) aerosols during FSW of two aluminum alloys in a laboratory setting and characterize their physicochemical composition. Several instruments measured size distributions (5 nm to 20 microm) with 1-s resolution, lung deposited surface areas, and PM(2.5) concentrations at the source and at the breathing zone (BZ). A wide range aerosol sampling system positioned at the BZ collected integrated samples in 12 stages (2 nm to 20 microm) that were analyzed for several metals using inductively coupled plasma mass spectrometry. Airborne aerosol was directly collected onto several transmission electron microscope grids and the morphology and chemical composition of collected particles were characterized extensively. FSW generates high concentrations of ultrafine and submicrometer particles. The size distribution was bimodal, with maxima at approximately 30 and approximately 550 nm. The mean total particle number concentration at the 30 nm peak was relatively stable at approximately 4.0 x 10(5) particles cm(-3), whereas the arithmetic mean counts at the 550 nm peak varied between 1500 and 7200 particles cm(-3), depending on the test conditions. The BZ concentrations were lower than the source concentrations by 10-100 times at their respective peak maxima and showed higher variability. The daylong average metal-specific concentrations were 2.0 (Zn), 1.4 (Al), and 0.24 (Fe) microg m(-3); the estimated average peak concentrations were an order of magnitude higher. 
Potential for significant exposures to fine and ultrafine aerosols, particularly of Al, Fe, and Zn, during FSW may exist, especially in larger scale industrial operations.
Self-healing coatings containing microcapsule
NASA Astrophysics Data System (ADS)
Zhao, Yang; Zhang, Wei; Liao, Le-ping; Wang, Si-jie; Li, Wu-jun
2012-01-01
Effectiveness of epoxy resin filled microcapsules was investigated for healing of cracks generated in coatings. Microcapsules were prepared by in situ polymerization of urea-formaldehyde resin to form a shell over epoxy resin droplets. Characteristics of these capsules were studied by 3D measuring laser microscope, particle size analyzer, Fourier-transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) to investigate their surface morphology, size distribution, chemical structure and thermal stability, respectively. The results indicate that microcapsules containing epoxy resins can be synthesized successfully. The size is around 100 μm. The rough outer surface of the microcapsules is composed of agglomerated urea-formaldehyde nanoparticles. The size and surface morphology of the microcapsules can be controlled by selecting different processing parameters. The microcapsules exhibit good storage stability at room temperature and are chemically stable at temperatures up to approximately 200 °C. The model self-healing coating system consists of an epoxy resin matrix, 10 wt% microencapsulated healing agent and 2 wt% catalyst solution. The self-healing function of this coating system is evaluated through self-healing testing of damaged and healed coated steel samples.
Airborne particulate matter and spacecraft internal environments
NASA Technical Reports Server (NTRS)
Liu, Benjamin Y. H.; Rubow, Kenneth L.; Mcmurry, Peter H.; Kotz, Thomas J.; Russo, Dane
1991-01-01
Instrumentation, consisting of a Shuttle Particle Sampler (SPS) and a Shuttle Particle Monitor (SPM), has been developed to characterize the airborne particulate matter in the Space Shuttle cabin during orbital flight. The SPS size-selectively collects particles in four size fractions (0-2.5, 2.5-10, 10-100, and greater than 100 microns) which are analyzed postflight for mass concentration and size distribution, elemental composition, and morphology. The SPM provides a continuous record of particle concentration through photometric light scattering. Measurements were performed onboard Columbia, OV-102, during the flight of STS-32 in January 1990. No significant changes were observed in the particle mass concentration, size distribution, or chemical composition in samples collected during flight-day 2 and flight-day 7. The total mass concentration was 56 micrograms/cu m, with approximately half of the particles larger than 100 microns. Elemental analysis showed that roughly 70 percent of the particles larger than 2.5 microns were carbonaceous with small amounts of other elements present. The SPM showed no temporal or spatial variation in particle mass concentration during the mission.
Aerosol Transport to the Greenland Summit Site, June, 2003 to August 2004
NASA Astrophysics Data System (ADS)
Cahill, T. A.; Cliff, S. S.; Jimenez-Cruz, M. P.; Portnoff, L.; Perry, K.; McConnell, J.; Burkhart, J.; Bales, R. C.
2004-12-01
With the resumption of year-round staffing of the Summit Greenland Environmental Observatory (GEOSummit) in 2003, we were able to sample aerosols year round by size (8 size modes), time (3 hr to 24 hr), and composition (mass, optical attenuation, and elements H, Na to Mo, plus lead) for association with particulate layers in snow, firn and ice. Sampling was accomplished using a 10 L/min slotted 8-stage rotating drum impactor (DELTA 8 DRUM, http://delta.ucdavis.edu) in the clean sector 0.5 km upwind from the main camp pollution sources. The air intake was approximately 2 m above the snow surface. The rotation rate of the DRUM was slowed to 0.5 mm/day, allowing continuous sampling for 48 weeks with 12-hr time resolution on a single set of lightly greased 480 µg/cm2 Mylar substrates. Early results show transport of relatively coarse (12 to 5 µm aerodynamic diameter) soil aerosols to the site in spring, 2003, in well-defined plumes of 1- to 2-day duration. Trajectory analysis shows potential Asian sources. Sulfur-containing aerosols, also seen in plumes of short duration, occur in two size modes, a typical accumulation mode aerosol (0.75-0.34 µm) and a very fine aerosol mode (0.34-0.09 µm), the latter likely stratospheric in origin. We wish to acknowledge the excellent on-site support of the GEOSummit staff, including M. Lewis, R. Abbott, B. Torrison, and K. Hess, and T. Wood.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Measuring the X-shaped structures in edge-on galaxies
NASA Astrophysics Data System (ADS)
Savchenko, S. S.; Sotnikova, N. Ya.; Mosenkov, A. V.; Reshetnikov, V. P.; Bizyaev, D. V.
2017-11-01
We present a detailed photometric study of a sample of 22 edge-on galaxies with clearly visible X-shaped structures. We propose a novel method to derive geometrical parameters of these features, along with the parameters of their host galaxies based on the multi-component photometric decomposition of galactic images. To include the X-shaped structure into our photometric model, we use the imfit package, in which we implement a new component describing the X-shaped structure. This method is applied for a sample of galaxies with available Sloan Digital Sky Survey and Spitzer IRAC 3.6 μm observations. In order to explain our results, we perform realistic N-body simulations of a Milky Way-type galaxy and compare the observed and the model X-shaped structures. Our main conclusions are as follows: (1) galaxies with strong X-shaped structures reside in approximately the same local environments as field galaxies; (2) the characteristic size of the X-shaped structures is about 2/3 of the bar size; (3) there is a correlation between the X-shaped structure size and its observed flatness: the larger structures are more flattened; (4) our N-body simulations qualitatively confirm the observational results and support the bar-driven scenario for the X-shaped structure formation.
Measurement of Circumstellar Disk Sizes in the Upper Scorpius OB Association with ALMA
NASA Astrophysics Data System (ADS)
Barenfeld, Scott A.; Carpenter, John M.; Sargent, Anneila I.; Isella, Andrea; Ricci, Luca
2017-12-01
We present detailed modeling of the spatial distributions of gas and dust in 57 circumstellar disks in the Upper Scorpius OB Association observed with ALMA at submillimeter wavelengths. We fit power-law models to the dust surface density and CO J = 3-2 surface brightness to measure the radial extent of dust and gas in these disks. We found that these disks are extremely compact: the 25 highest signal-to-noise disks have a median dust outer radius of 21 au, assuming an R^(-1) dust surface density profile. Our lack of CO detections in the majority of our sample is consistent with these small disk sizes assuming the dust and CO share the same spatial distribution. Of seven disks in our sample with well-constrained dust and CO radii, four appear to be more extended in CO, although this may simply be due to the higher optical depth of the CO. Comparison of the Upper Sco results with recent analyses of disks in Taurus, Ophiuchus, and Lupus suggests that the dust disks in Upper Sco may be approximately three times smaller in size than their younger counterparts, although we caution that a more uniform analysis of the data across all regions is needed. We discuss the implications of these results for disk evolution.
MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2011-01-01
The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. 
In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.
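The stated quantities are easy to sanity-check with a few lines of arithmetic; this is illustrative only, and the ~500 g total also covers soils, blanks, and standards.

```python
# Quick consistency check of the abstract's numbers (illustrative arithmetic).
grams_each = (15.0, 16.0)                  # optimal individual rock sample mass
keep_frac = 0.40                           # fraction preserved for the future
for n in (30, 40):                         # recommended number of rock samples
    lo, hi = n * grams_each[0], n * grams_each[1]
    print(f"{n} rocks: {lo:.0f}-{hi:.0f} g collected, "
          f"{keep_frac * lo:.0f}-{keep_frac * hi:.0f} g archived")

# Atmospheric sample: 50 cubic cm at 20x Mars ambient pressure is the molar
# equivalent of ~1000 cubic cm at ambient (ideal-gas approximation).
print(50 * 20, "cubic cm at ambient pressure")
```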
as response to seasonal variability
Badano, Ernesto I; Labra, Fabio A; Martínez-Pérez, Cecilia G; Vergara, Carlos H
2016-03-01
Ecologists have been largely interested in the description and understanding of the power scaling relationships between body size and abundance of organisms. Many studies have focused on estimating the exponents of these functions across taxonomic groups and spatial scales, to draw inferences about the processes underlying this pattern. The exponents of these functions usually approximate -3/4 at geographical scales, but they deviate from this value when smaller spatial extensions are considered. This has led to the proposal that body size-abundance relationships at small spatial scales may reflect the impact of environmental changes. This study tests this hypothesis by examining body size spectra of benthic shrimps (Decapoda: Caridea) and snails (Gastropoda) in the Tamiahua lagoon, a brackish water body located on the eastern coast of Mexico. We measured water quality parameters (dissolved oxygen, salinity, pH, water temperature, sediment organic matter and chemical oxygen demand) and sampled benthic macrofauna during three different climatic conditions of the year (cold, dry and rainy seasons). Given the small size of most individuals in the benthic macrofaunal samples, we used body volume, instead of weight, to estimate their body size. Body size-abundance relationships of both taxonomic groups were described by tabulating data from each season into base-2 logarithmic body size bins. In both taxonomic groups, observed frequencies per body size class in each season were standardized to yield densities (i.e., individuals/m(3)). Nonlinear regression analyses were performed separately for each taxonomic group at each season to assess whether body size spectra followed power scaling functions. Additionally, for each taxonomic group, multiple regression analyses were used to determine whether these relationships varied among seasons.
Our results indicated that, while body size-abundance relationships in both taxonomic groups followed power functions, the parameters defining the shape of these relationships varied among seasons. These variations in the parameters of the body size-abundance relationships seem to be related to changes in the abundance of individuals within the different body size classes, which in turn appear to follow the seasonal changes in the environmental conditions of the lagoon. Thus, we propose that these body size-abundance relationships are influenced by the frequency and intensity of environmental changes affecting this ecosystem.
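The binning-and-fitting procedure described above can be sketched as follows. The Pareto-distributed body volumes are synthetic stand-ins for one season's sample, and the bin edges are hypothetical; the point is the base-2 logarithmic tabulation followed by a nonlinear power-law fit.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic body volumes following an approximate power law
# (a stand-in for one season's benthic sample, not the study's data).
volumes = rng.pareto(1.2, 5000) + 1.0

# Tabulate into base-2 logarithmic size bins, as in the study.
edges = 2.0 ** np.arange(0, 12)
counts, _ = np.histogram(volumes, bins=edges)
mids = np.sqrt(edges[:-1] * edges[1:])        # geometric bin midpoints
keep = counts > 0                             # drop empty high-size bins

# Nonlinear regression of abundance on body size: N = a * M^b.
power = lambda m, a, b: a * m ** b
(a, b), _ = curve_fit(power, mids[keep], counts[keep], p0=(1000.0, -1.0))
print(round(b, 2))                            # negative: abundance falls with size
```

Fitting each season separately and comparing the estimated (a, b) pairs mirrors the study's test of whether the spectrum's shape tracks seasonal conditions.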
Feltus, F Alex; Ficklin, Stephen P; Gibson, Scott M; Smith, Melissa C
2013-06-05
In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as for different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied, also limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples, with individual network construction for each group as a means of dynamic significance thresholding. The net effect is increased sensitivity, maximizing the total co-expression relationships in the final co-expression network compendium. A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an un-supervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for the network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network.
Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using un-supervised methods. Because RMT ensures only highly significant interactions are kept, the GIL compendium consists of 558,022 unique high quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired.
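The pre-cluster-then-threshold idea can be sketched on toy data. In this sketch, k-means (via scipy's `kmeans2`) stands in for the paper's sample partitioning, a fixed correlation cutoff stands in for RMT thresholding, and the expression matrix is synthetic; none of it reproduces the actual GIL pipeline.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# Toy expression matrix: 60 samples x 40 genes from two "conditions" with
# different latent co-expression structure and different mean expression
# (a tiny stand-in for the 7,105 ATH1 arrays).
a = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 40)) + 0.5 * rng.normal(size=(30, 40))
b = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 40)) + 0.5 * rng.normal(size=(30, 40)) + 3.0
expr = np.vstack([a, b])

def edges(samples, cut=0.8):
    """Gene pairs whose |Pearson r| across the given samples exceeds the cutoff."""
    r = np.corrcoef(samples.T)
    iu = np.triu_indices_from(r, k=1)
    return {(i, j) for i, j in zip(*iu) if abs(r[i, j]) > cut}

# Pre-cluster samples, then build one network layer per cluster and union them;
# mixing conditions dilutes within-condition correlations in the global network.
_, labels = kmeans2(expr, 2, minit='++', seed=0)
layered = set().union(*(edges(expr[labels == k]) for k in (0, 1)))
print(len(edges(expr)), "global edges vs", len(layered), "layered edges")
```

The union of per-cluster edge sets plays the role of the GIL compendium: condition-specific correlations survive thresholding that would miss them in the pooled matrix.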
2013-01-01
Background In genomics, highly relevant gene interaction (co-expression) networks have been constructed by finding significant pair-wise correlations between genes in expression datasets. These networks are then mined to elucidate biological function at the polygenic level. In some cases networks may be constructed from input samples that measure gene expression under a variety of different conditions, such as different genotypes, environments, disease states and tissues. When large sets of samples are obtained from public repositories it is often unmanageable to associate samples into condition-specific groups, and combining samples from various conditions has a negative effect on network size. A fixed significance threshold is often applied, further limiting the size of the final network. Therefore, we propose pre-clustering of input expression samples to approximate condition-specific grouping of samples, and individual network construction for each group as a means of dynamic significance thresholding. The net effect is increased sensitivity, maximizing the total number of co-expression relationships in the final co-expression network compendium. Results A total of 86 Arabidopsis thaliana co-expression networks were constructed after k-means partitioning of 7,105 publicly available ATH1 Affymetrix microarray samples. We term each pre-sorted network a Gene Interaction Layer (GIL). Random Matrix Theory (RMT), an unsupervised thresholding method, was used to threshold each of the 86 networks independently, effectively providing a dynamic (non-global) threshold for each network. The overall gene count across all GILs reached 19,588 genes (94.7% measured gene coverage) and 558,022 unique co-expression relationships. In comparison, network construction without pre-sorting of input samples yielded only 3,297 genes (15.9%) and 129,134 relationships in the global network.
Conclusions Here we show that pre-clustering of microarray samples helps approximate condition-specific networks and allows for dynamic thresholding using unsupervised methods. Because RMT ensures only highly significant interactions are kept, the GIL compendium consists of 558,022 unique high-quality A. thaliana co-expression relationships across almost all of the measurable genes on the ATH1 array. For A. thaliana, these networks represent the largest compendium to date of significant gene co-expression relationships, and they are a means to explore complex pathway, polygenic, and pleiotropic relationships for this focal model plant. The networks can be explored at sysbio.genome.clemson.edu. Finally, this method is applicable to any large expression profile collection for any organism and is best suited where a knowledge-independent network construction method is desired. PMID:23738693
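The GIL construction pipeline described above (k-means partitioning of the input samples, one co-expression network per partition, an independent threshold for each network) can be sketched in a few lines. This is a toy illustration: the random data, the cluster count k = 3, and the simple percentile cutoff standing in for the RMT threshold are assumptions for demonstration, not the paper's actual data or thresholding method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix, genes x samples (hypothetical data standing in
# for the 7,105 ATH1 arrays; the study used k = 86 clusters).
n_genes, n_samples = 50, 60
expr = rng.normal(size=(n_genes, n_samples))

def kmeans(X, k, iters=20):
    """Minimal k-means treating each column (sample) as a point."""
    centers = X[:, rng.choice(X.shape[1], k, replace=False)].T
    for _ in range(iters):
        d = ((X.T[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[:, labels == j].mean(1)
    return labels

labels = kmeans(expr, k=3)

# Build one co-expression "layer" per sample cluster.  The paper derives
# each layer's cutoff with Random Matrix Theory; a per-layer percentile
# cutoff stands in for that here.
layers = {}
for j in np.unique(labels):
    sub = expr[:, labels == j]
    r = np.corrcoef(sub)
    np.fill_diagonal(r, 0.0)
    cutoff = np.quantile(np.abs(r), 0.99)   # placeholder for RMT threshold
    edges = np.argwhere(np.triu(np.abs(r) >= cutoff, 1))
    layers[int(j)] = {tuple(e) for e in edges}

# The compendium is the union of edges across layers.
union = set().union(*layers.values())
print(len(layers), len(union))
```

Because each layer is thresholded on its own correlation distribution, a gene pair that is strongly correlated only under one condition can survive in its layer even if it is washed out in the pooled, globally thresholded network.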
Pituitary gland volumes in bipolar disorder.
Clark, Ian A; Mackay, Clare E; Goodwin, Guy M
2014-12-01
Bipolar disorder has been associated with increased Hypothalamic-Pituitary-Adrenal axis function. The mechanism is not well understood, but there may be associated increases in pituitary gland volume (PGV), and these small increases may be functionally significant. However, research investigating PGV in bipolar disorder reports mixed results. The aim of the current study was twofold: first, to assess PGV in two novel samples of patients with bipolar disorder and matched healthy controls; second, to perform a meta-analysis comparing PGV across a larger sample of patients and matched controls. Sample 1 consisted of 23 established patients and 32 matched controls. Sample 2 consisted of 39 medication-naïve patients and 42 matched controls. PGV was measured on structural MRI scans. Seven further studies were identified comparing PGV between patients and matched controls (total n: 244 patients, 308 controls). Both novel samples showed a small (approximately 20 mm(3), or 4%), but non-significant, increase in PGV in patients. Combining the two novel samples showed a significant association of age and PGV. Meta-analysis showed a trend towards a larger pituitary gland in patients (effect size: 0.23, CI: -0.14 to 0.59). While results suggest a possible small difference in pituitary gland volume between patients and matched controls, larger mega-analyses with sample sizes even greater than those used in the current meta-analysis are still required. There is a small but potentially functionally significant increase in PGV in patients with bipolar disorder compared to controls. Results demonstrate the difficulty of finding potentially important but small effects in functional brain disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Matthias; Ovchinnikova, Olga S; Van Berkel, Gary J
RATIONALE: Laser ablation provides for the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA) ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions about 10% of laser ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot controlled pipette. The sampling spot size area with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 μm x 160 μm) was approximately 50 times smaller than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The set-up was successfully applied to the analysis of ink on glass and paper as well as the endogenous components in Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface.
CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent resistant materials.
NASA Astrophysics Data System (ADS)
Pejova, Biljana
2014-05-01
Raman scattering in combination with optical spectroscopy and structural studies by X-ray diffraction was employed to investigate the phonon confinement and strain-induced effects in 3D assemblies of variable-size zincblende ZnSe quantum dots close packed in thin film form. Nanostructured thin films were synthesized by a colloidal chemical approach, while tuning of the nanocrystal size was enabled by a post-deposition thermal annealing treatment. In-depth insights into the factors governing the observed trends of the position and half-width of the 1LO band as a function of the average QD size were gained. The overall shifts in the position of the 1LO band were found to result from an intricate compromise between the influence of phonon confinement and lattice strain-induced effects. Both contributions were quantitatively and exactly modeled. Accurate assignments of the bands due to surface optical (SO) modes as well as of the theoretically forbidden transverse optical (TO) modes were provided, on the basis of reliable physical models (such as the dielectric continuum model of Ruppin and Englman). The size-dependence of the ratio of intensities of the TO and LO modes was studied and discussed as well. The relaxation time characterizing the phonon decay processes in as-deposited samples was found to be approximately 0.38 ps, while upon post-deposition annealing at just 200 °C it increases to about 0.50 ps. Both of these values are, however, significantly smaller than those characteristic of a macrocrystalline ZnSe sample.
NASA Technical Reports Server (NTRS)
Kearsley, A. T.; Burchell, M. J.; Horz, F.; Cole, M. J.; Schwandt, C. S.
2006-01-01
Metallic aluminium alloy foils exposed on the forward, comet-facing surface of the aerogel tray on the Stardust spacecraft are likely to have been impacted by the same cometary particle population as the dedicated impact sensors and the aerogel collector. The ability of soft aluminium alloy to record hypervelocity impacts as bowl-shaped craters offers an opportunistic substrate for recognition of impacts by particles of a wide potential size range. In contrast to impact surveys conducted on samples from low Earth orbit, the simple encounter geometry for Stardust and Wild 2, with a known and constant spacecraft-particle relative velocity and effective surface-perpendicular impact trajectories, permits closely comparable simulation in laboratory experiments. For a detailed calibration programme we have selected a suite of spherical glass projectiles of uniform density and hardness characteristics, with well-documented particle size range from 10 microns to nearly 100 microns. Light gas gun buckshot firings of these particles at approximately 6 km s(exp -1) onto samples of the same foil as employed on Stardust have yielded large numbers of craters. Scanning electron microscopy of both projectiles and impact features has allowed construction of a calibration plot, showing a linear relationship between impacting particle size and impact crater diameter. The close match between our experimental conditions and the Stardust mission encounter parameters should provide another opportunity to measure particle size distributions and fluxes close to the nucleus of Wild 2, independent of the active impact detector instruments aboard the Stardust spacecraft.
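The calibration plot described above reduces to a straight-line fit between impactor diameter and crater diameter, which can then be inverted to estimate particle sizes from measured craters. A minimal sketch with hypothetical numbers (the slope of about 4.7 and the scatter are invented for illustration, not the paper's measured calibration):

```python
import numpy as np

# Hypothetical calibration data (not the paper's measurements): projectile
# diameters (um) and the crater diameters (um) they produced at ~6 km/s.
particle = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
crater = 4.7 * particle + np.array([1.2, -0.8, 2.1, -1.5, 0.6, -0.4])

# Fit the linear calibration D_crater = a * d_particle + b.
a, b = np.polyfit(particle, crater, 1)

def particle_size(crater_diam):
    """Invert the calibration to estimate impactor size from a crater."""
    return (crater_diam - b) / a

# A crater of 235 um should map back to roughly a 50 um impactor here.
est = particle_size(235.0)
print(round(a, 2), round(est, 1))
```

With such a fit in hand, a survey of crater diameters on the returned foils converts directly into a particle size distribution.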
Co-Precipitation Synthesis and Characterization of SrBi2Ta2O9 Ceramic
NASA Astrophysics Data System (ADS)
Afqir, Mohamed; Tachafine, Amina; Fasquelle, Didier; Elaatmani, Mohamed; Carru, Jean-Claude; Zegzouti, Abdelouahad; Daoud, Mohamed
2018-04-01
Strontium bismuth tantalate (SrBi2Ta2O9) was synthesized by a co-precipitation method. The sample was characterized by x-ray powder diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). The results of the dielectric properties are reported at room temperature. No secondary phases were found while heating the powder at 850°C, and the pure SrBi2Ta2O9 phase was formed, as revealed by XRD. The characteristic bands for SrBi2Ta2O9 were observed by FTIR at approximately 619 cm-1 and 810 cm-1. SEM micrographs for the sample displayed thin plate-like grains. The grain size was less than 1 μm and the crystallite size was about 24 nm. The dielectric response at room temperature shows that the SrBi2Ta2O9 ceramic has low loss values and a flattening of the dielectric constant at higher frequencies. The observed Curie temperature is comparable with values reported in the literature.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-10-01
The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb-1, 190 μb-1 and 12.4 nb-1 for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. Finally, the dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
Phononic thermal conductivity in silicene: the role of vacancy defects and boundary scattering
NASA Astrophysics Data System (ADS)
Barati, M.; Vazifehshenas, T.; Salavati-fard, T.; Farmanbar, M.
2018-04-01
We calculate the thermal conductivity of free-standing silicene using the phonon Boltzmann transport equation within the relaxation time approximation. In this calculation, we investigate the effects of sample size and of different scattering mechanisms: phonon-phonon, phonon-boundary, phonon-isotope and phonon-vacancy scattering. We obtain results similar to earlier works using a different model and provide a more detailed analysis of the phonon conduction behavior and the various mode contributions. We show that the dominant contribution to the thermal conductivity of silicene, which originates from the in-plane acoustic branches, is about 70% at room temperature, and this contribution becomes larger when vacancy defects are considered. Our results indicate that while the thermal conductivity of silicene is significantly suppressed by vacancy defects, the effect of isotopes on the phononic transport is small. Our calculations demonstrate that by removing only one of every 400 silicon atoms, a substantial reduction of about 58% in thermal conductivity is achieved. Furthermore, we find that phonon-boundary scattering is important in defect-free and small silicene samples, especially at low temperatures.
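In relaxation-time-approximation calculations like the one above, independent scattering channels are conventionally combined with Matthiessen's rule, 1/τ_total = Σ_i 1/τ_i, and each phonon mode contributes roughly C v² τ / 3 to the conductivity. The sketch below uses invented single-mode numbers purely to illustrate how adding a vacancy channel suppresses the conductivity; it is not the paper's full Boltzmann transport calculation.

```python
# Combine independent scattering lifetimes via Matthiessen's rule:
# the total scattering rate is the sum of the individual rates.
def combined_tau(taus):
    return 1.0 / sum(1.0 / t for t in taus)

# Single-mode kinetic-theory contribution kappa = C * v^2 * tau / 3
# (C: volumetric heat capacity of the mode, v: group velocity).
def mode_kappa(C, v, taus):
    return C * v * v * combined_tau(taus) / 3.0

# Illustrative (made-up) numbers: an anharmonic and a boundary lifetime,
# then the same mode with a strong vacancy-scattering channel added.
clean = mode_kappa(1e5, 5000.0, [40e-12, 200e-12])
defective = mode_kappa(1e5, 5000.0, [40e-12, 200e-12, 30e-12])
print(clean, defective, defective < clean)
```

Because rates add, the shortest lifetime dominates, which is why a sparse population of vacancies can cut the conductivity far more than isotope scattering does.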
Permeability and compressibility of resedimented Gulf of Mexico mudrock
NASA Astrophysics Data System (ADS)
Betts, W. S.; Flemings, P. B.; Schneider, J.
2011-12-01
We use a constant-rate-of-strain consolidation test on resedimented Gulf of Mexico mudrock to determine the compression index (Cc) to be 0.618 and the expansion index (Ce) to be 0.083. We used crushed, homogenized Pliocene and Pleistocene mudrock extracted from cored wells in the Eugene Island block 330 oil field. This powdered material has a liquid limit (LL) of 87, a plastic limit (PL) of 24, and a plasticity index (PI) of 63. The particle size distribution from hydrometer analyses is approximately 65% clay-sized particles (<2 μm), with the remainder being less than 70 μm in diameter. Resedimented specimens have been used to characterize the geotechnical and geophysical behavior of soils and mudstones independent of the variability of natural samples and without the effects of sampling disturbance. Previous investigations of resedimented offshore Gulf of Mexico sediments (e.g. Mazzei, 2008) have been limited in scope. This is the first test of the homogenized Eugene Island core material. These results will be compared to in situ measurements to determine the controls on consolidation over large stress ranges.
Synthesis of nano-forsterite powder by making use of natural silica sand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurbaiti, Upik, E-mail: upik-nurbaiti@mail.unnes.ac.id; Department of Physics, Faculty of Mathematics and Natural Sciences Semarang State University Jl. Raya Sekaran GunungPati, Semarang 50221; Suud, Fikriyatul Azizah
2016-02-08
Nano-forsterite powder has been successfully synthesized using natural silica sand and magnesium powder as the raw materials. The silica sand was purified and then subjected to a coprecipitation process to obtain colloidal silica. The magnesium powder was dissolved in a chloric acid solution to obtain MgCl{sub 2} solution. The nano-forsterite powder was synthesised using a sol-gel method which included mixing the colloidal silica and the MgCl{sub 2} solution with various aging and filtering processes. The samples were dried at 100 °C using a hot plate and then the dried powders were calcined at 900 °C for 2 hours. The samples were characterised for their elemental and phase compositions using X-ray Fluorescence (XRF) and X-ray Diffraction (XRD) methods, respectively. The diffraction data were analyzed qualitatively using Match!2 software and quantitatively using Rietica software. The crystallite size was verified using Transmission Electron Microscopy (TEM). Results of the XRD data analysis showed that the forsterite content reached up to 90.5% wt. The TEM average crystallite size was approximately 53(6) nm.
Siddiq, Abdur R; Kennedy, Andrew R
2015-02-01
Porous PEEK structures with approximately 85% open porosity have been made using PEEK-OPTIMA® powder and a particulate leaching technique using porous, near-spherical sodium chloride beads. A novel manufacturing approach is presented and compared with a traditional dry mixing method. Irrespective of the method used, the use of near-spherical beads with a fairly narrow size range results in uniform pore structures. However the integration, by tapping, of fine PEEK into a pre-existing network of salt beads, followed by compaction and "sintering", produces porous structures with excellent repeatability and homogeneity of density; more uniform pore and strut sizes; an improved and predictable level of connectivity via the formation of "windows" between the cells; faster salt removal rates and lower levels of residual salt. Although tapped samples show a compressive yield stress >1 MPa and stiffness >30 MPa at 84% porosity, the presence of windows in the cell walls means that tapped structures show lower strengths and lower stiffnesses than equivalent structures made by mixing. Copyright © 2014 Elsevier B.V. All rights reserved.
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications, covering binary classification, multi-class problems and regression, are then reported to show that RKELM can perform at a competitive level of generalization performance with the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
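The core of RKELM as described (pick a random subset of training samples as the kernel mapping points, then solve a single regularized least-squares problem instead of iterating) can be sketched as follows. The RBF kernel choice, the toy data, and the hyperparameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, Z, gamma=1.0):
    # Gaussian kernel between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkelm_fit(X, y, n_support=20, C=100.0, gamma=1.0):
    """Reduced KELM: support vectors are a random subset of the training
    data (no iterative selection), then one regularized LS solve."""
    idx = rng.choice(len(X), n_support, replace=False)
    Z = X[idx]
    K = rbf(X, Z, gamma)                      # n x n_support kernel map
    beta = np.linalg.solve(K.T @ K + np.eye(n_support) / C, K.T @ y)
    return Z, beta

def rkelm_predict(X, Z, beta, gamma=1.0):
    return rbf(X, Z, gamma) @ beta

# Toy binary problem (labels +/-1 from the sign of the first feature);
# hypothetical data, not the benchmarks used in the paper.
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])
Z, beta = rkelm_fit(X, y)
acc = (np.sign(rkelm_predict(X, Z, beta)) == y).mean()
print(acc)
```

The training cost is dominated by one n_support × n_support linear solve, which is what makes the random-subset design cheap compared with iterative support-vector selection.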
Metapopulation models for historical inference.
Wakeley, John
2004-04-01
The genealogical process for a sample from a metapopulation, in which local populations are connected by migration and can undergo extinction and subsequent recolonization, is shown to have a relatively simple structure in the limit as the number of populations in the metapopulation approaches infinity. The result, which is an approximation to the ancestral behaviour of samples from a metapopulation with a large number of populations, is the same as that previously described for other metapopulation models, namely that the genealogical process is closely related to Kingman's unstructured coalescent. The present work considers a more general class of models that includes two kinds of extinction and recolonization, and the possibility that gamete production precedes extinction. In addition, following other recent work, this result for a metapopulation divided into many populations is shown to hold both for finite population sizes and in the usual diffusion limit, which assumes that population sizes are large. Examples illustrate when the usual diffusion limit is appropriate and when it is not. Some shortcomings and extensions of the model are considered, and the relevance of such models to understanding human history is discussed.
Exposure of miners to diesel exhaust particulates in underground nonmetal mines.
Cohen, H J; Borak, J; Hall, T; Sirianni, G; Chemerynski, S
2002-01-01
A study was initiated to examine worker exposures in seven underground nonmetal mines and to examine the precision of the National Institute for Occupational Safety and Health (NIOSH) 5040 sampling and analytical method for diesel exhaust that has recently been adopted for compliance monitoring by the Mine Safety and Health Administration (MSHA). Approximately 1000 air samples using cyclones were taken on workers and in areas throughout the mines. Results indicated that worker exposures were consistently above the MSHA final limit of 160 micrograms/m3 (time-weighted average; TWA) for total carbon as determined by the NIOSH 5040 method and greater than the proposed American Conference of Governmental Industrial Hygienists TLV limit of 20 micrograms/m3 (TWA) for elemental carbon. A number of difficulties were documented when sampling for diesel exhaust using organic carbon: high and variable blank values from filters, a high variability (+/- 20%) from duplicate punches from the same sampling filter, a consistent positive interference (+26%) when open-faced monitors were sampled side-by-side with cyclones, poor correlation (r2 = 0.38) to elemental carbon levels, and an interference from limestone that could not be adequately corrected by acid-washing of filters. The sampling and analytical precision (relative standard deviation) was approximately 11% for elemental carbon, 17% for organic carbon, and 11% for total carbon. A hypothesis is presented, and supported with data, that gaseous organic carbon constituents of diesel exhaust adsorb not only onto the submicron elemental carbon particles found in diesel exhaust, but also onto mining ore dusts. Such mining dusts are mostly nonrespirable and should not be considered equivalent to submicron diesel particulates in their potential for adverse pulmonary effects. It is recommended that size-selective sampling be employed, rather than open-faced monitoring, when using the NIOSH 5040 method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, T. R.; Tsai, H.; Cole, J. I.
2002-09-17
To assess the effects of long-term, low-dose-rate neutron exposure on mechanical strength and ductility, tensile properties were measured on 12% and 20% cold-worked Type 316 stainless steel. Samples were prepared from reactor core components retrieved from the EBR-II reactor following final shutdown. Sample locations were chosen to cover a dose range of 1-56 dpa at temperatures from 371-440 C and dose rates from 0.5-5.8 x 10^-7 dpa/s. These dose rates are approximately an order of magnitude lower than those of typical EBR-II test sample locations. The tensile tests for the 12% CW material were performed at 380 C and 430 C while those for the 20% CW samples were performed at 370 C. In each case, the tensile test temperature approximately matched the irradiation temperature. To help understand the tensile properties, microstructural samples with similar irradiation history were also examined. The strength and the loss of work hardening increase the fastest as a function of irradiation dose for the 12% CW material irradiated at lower temperature. The decrease in ductility with increasing dose occurs more rapidly for the 12% CW material irradiated at lower temperature and for the 20% cold-worked material. Post-tensile test fractography indicates that at higher dose, the 20% CW samples begin a shift in fracture mode from purely ductile to mainly small facets and slip bands, suggesting a transition toward channel fracture. The fracture for all of the 12% cold-worked samples was ductile. For both the 12% and 20% CW materials, the yield strength increases correlate with changes in void and loop density and size.
Wang, Lulu; Ma, Yingying; Gu, Yu; Liu, Yangyang; Zhao, Juan; Yan, Beibei; Wang, Yancai
2018-04-19
Freeze-drying is an effective way to improve the long-term physical stability of nanosuspensions (suspensions of nanoparticles) in drug delivery applications. In this study, the effect of freeze-drying with different cryoprotectants on the physicochemical characteristics of resveratrol (RSV) and quercetin (QUE) nanosuspensions was evaluated. D-α-tocopheryl polyethylene glycol succinate (TPGS) and folate-modified distearoylphosphatidyl ethanolamine-polyethylene glycol (DSPE-PEG-FA) were selected as functional stabilisers for the nanosuspensions, which were prepared by an anti-solvent precipitation method. The RSV and QUE nanoparticle sizes were about 210 and 110 nm, respectively. The AFM and TEM results for the nanosuspensions showed uniform, irregularly shaped particles. After freeze-drying, the optimal concentration of each of the four cryoprotectants was determined from the particle size of the re-dispersed nanoparticles. The dissolution profiles of the drug nanoparticles showed an approximately 6-8-fold increase in dissolution rate. Moreover, the TPGS- and DSPE-PEG-FA-stabilised RSV and QUE nanosuspension samples showed better long-term physical stability.
Diel Variations in Optical Properties of Micromonas pusilla, a Prasinophyte
NASA Technical Reports Server (NTRS)
DuRand, Michele D.; Green, Rebecca E.; Sosik, Heidi M.; Olson, Robert J.
2001-01-01
A laboratory experiment was conducted on cultures of Micromonas pusilla, a marine prasinophyte, to investigate how cell growth and division affect the optical properties over the light:dark cycle. Measurements were made of cell size and concentration, attenuation and absorption coefficients, flow cytometric light scattering (in forward and side directions), and chlorophyll and carbon content. Refractive index was calculated using the anomalous diffraction approximation. Cells were about 1.5 micrometers in diameter and exhibited phased division, with the major division burst occurring during the night. Typical diel variations were observed, with cells increasing in size and light scattering during the day as they photosynthesize and decreasing at night upon division. The cells were in ultradian growth, with more than one division per day, at a light level of 120 μmol photons m^-2 s^-1. Since these cells are similar in size to small phytoplankton that are typically abundant in field samples, these results can be used in the interpretation of diel variations in light scattering in natural populations of phytoplankton.
NASA Astrophysics Data System (ADS)
Masuda, Toshiaki; Miyake, Tomoya; Kimura, Nozomi; Okamoto, Atsushi
2011-01-01
Microboudinage structures developed within glaucophane are found in the calcite matrix of blueschist-facies impure marbles from Syros, Greece. The presence of these structures enables the successful application of the microboudin method for palaeodifferential stress analysis, which was originally developed for rocks with a quartzose matrix. Application of the microboudin method reveals that differential stress increased during exhumation of the marble; the estimated maximum palaeodifferential stress values are approximately 9-15 MPa, an order of magnitude lower than the values estimated using the calcite-twin palaeopiezometer. This discrepancy reflects the fact that the two methods assess differential stress at different stages in the deformation history. Differential stresses in the Syros samples estimated using three existing equations for grain-size palaeopiezometry show a high degree of scatter, and no reliable results were obtained by a comparison between the results of the microboudin method and grain-size palaeopiezometry.
Rate of dehydration of corn (Zea mays L.) pollen in the air.
Aylor, Donald E
2003-10-01
The water content of corn (Zea mays L.) pollen directly affects its dispersal in the atmosphere through its effect on settling speed and viability. Therefore, the rate of water loss from pollen after being shed from the anther is an important component of a model to predict effective pollen transport distances in the atmosphere. The rate of water loss from corn pollen in air was determined using two methods: (1) by direct weighing of samples containing approximately 5 x 10^4 grains, and (2) by microscopic measurement of the change in size of individual grains. The conductance of the pollen wall to water loss was derived from the time rate of change of pollen mass or pollen grain size. The two methods gave average conductance values of 0.026 and 0.027 cm s^-1, respectively. In other experiments, the water potential, psi, of corn pollen was determined at various values of relative water content (dry weight basis), either by using a thermocouple psychrometer or by allowing samples of pollen to come to vapour equilibrium with various saturated salt solutions. Non-linear regression analysis of the data yielded psi (MPa) = -3.218 theta^(-1.35) (r2 = 0.94; for -298 <= psi <= -1 MPa). This result was incorporated into a model differential equation for the rate of water loss from pollen. The model agreed well (r2 approximately 0.98) with the observed time-course of the decrease of water content of pollen grains exposed to a range of temperature and humidity conditions.
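The fitted water-release curve psi (MPa) = -3.218 theta^(-1.35) and the wall conductance of about 0.026 cm s^-1 can be combined into a toy drying model. The sketch below converts psi to an equilibrium surface humidity via the standard relation RH = exp(psi * Vw / (R * T)) and Euler-integrates a simplified flux law; the proportionality constant, geometry and time step are illustrative assumptions, not the paper's calibrated model equation.

```python
import math

# Fitted water-release curve from the abstract: psi (MPa) as a function
# of relative water content theta (dry-weight basis).
def psi_mpa(theta):
    return -3.218 * theta ** -1.35

# Equilibrium relative humidity at the grain surface from water potential.
def rh_at_surface(theta, T=298.15):
    Vw = 1.8e-5          # molar volume of water, m^3/mol
    R = 8.314            # gas constant, J/(mol K)
    return math.exp(psi_mpa(theta) * 1e6 * Vw / (R * T))

# Euler integration of a simplified flux law: drying rate proportional to
# wall conductance g times the humidity difference between grain surface
# and air.  The constant k and the floor on theta are made-up placeholders
# for the geometry the paper's full model handles.
def dry(theta0=2.0, rh_air=0.5, g=0.026, k=5.0, dt=1.0, steps=600):
    theta = theta0
    for _ in range(steps):
        flux = g * k * max(rh_at_surface(theta) - rh_air, 0.0)
        theta = max(theta - flux * dt, 0.05)
    return theta

final_theta = dry()
print(final_theta)
```

The qualitative behavior matches the abstract's description: water content falls quickly at first, then levels off as the grain's water potential pulls the surface humidity down toward that of the ambient air.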
Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Zhen, Yongda; Tok, Alfred
2009-01-01
Four-cation nanograined strontium and magnesium doped lanthanum gallate (La0.8Sr0.2)(Ga0.9Mg0.1)O(3-delta) (LSGM) and its composite with 2 wt% of ceria (LSGM-Ce) were prepared. Morphologically homogeneous nanoreactors, i.e., complex intermediate metastable aggregates of the desired composition, were assembled by a spray atomization technique and subsequently loaded with nanoparticles of highly energetic C3H6N6O6. A rapid nanoblast calcination technique was applied, and the final composition was synthesized within the preliminary localized volumes of each single nanoreactor in the first step of spark plasma treatment. Subsequent SPS consolidations of the nanostructured, extremely active LSGM and LSGM-Ce powders were achieved by rapid treatment under pressures of 90-110 MPa. This technique preserved the heredity of the final nanosized multimetal oxide structure, prevented uncontrolled agglomeration during assembly of the multicomponent aggregates, subsequent nanoblast calcination, and final ultra-rapid low-temperature SPS consolidation of the nanostructured ceramics. LaSrGaMgCeO(3-delta) nanocrystalline powder consisting of approximately 11 nm crystallites was consolidated to LSGM-Ce nanoceramic with an average grain size of approximately 14 nm by low-temperature SPS at 1250 degrees C. Our preliminary results indicate that nanostructured samples of (La0.8Sr0.2)(Ga0.9Mg0.1)O(3-delta) with 2 wt% of ceria composed of approximately 14 nm grains can exhibit a giant magnetoresistive effect, in contrast to the usual paramagnetic properties measured on samples with larger grain size.
Precise Distances for Main-belt Asteroids in Only Two Nights
NASA Astrophysics Data System (ADS)
Heinze, Aren N.; Metchev, Stanimir
2015-10-01
We present a method for calculating precise distances to asteroids using only two nights of data from a single location—far too little for an orbit—by exploiting the angular reflex motion of the asteroids due to Earth’s axial rotation. We refer to this as the rotational reflex velocity method. While the concept is simple and well-known, it has not been previously exploited for surveys of main belt asteroids (MBAs). We offer a mathematical development, estimates of the errors of the approximation, and a demonstration using a sample of 197 asteroids observed for two nights with a small, 0.9-m telescope. This demonstration used digital tracking to enhance detection sensitivity for faint asteroids, but our distance determination works with any detection method. Forty-eight asteroids in our sample had known orbits prior to our observations, and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the Minor Planet Center. In contrast to our two-night results, distance determination by fitting approximate orbits requires observations spanning 7-10 nights. Once an asteroid’s distance is known, its absolute magnitude and size (given a statistically estimated albedo) may immediately be calculated. Our method will therefore greatly enhance the efficiency with which 4m and larger telescopes can probe the size distribution of small (e.g., 100 m) MBAs. This distribution remains poorly known, yet encodes information about the collisional evolution of the asteroid belt—and hence the history of the Solar System.
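The core geometry behind the rotational reflex velocity method is simple: the observer's rotational speed perpendicular to the line of sight, divided by the reflex angular rate it imprints on the asteroid, gives the distance. The sketch below illustrates only this first-order relation; it ignores projection geometry and the paper's error analysis, and the numbers in the comments are illustrative assumptions.

```python
import math

EARTH_RADIUS_KM = 6378.0   # equatorial radius
SIDEREAL_DAY_S = 86164.0   # one rotation of the Earth

def observer_rotation_speed(latitude_deg):
    """Speed of a ground-based observer due to Earth's axial rotation (km/s).

    At the equator this is about 0.465 km/s, falling off as cos(latitude).
    """
    return 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg)) / SIDEREAL_DAY_S

def reflex_distance_km(v_perp_km_s, reflex_rate_rad_s):
    """First-order distance estimate: d = v_perp / omega_reflex, where
    omega_reflex is the measured angular reflex rate of the asteroid (rad/s)."""
    return v_perp_km_s / reflex_rate_rad_s
```

As a round-trip sanity check, an asteroid at 2 AU observed with a perpendicular observer speed of 0.3 km/s would show a reflex rate of roughly 1e-9 rad/s, and feeding that rate back into `reflex_distance_km` recovers the 2 AU distance.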
Forward multiple scattering corrections as function of detector field of view
NASA Astrophysics Data System (ADS)
Zardecki, A.; Deepak, A.
1983-06-01
The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small angle approximation. The method is approximate in the sense that an additional approximation is made beyond the small angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as the size of the detector's aperture, for three different values of the optical depth tau (= 1.0, 4.0 and 10.0). Three cases of aperture size were considered, namely apertures equal to, smaller than, or larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three cases is clearly evident.
NASA Astrophysics Data System (ADS)
Goltz, Douglas; Boileau, Michael; Plews, Ian; Charleton, Kimberly; Hinds, Michael W.
2006-07-01
Spark ablation or electric dispersion of metal samples in aqueous solution can be a useful approach for sample preparation. The ablated metal forms a stable suspension that has been described as colloidal, which is easily dissolved with a small amount of concentrated (16 M) HNO3. In this study, we have examined some of the properties of the spark ablation process for a variety of metals (Rh and Au) and alloys (stainless steel) using a low power spark (100-300 W). Particle size distributions and conductivity measurements were carried out on selected metals to characterize the stable suspensions. A laser diffraction particle size analyzer was useful for showing that ablated particles varied in size from 1 to 30 μm for both the silver and the nickel alloy, Inconel. In terms of weight percent, most of the particles were between 10 and 30 μm. Conductivity of the spark ablation solution was found to increase linearly for approximately 3 min before leveling off at approximately 300 μS cm-1. These measurements suggest that a significant portion of the ablated metal is also ionic in nature. Scanning electron microscope measurements revealed that a low power spark is much less damaging to the metal surface than a high power spark. Crater formation from the low power spark occurred over a wider area than expected, with the highest concentration where the spark was directed. The feasibility of using spark ablation for metal dissolution of a valuable artifact such as gold was also demonstrated. Determinations of Ag (4-12%) and Cu (1-3%) in Bullion Reference Material (BRM) gave results that were in very good agreement with the certified values. The precision was ± 0.27% for Ag at 4.15% (RSD = 6.5%) and ± 0.09% for Cu at 1% (RSD = 9.0%).
Proton-Induced X-Ray Emission Analysis of Crematorium Emissions
NASA Astrophysics Data System (ADS)
Ali, Salina; Nadareski, Benjamin; Safiq, Alexandrea; Smith, Jeremy; Yoskowitz, Josh; Labrake, Scott; Vineyard, Michael
2013-10-01
There has been considerable concern in recent years about possible mercury emissions from crematoria. We have performed a particle-induced X-ray emission (PIXE) analysis of atmospheric aerosol samples collected on the roof of the crematorium at Vale Cemetery in Schenectady, NY, to address this concern. The samples were collected with a nine-stage cascade impactor that separates the particulate matter according to particle size. The aerosol samples were bombarded with 2.2-MeV protons from the Union College 1.1-MV Pelletron Accelerator. The emitted X-rays were detected with a silicon drift detector and the X-ray energy spectra were analyzed using GUPIX software to determine the elemental concentrations. We measured significant concentrations of sulfur, phosphorus, potassium, calcium, and iron, but essentially no mercury. The lower limit of detection for mercury in this experiment was approximately 0.2 ng/m3. We will describe the experimental procedure, discuss the PIXE analysis, and present preliminary results.
Cosca, Michael; Stunitz, Holger; Bourgiex, Anne-Lise; Lee, John P.
2011-01-01
The effects of deformation on radiogenic argon (40Ar*) retentivity in mica are described from high pressure experiments performed on rock samples of peraluminous granite containing euhedral muscovite and biotite. Cylindrical cores, ~15 mm in length and 6.25 mm in diameter, were drilled from granite collected from the South Armorican Massif in northwestern France, loaded into gold capsules, and weld-sealed in the presence of excess water. The samples were deformed at a pressure of 10 kb and a temperature of 600 degrees C over a period of 29 hours within a solid medium assembly in a Griggs-type triaxial hydraulic deformation apparatus. Overall shortening in the experiments was approximately 10%. Transmitted light and secondary and backscattered electron imaging of the deformed granite samples reveals evidence of induced defects and of significant physical grain size reduction by kinking, cracking, and grain segmentation of the micas.
Rheology of water ices V and VI
Durham, W.B.; Stern, L.A.; Kirby, S.H.
1996-01-01
We have measured the mechanical strength of pure water ices V and VI under steady state deformation conditions. Constant displacement rate compressional tests were conducted in a gas apparatus at confining pressures from 400 250 K. Ices V and VI are thus rheologically distinct but by coincidence have approximately the same strength under the conditions chosen for these experiments. To avoid misidentification, these tests are therefore accompanied by careful observations of the occurrences and characteristics of phase changes. One sample each of ice V and VI was quenched at pressure to metastably retain the high-pressure phase and the acquired deformation microstructures; X-ray diffraction analysis of these samples confirmed the phase identification. Surface replicas of the deformed and quenched samples suggest that ice V probably deforms largely by dislocation creep, while ice VI deforms by a more complicated process involving substantial grain size reduction through recrystallization.
Topin, Sylvain; Greau, Claire; Deliere, Ludovic; Hovesepian, Alexandre; Taffary, Thomas; Le Petit, Gilbert; Douysset, Guilhem; Moulin, Christophe
2015-11-01
The SPALAX (Système de Prélèvement Automatique en Ligne avec l'Analyse du Xénon) is one of the systems used in the International Monitoring System of the Comprehensive Nuclear Test Ban Treaty (CTBT) to detect radioactive xenon releases following a nuclear explosion. Approximately 10 years after the industrialization of the first system, the CEA has developed the SPALAX New Generation, SPALAX-NG, with the aim of increasing the global sensitivity and reducing the overall size of the system. A major breakthrough has been obtained by improving the sampling stage and the purification/concentration stage. The sampling stage evolution consists of increasing the sampling capacity and improving the gas treatment efficiency across new permeation membranes, leading to an increase in the xenon production capacity by a factor of 2-3. The purification/concentration stage evolution consists of using a new adsorbent Ag@ZSM-5 (or Ag-PZ2-25) with a much larger xenon retention capacity than activated charcoal, enabling a significant reduction in the overall size of this stage. The energy consumption of the system is similar to that of the current SPALAX system. The SPALAX-NG process is able to produce samples of almost 7 cm(3) of xenon every 12 h, making it the most productive xenon process among the IMS systems.
Microstructure modification and oxygen mobility of CeZrO2 nanocrystal doped with Y and Fe metals
NASA Astrophysics Data System (ADS)
Hadi, A.; Shah, M. N. A.; Ismail, K. N.; Roslan, A. B.
2017-09-01
CeZrO2 nanocrystals doped with Y3+ and Fe2+ have been successfully synthesized using the microemulsion method. In this study, the synergetic effect of the synthesis parameters on the formation of structure and catalytic property is reported. XRD analysis showed that both doped samples exhibit a symmetric cubic phase and form homogeneous solid solutions. The crystallite size of both samples was in the nanoscale: 11 and 15 nm for CZF and CZY, respectively. This finding was consistent with the physical size observed by TEM, which was approximately 10 nm for both samples. Meanwhile, the isotherm profiles for both samples corresponded to type IV in the IUPAC classification, indicating mesoporous materials. The CZY crystal had a higher BET surface area than the CZF crystal (141.3 versus 135.5 m2/g). The oxygen mobility study found that the CZF crystal becomes active at a lower temperature (274.2 °C) than CZY (302.7 °C). CZF also showed higher oxygen removal (286.35 mmol/g) than CZY (250.49 mmol/g). Doping the transition metal Fe into CeZrO2 tended to reduce the activation temperature for oxygen mobility, while adding the rare earth metal Y led to a remarkable increase in surface area.
NASA Technical Reports Server (NTRS)
Smith, Andrew M.; Davis, Robert Ben; LaVerde, Bruce T.; Jones, Douglas C.; Band, Jonathon L.
2012-01-01
Using the patch method to represent the continuous spatial correlation function of a phased pressure field over a structural surface is an approximation that approaches the continuous function as the patches become smaller. Plotting comparisons of the approximation against the continuous function may provide insight revealing: (1) for what patch size/density the approximation should be very good; (2) what the approximation looks like when it begins to break down; and (3) what the approximation looks like when the patch size is grossly too large. Following these observations with a convergence study using one FEM may allow us to see the importance of patch density. We may develop insights that help us to predict the patch density sufficient to provide adequate convergence over the frequency range of interest for the intended purpose.
NASA Technical Reports Server (NTRS)
Lehmer, B. D.; Berkeley, M.; Zezas, A.; Alexander, D. M.; Basu-Zych, A.; Bauer, F. E.; Brandt, W. N.; Fragos, T.; Hornschemeier, A. E.; Kalogera, V.;
2014-01-01
We present direct constraints on how the formation of low-mass X-ray binary (LMXB) populations in galactic fields depends on stellar age. In this pilot study, we utilize Chandra and Hubble Space Telescope (HST) data to detect and characterize the X-ray point source populations of three nearby early-type galaxies: NGC 3115, 3379, and 3384. The luminosity-weighted stellar ages of our sample span approximately 3-10 Gyr. X-ray binary population synthesis models predict that the field LMXBs associated with younger stellar populations should be more numerous and luminous per unit stellar mass than older populations due to the evolution of LMXB donor star masses. Crucially, the combination of deep Chandra and HST observations allows us to test directly this prediction by identifying and removing counterparts to X-ray point sources that are unrelated to the field LMXB populations, including LMXBs that are formed dynamically in globular clusters, Galactic stars, and background AGN/galaxies. We find that the "young" early-type galaxy NGC 3384 (approximately 2-5 Gyr) has more luminous field LMXBs (L(sub X) approximately greater than (5-10) × 10(exp 37) erg s(exp -1)) per unit K-band luminosity (L(sub K); a proxy for stellar mass) than the "old" early-type galaxies NGC 3115 and 3379 (approximately 8-10 Gyr), which results in a factor of 2-3 excess of L(sub X)/L(sub K) for NGC 3384. This result is consistent with the X-ray binary population synthesis model predictions; however, our small galaxy sample size does not allow us to draw definitive conclusions on the evolution of field LMXBs in general. We discuss how future surveys of larger galaxy samples that combine deep Chandra and HST data could provide a powerful new benchmark for calibrating X-ray binary population synthesis models.
Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.
2012-01-01
Abstract. Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue–air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo. PMID:23235834
NASA Astrophysics Data System (ADS)
Unglert, Carolin I.; Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.
2012-12-01
Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue-air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo.
3-D breast anthropometry of plus-sized women in South Africa.
Pandarum, Reena; Yu, Winnie; Hunter, Lawrance
2011-09-01
Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC(2) NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated, although, on average, the bra cup size determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that the total bust volume, which correlated with the quadrant volume (r = 0.81), cup length, bust length and bust prominence, should be selected as the overall measure of bust size, rather than the traditional bust girth and underbust measurements. STATEMENT OF RELEVANCE: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics Journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. 
Finally, we offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
Deformation behaviour of Rheocast A356 Al alloy at microlevel considering approximated RVEs
NASA Astrophysics Data System (ADS)
Islam, Sk. Tanbir; Das, Prosenjit; Das, Santanu
2015-03-01
A micromechanical approach is considered here to predict the deformation behaviour of Rheocast A356 (Al-Si-Mg) alloy. Two representative volume elements (RVEs) are modelled in the finite element (FE) framework. Two-dimensional approximated microstructures are generated assuming elliptic grains, based on the grain size, shape factor and area fraction of the primary Al phase of the alloy at different processing conditions. Plastic instability is shown using the stress and strain distribution between the Al rich primary and Si rich eutectic phases under different boundary conditions. Boundary conditions are applied on the approximated RVEs in such a manner that they represent the real life situation depending on their position on a cylindrical tensile test sample. FE analysis is carried out using the commercial finite element code ABAQUS without specifying any damage or failure criteria. Micro-level inhomogeneity leads to incompatible deformation between the constituent phases of the rheocast alloy and steers plastic strain localisation. Plastic strain localised regions within the RVEs are predicted to be the favourable sites for void nucleation. Subsequent growth of nucleated voids leads to final failure of the materials under investigation.
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods.
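A minimal sketch of the Poisson-approximation principle described above (not the published ePAST/mPAST implementations): under the null hypothesis of no association, the total minor-allele count observed in cases is approximately Poisson with mean proportional to the case fraction of the sample, so a standardized score yields a simple test.

```python
import math

def poisson_score_test(case_count, total_count, n_cases, n_total):
    """One-sided score test comparing the pooled rare-allele count in cases
    against its Poisson expectation under no association.

    This is a simplified illustration of the Poisson-approximation idea,
    not the PAST statistic itself: under H0, the case count is approximately
    Poisson with mean total_count * n_cases / n_total.
    """
    expected = total_count * n_cases / n_total          # null mean for cases
    z = (case_count - expected) / math.sqrt(expected)   # standardized score
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))     # upper-tail normal p
    return z, p_one_sided
```

For example, 30 of 40 pooled minor alleles falling in cases that make up half the sample gives z ≈ 2.24 and a one-sided p below 0.05, whereas an even split gives z = 0.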
NASA Astrophysics Data System (ADS)
Cahill, A.; Jakobsen, R.
2012-04-01
In order to assess the environmental implications of leakage of CO2 from a geological sequestration site into overlying shallow potable aquifers, a 3 month field release experiment is planned to commence in spring 2012 at Vrøgum plantation, Western Denmark. To test the injection and sampling methodologies and as a study of short term effects, a pilot experiment was conducted at the field site: 45 kg of food grade CO2 was injected at 10 m depth over 48 hours into an unconfined, aeolian/glacial sand aquifer and the effects on water chemistry studied. The CO2 was injected through an inclined well installed with a 1 m length of porous polyethylene well screen (20 µm pore size) initially at a rate of 5 litres per minute increasing to 10 litres per minute after 24 hours. Water samples were taken from a network of multi-level sample points (8, 4 and 2.4m depth) before, during and after the injection and measured for physico-chemical parameters and major/trace element composition. Although the site possesses a relatively high hydraulic conductivity (12-16 m/day), due to the small hydraulic gradient (0.0039) 6 days elapsed before effects of CO2 on the ground water were detected in the first sampling point located 0.5 m down flow from the injection well. The dissolved plume of CO2 was observed only in the 8 m depth sample points and moved with flow (approximately 0.10 - 0.12 m/day). The plume spread laterally to 2m width as little as 1 m from the injection screen after 26 days, indicating that CO2 bubbles change the hydraulics of the medium. Dissolved CO2 was not detected in sample points at 4 or 2.4 m depth at any time during the experiment, suggesting gas could not move into the slightly finer grained upper sand. 
Effects of CO2 dissolution at 8 m depth were manifest as a clear and stable increase in electrical conductivity (approximately 160 to 300 µS/cm), a relatively small increase in total dissolved ions (approximately 30 to 50 mg/l) and an unstable depression of pH (approximately 5.8 to 4.73). The dissolved CO2 plume evolved with a distinct maximal front observed to pass through sample points, followed by a slowly dissipating tail. After 56 days the CO2 plume had reached the end of the monitoring network and was at its greatest extent (5 m length by 1 m width); however, it still appeared to be increasing in size, suggesting that residual gas-phase CO2 trapped within the pore space was continuously dissolving. Water quality did not significantly deteriorate, and only small increases in major and trace elements were observed. Overall, the groundwater chemistry results indicate that for an aquifer composed primarily of slowly reacting silicate sediments, such as Vrøgum, the risks to water resources from a short term leak from CCS into shallow overlying aquifers are minimal. However, a potential accumulation effect within the plume front as it moved through the formation was observed, implying that a large scale leak may develop a CO2-charged plume exceeding guideline values for major and trace elements.
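The observed plume speed is consistent with a simple seepage-velocity estimate, v = K·i / n_e, using the hydraulic conductivity and gradient reported above; the effective porosity used below is an assumed value typical of aeolian sand, not one stated in the abstract.

```python
def seepage_velocity_m_per_day(K_m_per_day, gradient, effective_porosity):
    """Average linear groundwater velocity from Darcy's law: v = K * i / n_e."""
    return K_m_per_day * gradient / effective_porosity

# Reported values: K = 12-16 m/day, i = 0.0039.
# Effective porosity ~0.40 is an assumption for clean aeolian sand.
v_low = seepage_velocity_m_per_day(12.0, 0.0039, 0.40)
v_high = seepage_velocity_m_per_day(16.0, 0.0039, 0.40)
```

With these inputs the estimate spans roughly 0.12-0.16 m/day, bracketing the observed plume velocity of 0.10-0.12 m/day at its low end.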
Samuvel, K; Ramachandran, K
2015-07-05
This study examined the effects of the combination of starting materials on the properties of solid-state reacted BaTiO3 using two different types of BaCO3 and TiO2. In addition, the effect of mechanochemical activation by high energy milling and the Ba/Ti molar ratio on the reaction temperature, particle size and tetragonality were investigated. The TiO2 phase and size plays a major role in increasing the reaction temperature and particle size. With the optimum selection of starting materials and processing conditions, BaTiO3 with a particle size <200 nm (Scherrer's formula) and a tetragonality c/a of approximately 1.007 was obtained. Broadband dielectric spectroscopy is applied to investigate the electrical properties of disordered perovskite-like ceramics in a wide temperature range. From the X-ray diffraction analysis it was found that the newly obtained BaTi0.5Fe0.5O3 ceramics consist of two chemically different phases. The electric modulus M∗ formalism used in the analysis enabled us to distinguish and separate the relaxation processes, dominated by marked conductivity in the ε∗(ω) representation. Interfacial effects on the dielectric properties of the samples have been understood by Cole-Cole plots in complex impedance and modulus formalism. Modulus formalism has identified the effects of both grain and grain boundary microstructure on the dielectric properties, particularly in solid state routed samples.
NASA Astrophysics Data System (ADS)
Samuvel, K.; Ramachandran, K.
2015-07-01
This study examined the effects of the combination of starting materials on the properties of solid-state reacted BaTiO3 using two different types of BaCO3 and TiO2. In addition, the effect of mechanochemical activation by high energy milling and the Ba/Ti molar ratio on the reaction temperature, particle size and tetragonality were investigated. The TiO2 phase and size plays a major role in increasing the reaction temperature and particle size. With the optimum selection of starting materials and processing conditions, BaTiO3 with a particle size <200 nm (Scherrer's formula) and a tetragonality c/a of approximately 1.007 was obtained. Broadband dielectric spectroscopy is applied to investigate the electrical properties of disordered perovskite-like ceramics in a wide temperature range. From the X-ray diffraction analysis it was found that the newly obtained BaTi0.5Fe0.5O3 ceramics consist of two chemically different phases. The electric modulus M∗ formalism used in the analysis enabled us to distinguish and separate the relaxation processes, dominated by marked conductivity in the ε∗(ω) representation. Interfacial effects on the dielectric properties of the samples have been understood by Cole-Cole plots in complex impedance and modulus formalism. Modulus formalism has identified the effects of both grain and grain boundary microstructure on the dielectric properties, particularly in solid state routed samples.
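The sub-200 nm particle size quoted above was obtained from Scherrer's formula, D = Kλ / (β cos θ). A minimal sketch of that calculation follows; the shape factor K = 0.9 and the Cu Kα wavelength are conventional assumptions, not values stated in the abstract, and the peak width is assumed already corrected for instrumental broadening.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size from XRD peak broadening: D = K * lambda / (beta * cos(theta)).

    fwhm_deg is the peak full width at half maximum in degrees 2-theta,
    assumed corrected for instrumental broadening. Defaults assume Cu K-alpha
    radiation and the conventional shape factor K = 0.9.
    """
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle from 2-theta
    return K * wavelength_nm / (beta * math.cos(theta))
```

For instance, a peak at 2θ = 30° with a 0.5° FWHM corresponds to a crystallite size of roughly 16 nm; narrower peaks give proportionally larger sizes.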
Grosskurth, H; Mosha, F; Todd, J; Senkoro, K; Newell, J; Klokke, A; Changalucha, J; West, B; Mayaud, P; Gavyole, A
1995-08-01
To determine baseline HIV prevalence in a trial of improved sexually transmitted disease (STD) treatment, and to investigate risk factors for HIV. To assess comparability of intervention and comparison communities with respect to HIV/STD prevalence and risk factors. To assess adequacy of sample size. Twelve communities in Mwanza Region, Tanzania: one matched pair of roadside communities, four pairs of rural communities, and one pair of island communities. One community from each pair was randomly allocated to receive the STD intervention following the baseline survey. Approximately 1000 adults aged 15-54 years were randomly sampled from each community. Subjects were interviewed, and HIV and syphilis serology performed. Men with a positive leucocyte esterase dipstick test on urine, or reporting a current STD, were tested for urethral infections. A total of 12,534 adults were enrolled. Baseline HIV prevalences were 7.7% (roadside), 3.8% (rural) and 1.8% (islands). Associations were observed with marital status, injections, education, travel, history of STD and syphilis serology. Prevalence was higher in circumcised men, but not significantly after adjusting for confounders. Intervention and comparison communities were similar in the prevalence of HIV (3.8 versus 4.4%), active syphilis (8.7 versus 8.2%), and most recorded risk factors. Within-pair variability in HIV prevalence was close to the value assumed for sample size calculations. The trial cohort was successfully established. Comparability of intervention and comparison communities at baseline was confirmed for most factors. Matching appears to have achieved a trial of adequate sample size. The apparent lack of a protective effect of male circumcision contrasts with other studies in Africa.
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes.
The Influence of Alumina Properties on its Dissolution in Smelting Electrolyte
NASA Astrophysics Data System (ADS)
Bagshaw, A. N.; Welch, B. J.
The dissolution of a wide range of commercially produced aluminas in modified cryolite bath was studied on a laboratory scale. Most of the aluminas were products of conventional refineries and smelter dry scrubbing systems; a few were produced in laboratory and pilot calciners, enabling greater flexibility in the calcination process and the final properties. The mode of alumina feeding and the size of addition approximated the point-feeder situation. Alpha-alumina content, B.E.T. surface area and median particle size had little impact on dissolution behaviour. The volatiles content, expressed as L.O.I., the morphology of the original hydrate and the mode of calcination had the most influence. Discrete intermediate oxide phases were identified in all samples; delta-alumina content had the greatest impact on dissolution. The flow properties of an alumina affected its overall dissolution.
Laser diffraction of acicular particles: practical applications
NASA Astrophysics Data System (ADS)
Scott, David M.; Matsuyama, Tatsushi
2014-08-01
Commercial laser diffraction instruments are widely used to measure particle size distribution (PSD), but the results are distorted for non-spherical (acicular) particles often encountered in practical applications. Consequently the distribution, which is reported in terms of equivalent spherical diameter, requires interpretation. For rod-like and plate-like particles, the PSD tends to be bi-modal, with the two modal sizes closely related to the median length and width, or width and thickness, of the particles. Furthermore, it is found that the bi-modal PSD for at least one instrument can typically be approximated by a bi-lognormal distribution. By fitting such a function to the reported distribution, one may extract quantitative information useful for process or product development. This approach is illustrated by examples of such measurement on industrial samples of polymer particles, crystals, bacteria, and clays.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
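For the normal-mean case of a one-parameter exponential family, the defining property can be checked numerically. The sketch below is an illustration under assumed values of n, sigma and gamma: for a simple-vs-simple test of H0: mu = 0 against H1: mu = mu1 with known sigma, the Bayes factor exceeds gamma exactly when the sample mean crosses a threshold, and the point alternative that minimizes that threshold (hence maximizes the probability that BF > gamma) sits near sigma*sqrt(2*ln(gamma)/n).

```python
import math

def prob_bf_exceeds(mu1, n, sigma, gamma):
    """P(BF > gamma | H0) for a simple-vs-simple normal-mean test.
    BF = exp(n*mu1*xbar/sigma^2 - n*mu1^2/(2*sigma^2)) exceeds gamma
    exactly when xbar crosses a threshold that depends on mu1."""
    thresh = (math.log(gamma) * sigma ** 2) / (n * mu1) + mu1 / 2.0
    z = thresh * math.sqrt(n) / sigma  # xbar ~ N(0, sigma^2/n) under H0
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # standard-normal tail

n, sigma, gamma = 25, 1.0, 10.0
grid = [i / 1000.0 for i in range(1, 3001)]
best = max(grid, key=lambda m: prob_bf_exceeds(m, n, sigma, gamma))
analytic = sigma * math.sqrt(2.0 * math.log(gamma) / n)
```

Because the exceedance probability is decreasing in the threshold regardless of the true sampling distribution of the mean, the same mu1 maximizes P(BF > gamma) under every data-generating parameter value, which is what makes the test uniformly most powerful in the Bayesian sense.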
Brief report: a preliminary study of fetal head circumference growth in autism spectrum disorder.
Whitehouse, Andrew J O; Hickey, Martha; Stanley, Fiona J; Newnham, John P; Pennell, Craig E
2011-01-01
Fetal head circumference (HC) growth was examined prospectively in children with autism spectrum disorder (ASD). ASD participants (N = 14) were each matched with four control participants (N = 56) on a range of parameters known to influence fetal growth. HC was measured using ultrasonography at approximately 18 weeks gestation and again at birth using a paper tape-measure. Overall body size was indexed by fetal femur-length and birth length. There was no between-groups difference in head circumference at either time-point. While a small number of children with ASD had disproportionately large head circumference relative to body size at both time-points, the between-groups difference did not reach statistical significance in this small sample. These preliminary findings suggest that further investigation of fetal growth in ASD is warranted.
Chen, Hua; Chen, Kun
2013-01-01
The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939
Chen, Hua; Chen, Kun
2013-07-01
The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
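The small-t Poisson claim can be illustrated for the constant-size case with a short simulation (a sketch under standard coalescent assumptions, not the paper's varying-size derivation): the number of mergers by a small time t should have mean approximately equal to its variance. Sample size, population size and the time horizon below are illustrative.

```python
import random

def coalesced_by(n, N, t, reps, rng):
    """Simulate the standard constant-size coalescent for a diploid
    population of size N and count how many of the n sampled lineages
    have coalesced by time t (in generations)."""
    counts = []
    for _ in range(reps):
        k, elapsed, merged = n, 0.0, 0
        while k > 1:
            rate = k * (k - 1) / (2.0 * (2.0 * N))  # pairwise merger rate
            elapsed += rng.expovariate(rate)
            if elapsed > t:
                break
            k -= 1
            merged += 1
        counts.append(merged)
    return counts

rng = random.Random(7)
counts = coalesced_by(100, 10_000, 4.0, 20_000, rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For small t the count n - An(t) is approximately Poisson,
# so its sample mean and variance should nearly coincide.
```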
Dehydration and Denitrification in the Arctic Polar Vortex During the 1995-1996 Winter
NASA Technical Reports Server (NTRS)
Hintsa, E. J.; Newman, P. A.; Jonsson, H. H.; Webster, C. R.; May, R. D.; Herman, R. L.; Lait, L. R.; Schoeberl, M. R.; Elkins, J. W.; Wamsley, P. R.;
1998-01-01
Dehydration of more than 0.5 ppmv water was observed between 18 and 19 km (theta approximately 450-465 K) at the edge of the Arctic polar vortex on February 1, 1996. More than half the reactive nitrogen (NO(y)) had also been removed, with layers of enhanced NO(y) at lower altitudes. Back trajectory calculations show that air parcels sampled inside the vortex had experienced temperatures as low as 188 K within the previous 12 days, consistent with a small amount of dehydration. The depth of the dehydrated layer (approximately 1 km) and the fact that trajectories passed through the region of ice saturation in one day imply selective growth of a small fraction of particles to sizes large enough (>10 micrometers) to be irreversibly removed on this timescale. Over 25% of the Arctic vortex in a 20-30 K range of theta is estimated to have been dehydrated in this event.
Decontamination effect of milling by a jet mill on bacteria in rice flour.
Sotome, Itaru; Nei, Daisuke; Tsuda, Masuko; Mohammed, Sharif Hossen; Takenaka, Makiko; Okadome, Hiroshi; Isobe, Seiichiro
2011-06-01
The decontamination effect of milling by a jet mill was investigated by counting the number of bacteria in brown and white rice flour with mean particle diameters of 3, 20, and 40 µm prepared by the jet mill. In the jet mill, the particles are crushed and reduced in size by the mechanical impact caused by their collision. Although the brown and white rice grains were contaminated with approximately 10^6 and 10^5 CFU/g bacteria, the microbial load of the rice flour decreased as the mean particle diameter decreased, ultimately decreasing to approximately 10^4 and 10^3 CFU/g in the brown and white rice flour. The temperature and pressure changes of the sample were not considered to have an effect on reducing the bacterial count during the milling. Hence, it was thought that the rice flour was decontaminated by other effects.
Radar investigation of asteroids
NASA Technical Reports Server (NTRS)
Ostro, S. J.
1983-01-01
For 80 Sappho, 356 Liguria, 694 Ekard, and 2340 Hathor, data were taken simultaneously in the same sense of circular polarization as transmitted (SC) as well as in the opposite (OC) sense. Graphs show the average OC and SC radar echo power spectra smoothed to a resolution of EFB Hz and plotted against Doppler frequency. Radar observations of the peculiar object 2201 Oljato reveal an unusual set of echo power spectra. The albedo and polarization ratio remain fairly constant but the bandwidths range from approximately 0.8 Hz to 1.4 Hz and the spectral shapes vary dramatically. Echo characteristics within any one date's approximately 2.5-hr observation period do not fluctuate very much. Laboratory measurements of the radar frequency electrical properties of particulate metal-plus-silicate mixtures can be combined with radar albedo estimates to constrain the bulk density and metal weight fraction in a hypothetical asteroid regolith having the same particle size distribution as lab samples.
Emergency in-flight egress opening for general aviation aircraft
NASA Technical Reports Server (NTRS)
Bement, L. J.
1980-01-01
In support of a stall/spin research program, an emergency in-flight egress system is being installed in a light general aviation airplane. To avoid a major structural redesign for a mechanical door, an add-on 11.2 kg pyrotechnic-actuated system was developed to create an opening in the existing structure. The airplane skin will be explosively severed around the side window, across a central stringer, and down to the floor, creating an opening of approximately 76 by 76 cm. The severed panel will be jettisoned at an initial velocity of approximately 13.7 m/sec. System development included a total of 68 explosive severance tests on aluminum material using small samples, small and full scale flat panel aircraft structural mock-ups, and an actual aircraft fuselage. These tests proved explosive sizing/severance margins, explosive initiation, explosive product containment, and system dynamics.
Nondestructive ultrasonic characterization of armor grade silicon carbide
NASA Astrophysics Data System (ADS)
Portune, Andrew Richard
Ceramic materials have traditionally been chosen for armor applications for their superior mechanical properties and low densities. At the high strain rates seen during ballistic events, the behavior of these materials relies upon the total volumetric flaw concentration more so than any single anomalous flaw. In this context flaws can be defined as any microstructural feature which degrades the performance of the material, potentially including secondary phases, pores, or unreacted sintering additives. Predicting the performance of armor grade ceramic materials depends on knowledge of the absolute and relative concentration and size distribution of bulk heterogeneities. Ultrasound was chosen as a nondestructive technique for characterizing the microstructure of dense silicon carbide ceramics. Acoustic waves interact elastically with grains and inclusions in large sample volumes, and were well suited to determine concentration and size distribution variations for solid inclusions. Methodology was developed for rapid acquisition and analysis of attenuation coefficient spectra. Measurements were conducted at individual points and over large sample areas using a novel technique termed scanning acoustic spectroscopy. Loss spectra were split into absorption and scattering dominant frequency regimes to simplify analysis. The primary absorption mechanism in polycrystalline silicon carbide was identified as thermoelastic in nature. Correlations between microstructural conditions and parameters within the absorption equation were established through study of commercial and custom engineered SiC materials. Nonlinear least squares regression analysis was used to estimate the size distributions of boron carbide and carbon inclusions within commercial SiC materials. This technique was shown to additionally be capable of approximating grain size distributions in engineered SiC materials which did not contain solid inclusions.
Comparisons to results from electron microscopy exhibited favorable agreement between predicted and observed distributions. Developed techniques were applied to large sample areas using scanning acoustic spectroscopy to map variations in the size distribution and concentration of grains and solid inclusions within the bulk microstructure. The experiments performed in this thesis form the foundation of a novel characterization technique capable of mapping variations in sample composition which could be extended to a wide range of dense polycrystalline heterogeneous materials.
Interparticle interaction effects on the magnetic order at the surface of Fe3O4 nanoparticles.
Lima, E; Vargas, J M; Rechenberg, H R; Zysler, R D
2008-11-01
We report interparticle interaction effects on the magnetic structure of the surface region in Fe3O4 nanoparticles. To this end, we studied a model system composed of Fe3O4 nanoparticles with ⟨d⟩ = 9.3 nm and a narrow size distribution. These particles present an interesting morphology constituted by a crystalline core and a broad (approximately 50% vol.) disordered superficial shell. Two samples were prepared with distinct concentrations of the particles: weakly-interacting particles dispersed in a polymer and strongly-dipolar-interacting particles in a powder sample. M(H, T) measurements clearly show that strong dipolar interparticle interaction modifies the magnetic structure of the structurally disordered superficial shell. Consequently, we observed drastically distinct thermal behaviours of magnetization and susceptibility when comparing weakly- and strongly-interacting samples over the temperature range 2 K < T < 300 K. We also observed a temperature-field dependence of the hysteresis loops of the dispersed sample that is not observed in the hysteresis loops of the powder one.
A study on magneto-optic properties of CoxMg1-xFe2O4 nanoferrofluids
NASA Astrophysics Data System (ADS)
Karthick, R.; Ramachandran, K.; Srinivasan, R.
2018-04-01
Nanoparticles of CoxMg1-xFe2O4 (x = 0.1, 0.5, 0.9) were synthesized using the chemical co-precipitation method. Characterization by X-ray diffraction confirmed the formation of a cubic crystalline structure, and the crystallite sizes of the samples, obtained using the Debye-Scherrer approximation, were found to increase with increasing cobalt substitution. Surface morphology and chemical composition of the samples were examined using scanning electron microscopy (SEM) with energy dispersive analysis of X-rays (EDAX). Room-temperature measurements of the nanoparticles by vibrating sample magnetometer (VSM) revealed that the magnetic properties saturation magnetization (Ms), remanent magnetization (Mr) and coercive field (Hc) increase with increasing cobalt substitution. Faraday rotation measurements on CoxMg1-xFe2O4 ferrofluids exhibited an increase in rotation with cobalt substitution. Further, there is an increase in Faraday rotation with increasing magnetic field for all the samples.
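The Debye-Scherrer approximation used above, D = K·λ/(β·cos θ), is straightforward to evaluate. The peak position and width in this sketch are hypothetical values for illustration, with Cu K-alpha radiation and a shape factor K = 0.9 assumed.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)), where beta is
    the peak FWHM converted to radians and theta is the Bragg angle
    (half of the measured 2-theta)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical spinel (311) reflection measured with Cu K-alpha (0.15406 nm)
D = scherrer_size(0.15406, 0.5, 35.0)  # crystallite size in nm
```

A broader peak (larger FWHM) yields a smaller crystallite size, which is why peak sharpening with cobalt substitution corresponds to the reported size increase.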
NASA Astrophysics Data System (ADS)
Ni, W.; Zhang, Z.; Sun, G.
2017-12-01
Several large-scale maps of forest AGB have been released [1] [2] [3]. However, these existing global or regional datasets were only approximations based on combining land cover type and representative values instead of measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies of existing forest biomass stock maps with in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gap between the size of the field sampling plots used to develop (or validate) estimation models and the pixel size of these maps, and from the limited availability of field sampling plots of sufficient size for verification of these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field samples over plots as large as the resolution cells of regional maps. Smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density and tree species. What is measured in field sampling are the geometrical characteristics of forest stands, including DBH, tree heights and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset capable of directly measuring forest spatial structures may therefore serve as a ladder to bridge the scale gap between the pixel size of regional forest AGB maps and the field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structures [7, 8]. In this study, the forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge.
The TanDEM-X InSAR data used in this study and the resulting forest AGB map are shown in Figure 2. The technical details and further analysis will be given in the final report. Acknowledgment: This work was supported in part by the National Basic Research Program of China (Grant No. 2013CB733401, 2013CB733404), and in part by the National Natural Science Foundation of China (Grant Nos. 41471311, 41371357, 41301395).
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
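A much simpler cousin of the correction derived above is the classic Terzaghi-style 1/sin(angle) weighting, sketched here in a hypothetical 2D setting rather than the paper's numerical solutions for dip direction and dip angle: discontinuities nearly parallel to the scanline are under-sampled in proportion to sin(angle), so reweighting each observation by 1/sin(angle) approximately restores the inherent distribution.

```python
import math
import random

def sample_biased(n, rng):
    """True orientations uniform on (0, 90) degrees relative to the
    scanline; a scanline intersects a discontinuity with probability
    proportional to sin(angle), over-sampling high-angle sets."""
    out = []
    while len(out) < n:
        a = rng.uniform(0.0, 90.0)
        if rng.random() < math.sin(math.radians(a)):  # rejection sampling
            out.append(a)
    return out

rng = random.Random(3)
obs = sample_biased(100_000, rng)
raw_mean = sum(obs) / len(obs)  # biased upward, toward high angles

# Terzaghi-style correction: weight each observation by 1/sin(angle)
w = [1.0 / math.sin(math.radians(a)) for a in obs]
corr_mean = sum(a * wi for a, wi in zip(obs, w)) / sum(w)
```

The uncorrected mean sits near 57 degrees even though the true mean is 45 degrees; the weighted mean recovers it. The real correction problem is harder because orientation lives on the sphere and the weights blow up near parallelism, which is one motivation for the numerical treatment in the paper.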
Analysis of replication factories in human cells by super-resolution light microscopy
2009-01-01
Background DNA replication in human cells is performed in discrete sub-nuclear locations known as replication foci or factories. These factories form in the nucleus during S phase and are sites of DNA synthesis and high local concentrations of enzymes required for chromatin replication. Why these structures are required, and how they are organised internally has yet to be identified. It has been difficult to analyse the structure of these factories as they are small in size and thus below the resolution limit of the standard confocal microscope. We have used stimulated emission depletion (STED) microscopy, which improves on the resolving power of the confocal microscope, to probe the structure of these factories at sub-diffraction limit resolution. Results Using immunofluorescent imaging of PCNA (proliferating cell nuclear antigen) and RPA (replication protein A) we show that factories are smaller in size (approximately 150 nm diameter), and greater in number (up to 1400 in an early S-phase nucleus), than is determined by confocal imaging. The replication inhibitor hydroxyurea caused an approximately 40% reduction in number and a 30% increase in diameter of replication factories, changes that were not clearly identified by standard confocal imaging. Conclusions These measurements for replication factory size now approach the dimensions suggested by electron microscopy. This agreement between these two methods, that use very different sample preparation and imaging conditions, suggests that we have arrived at a true measurement for the size of these structures. The number of individual factories present in a single nucleus that we measure using this system is greater than has been previously reported. This analysis therefore suggests that each replication factory contains fewer active replication forks than previously envisaged. PMID:20015367
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~9 mm) to smaller channel sizes (<5 mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameters ranging from 0.5 to 2.0 mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of air-side performance of heat exchangers including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air-side pressure drop and doubled air heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
Basunia, S; Landsberger, S
2001-10-01
Pantex firing range soil samples were analyzed for Pb, Cu, Sb, Zn, and As. One hundred ninety-seven samples were collected from the firing range and vicinity area. There was a lack of knowledge about the distribution of Pb in the firing range, so a random sampling with proportional allocation was chosen. Concentration levels of Pb and Cu in the firing range were found to be in the range of 11-4675 and 13-359 mg/kg, respectively. Concentration levels of Sb were found to be in the range of 1-517 mg/kg. However, the Zn and As concentration levels were close to average soil background levels. The Sn concentration level was expected to be higher in the Pantex firing range soil samples. However, it was found to be below the neutron activation analysis (NAA) detection limit of 75 mg/kg. Enrichment factor analysis showed that Pb and Sb were highly enriched in the firing range with average magnitudes of 55 and 90, respectively. Cu was enriched approximately 6 times more than the usual soil concentration levels. Toxicity characteristic leaching procedure (TCLP) was carried out on size-fractionated homogeneous soil samples. The concentration levels of Pb in leachates were found to be approximately 12 times higher than the U.S. Environmental Protection Agency (EPA) regulatory concentration level of 5 mg/L. Sequential extraction (SE) was also performed to characterize Pb and other trace elements into five different fractions. The highest Pb fraction was found with organic matter in the soil.
Potential risks of the residue from Samarco's mine dam burst (Bento Rodrigues, Brazil).
Segura, Fabiana Roberta; Nunes, Emilene Arusievicz; Paniz, Fernanda Pollo; Paulelli, Ana Carolina Cavalheiro; Rodrigues, Gabriela Braga; Braga, Gilberto Úbida Leite; Dos Reis Pedreira Filho, Walter; Barbosa, Fernando; Cerchiaro, Giselle; Silva, Fábio Ferreira; Batista, Bruno Lemos
2016-11-01
On November 5th, 2015, Samarco's iron mine dam - called Fundão - spilled 50-60 million m³ of mud into Gualaxo do Norte, a river that belongs to the Rio Doce Basin. Approximately 15 km² were flooded along the rivers Gualaxo do Norte, Carmo and Doce, reaching the Atlantic Ocean on November 22nd, 2015. Six days later, our group collected mud, soil and water samples in Bento Rodrigues (Minas Gerais, Brazil), the first impacted area. Overall, chemical element concentrations in the water samples - potable water and surface water from the river - complied with Brazilian environmental legislation, except for silver in surface water, which ranged from 1.5 to 1087 μg/L. In addition, mud-containing water presented Fe and Mn concentrations approximately 4-fold higher than the maximum limit for water-body quality assessment, according to Brazilian laws. Mud particle size ranged from 1 to 200 μm. SEM-EDS spot analysis provided semi-quantitative data. Leaching/extraction tests suggested that Ba, Pb, As, Sr, Fe, Mn and Al have high potential for mobilization from mud to water. Mud samples showed low microbial diversity compared to background soil samples. Toxicological bioassays (HepG2 and Allium cepa) indicated potential risks of cytotoxicity and DNA damage for the mud and soil samples used in both assays. The present study provides preliminary information intended to support the development of future work on monitoring and risk assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lico, M.S.; Welch, A.H.; Hughes, J.L.
1986-01-01
The U.S. Geological Survey collected an extensive amount of hydrogeologic data from the shallow alluvial aquifer at two study sites near Fallon, Nevada, from 1984 through 1985. These data were collected as part of a study to determine the geochemical controls on the mobility of arsenic and other trace elements in shallow groundwater systems. The main study area is approximately 7 miles south of Fallon. A subsidiary study area is about 8 miles east of Fallon. The data collected include lithologic logs and water level altitudes for the augered sampling wells and piezometers, and determinations of arsenic and selenium content, grain size, porosity, hydraulic conductivity, and mineralogy for sediment samples from cores. (USGS)
The effect of sampling rate on observed statistics in a correlated random walk
Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.
2013-01-01
Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
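The effect of sampling interval on apparent speed can be reproduced with a minimal correlated-random-walk simulation (instantaneous reorientations with Gaussian turning angles; the step speed and turning-angle spread are illustrative parameter choices, not values from the paper):

```python
import math
import random

def crw_path(steps, speed, sigma_turn, rng):
    """Correlated random walk: constant step speed, Gaussian turning
    angles with standard deviation sigma_turn (radians) per step."""
    x = y = theta = 0.0
    path = [(x, y)]
    for _ in range(steps):
        theta += rng.gauss(0.0, sigma_turn)
        x += speed * math.cos(theta)
        y += speed * math.sin(theta)
        path.append((x, y))
    return path

def apparent_speed(path, interval):
    """Mean displacement per unit time when the walk is observed only
    every `interval` steps."""
    pts = path[::interval]
    total = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
    return total / (interval * (len(pts) - 1))

rng = random.Random(0)
path = crw_path(100_000, 1.0, 0.3, rng)
s1, s10, s50 = (apparent_speed(path, k) for k in (1, 10, 50))
# Coarser sampling cuts across turns, so apparent speed falls: s1 > s10 > s50
```

This is the core sampling artifact: the straight-line distance between two widely spaced observations is shorter than the path actually travelled, so the inferred speed decreases as the sampling interval grows.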
Estimating haplotype frequencies by combining data from large DNA pools with database information.
Gasbarra, Dario; Kulathinal, Sangita; Pirinen, Matti; Sillanpää, Mikko J
2011-01-01
We assume that allele frequency data have been extracted from several large DNA pools, each containing genetic material of up to hundreds of sampled individuals. Our goal is to estimate the haplotype frequencies among the sampled individuals by combining the pooled allele frequency data with prior knowledge about the set of possible haplotypes. Such prior information can be obtained, for example, from a database such as HapMap. We present a Bayesian haplotyping method for pooled DNA based on a continuous approximation of the multinomial distribution. The proposed method is applicable when the sizes of the DNA pools and/or the number of considered loci exceed the limits of several earlier methods. In the example analyses, the proposed model clearly outperforms a deterministic greedy algorithm on real data from the HapMap database. With a small number of loci, the performance of the proposed method is similar to that of an EM-algorithm, which uses a multinormal approximation for the pooled allele frequencies, but which does not utilize prior information about the haplotypes. The method has been implemented using Matlab and the code is available upon request from the authors.
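The Bayesian model itself is not reproduced here, but the core identifiability idea, that pooled per-locus allele frequencies combined with a database-derived list of candidate haplotypes constrain the haplotype frequencies, can be sketched as a small linear system. The haplotype list and frequencies below are hypothetical, and the noiseless, exactly determined setup is a deliberate simplification.

```python
import numpy as np

# Candidate haplotypes over 3 biallelic loci (rows), e.g. from a database
H = np.array([[0, 0, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

true_freq = np.array([0.4, 0.3, 0.2, 0.1])
pooled = H.T @ true_freq  # per-locus allele-1 frequencies seen in the pool

# Stack the per-locus constraints with the sum-to-one constraint and
# solve the (here exactly determined) linear system by least squares.
A = np.vstack([H.T, np.ones(len(true_freq))])
b = np.append(pooled, 1.0)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy pool measurements, many loci, or more haplotypes than constraints the system becomes ill-posed, which is where the multinomial modelling and prior information described in the abstract earn their keep.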
Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.
Tong, Liping; Thompson, Elizabeth
2008-01-01
To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel
Using a Novel Optical Sensor to Characterize Methane Ebullition Processes
NASA Astrophysics Data System (ADS)
Delwiche, K.; Hemond, H.; Senft-Grupp, S.
2015-12-01
We have built a novel bubble size sensor that is rugged, economical to build, and capable of accurately measuring methane bubble sizes in aquatic environments over long deployment periods. Accurate knowledge of methane bubble size is important for calculating atmospheric methane emissions from inland waters. By routing bubbles past pairs of optical detectors, the sensor accurately measures sizes for bubbles between 0.01 mL and 1 mL, with slightly reduced accuracy for bubbles from 1 mL to 1.5 mL. The sensor can handle flow rates up to approximately 3 bubbles per second. Optional sensor attachments include a gas collection chamber for methane sampling and volume verification, and a detachable extension funnel to customize the quantity of intercepted bubbles. Additional features include a data cable running from the deployed sensor to a custom surface buoy, allowing us to download data without disturbing ongoing bubble measurements. We have successfully deployed numerous sensors in Upper Mystic Lake at depths down to 18 m, 1 m above the sediment. The resulting data give us bubble size distributions and the precise timing of bubbling events over a period of several months. In addition to allowing us to characterize typical bubble size distributions, these data allow us to draw important conclusions about temporal variations in bubble sizes, as well as bubble dissolution rates within the water column.
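The reduction from paired optical gates to a bubble volume can be sketched as follows. This is a hypothetical reconstruction, not the published sensor's processing: the detector spacing, tube radius, and cylindrical-bubble assumption are all illustrative.

```python
import math

DETECTOR_SPACING = 0.01  # m between the two optical gates (assumed)
TUBE_RADIUS = 0.003      # m, inner radius of the sensing tube (assumed)

def bubble_volume_ml(t_gate1, t_gate2, occlusion):
    # Rise speed from the lag between the two gate crossings,
    # bubble length from how long the first gate stays blocked,
    # volume from an idealized cylindrical bubble filling the tube.
    velocity = DETECTOR_SPACING / (t_gate2 - t_gate1)  # m/s
    length = velocity * occlusion                      # m
    return math.pi * TUBE_RADIUS ** 2 * length * 1e6   # m^3 -> mL

v = bubble_volume_ml(0.000, 0.040, 0.020)  # 25 cm/s bubble blocking a gate 20 ms
```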
Huang, Jianping; Marschilok, Amy C.; Takeuchi, Esther S.; ...
2016-03-07
We study silver vanadium phosphorus oxide, Ag2VO2PO4, a promising cathode material for Li batteries due in part to its large capacity and high current capability. Herein, a new synthesis of Ag2VO2PO4 based on microwave heating is presented, where the reaction time is reduced by approximately 100× relative to other reported methods, and the crystallite size is controlled via synthesis temperature, showing a linear correlation of crystallite size with temperature. Notably, under galvanostatic reduction, the Ag2VO2PO4 sample with the smallest crystallite size delivers the highest capacity and shows the highest loaded voltage. Further, pulse discharge tests show a significant resistance decrease during the initial discharge coincident with the formation of Ag metal. Thus, the magnitude of the resistance decrease observed during pulse tests depends on the Ag2VO2PO4 crystallite size, with the largest resistance decrease observed for the smallest crystallite size. Additional electrochemical measurements indicate a quasi-reversible redox reaction involving Li+ insertion/deinsertion, with capacity fade due to structural changes associated with the discharge/charge process. In summary, this work demonstrates a faster synthetic approach for bimetallic polyanionic materials which also provides the opportunity for tuning electrochemical properties through control of material physical properties such as crystallite size.
Yamamura, Hiroshi; Kimura, Katsuki; Higuchi, Kumiko; Watanabe, Yoshimasa; Ding, Qing; Hafuka, Akira
2015-12-15
While low-pressure membrane filtration processes (i.e., microfiltration and ultrafiltration) can offer more precise filtration than sand filtration, they pose the problem of reduced efficiency due to membrane fouling. Although many studies have examined membrane fouling by organic substances, there are still not enough data available concerning membrane fouling by inorganic substances. The present research investigated changes in the amounts of inorganic components deposited on the surface of membrane filters over time, using membrane specimens sampled thirteen times at arbitrary intervals during pilot testing, in order to determine the mechanism by which irreversible fouling by inorganic substances progresses. The experiments showed that the inorganic components that primarily contribute to irreversible fouling vary as filtration continues. In the initial stage of operation, the main membrane-fouling substance was iron, whereas the primary membrane-fouling substances at the end of operation were manganese, calcium, and silica. The amount of iron accumulated on the membrane increased up to the thirtieth day of operation, after which it reached a steady state. After the accumulation of iron became static, subsequent accumulation of manganese was observed. The fact that the removal rates of these inorganic components also increased gradually shows that the effective size of the membrane's exclusion pores narrows as operation continues. Studying the particle size distributions of inorganic components in the source water revealed that while many iron particles are approximately the same size as the membrane pores, a large fraction of the manganese particles is slightly smaller in diameter than the pores.
From these results, it is surmised that iron particles approximately the same size as the pores block them soon after the start of operation, and as the membrane pores narrow with the development of fouling, they become further blocked by manganese particles approximately the same size as the narrowed pores. Calcium and silica are assumed to accumulate on the membrane due to their cross-linking action and/or complex formation with organic substances such as humic compounds. The present research is the first to clearly show that the inorganic components that contribute to membrane fouling differ according to the stage of membrane fouling progression; the information obtained by this research should enable chemical cleaning or operational control in accordance with the stage of membrane fouling progression. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mishkin, Arie; Cohen-Hadad, Gerard; Lang, Michal; Kofler, Esther; Vardi, Yoel; Schrira, Samuel; Heresco-Levy, Uriel
2003-01-01
The role of the sick funds in the delivery of mental health outpatient services is expected to increase in Israel in the near future. Consequently, there is an urgent need to assess relevant parameters of the patient populations and treatment patterns that presently characterize sick fund mental health delivery frameworks. During a random census month, all patients referred to Kupat Holim Meuhedet mental health services in the Jerusalem district completed structured questionnaires including demographic, medical and mental health history data, and the Symptom Checklist 90 (SCL-90). The professionals who performed the screening assessments filled in a structured questionnaire covering clinical status parameters, diagnosis and treatment decisions. Eighty-three new referrals were screened during the period studied, of which 54 (65%) were absorbed within the treatment framework of the sick fund. Women patients were twice as numerous as men. The sample was heterogeneous in terms of demographic characteristics and included relatively high rates of recent physical injury and medical hospitalization. Only approximately 10% of the patients had been referred by their family doctor, and only approximately 3% had psychotic disorders. The symptom profile reported was characterized by mild to moderate severity, and the most common DSM-IV diagnoses were depressive, anxiety, adjustment and personality disorders. About 50% of the sample was recommended individual psychotherapy and, though the options were not mutually exclusive, approximately 40% psychotropic medication. The study was limited by a relatively small sample size and catchment area; before generalization of the findings, larger scale studies are warranted. This pilot study offers a rigorous examination of the content of care of a small sick fund mental health delivery system. Our findings may be instrumental in the development of new services and adaptation to changes in mental health policies.
Chan, K L Andrew; Kazarian, Sergei G
2008-10-01
Attenuated total reflection-Fourier transform infrared (ATR-FT-IR) imaging is a very useful tool for capturing chemical images of various materials due to the simple sample preparation and the ability to measure wet samples or samples in an aqueous environment. However, the size of the array detector used for image acquisition is often limited and there is usually a trade-off between spatial resolution and the field of view (FOV). The combination of mapping and imaging can be used to acquire images with a larger FOV without sacrificing spatial resolution. Previous attempts have demonstrated this using an infrared microscope and a germanium hemispherical ATR crystal to achieve images of up to 2.5 mm x 2.5 mm, but with varying spatial resolution and depth of penetration across the imaged area. In this paper, we demonstrate a combination of mapping and imaging with a different approach, using an external optics housing for large ATR accessories and inverted ATR prisms to achieve ATR-FT-IR images with a large FOV and reasonable spatial resolution. The results have shown that a FOV of 10 mm x 14 mm can be obtained with a spatial resolution of approximately 40-60 μm when using an accessory that gives no magnification. A FOV of 1.3 mm x 1.3 mm can be obtained with a spatial resolution of approximately 15-20 μm when using a diamond ATR imaging accessory with 4× magnification. No significant change in image quality, such as spatial resolution or depth of penetration, was observed across the whole FOV with this method, and the measurement time was approximately 15 minutes for an image consisting of 16 image tiles.
NASA Astrophysics Data System (ADS)
DeBlois, Elisabeth M.; Paine, Michael D.; Kilgour, Bruce W.; Tracy, Ellen; Crowley, Roger D.; Williams, Urban P.; Janes, G. Gregory
2014-12-01
This paper describes sediment composition at the Terra Nova offshore oil development. The Terra Nova Field is located on the Grand Banks approximately 350 km southeast of Newfoundland, Canada, at an approximate water depth of 100 m. Surface sediment samples (upper 3 cm) were collected for chemical and particle size analyses at the site pre-development (1997) and in 2000-2002, 2004, 2006, 2008 and 2010. Approximately 50 stations have been sampled in each program year, with stations extending from less than 1 km to a maximum of 20 km from source (drill centres) along five gradients, extending to the southeast, southwest, northeast, northwest and east of Terra Nova. Results show that Terra Nova sediments were contaminated with >C10-C21 hydrocarbons and barium, the two main constituents of the synthetic-based drilling muds used at the site. The highest levels of contamination occurred within 1 to 2 km from source, consistent with predictions from drill cuttings dispersion modelling. The strength of distance gradients for >C10-C21 hydrocarbons and barium, and overall levels, generally increased as drilling progressed but decreased from 2006 to 2010, coincident with a reduction in drilling. As seen at other offshore oil development sites, levels of metals other than barium, as well as sulphur and sulphide, were elevated, and sediment fines content was higher in the immediate vicinity (less than 0.5 km) of drill centres in some sampling years; but there was no strong evidence of project-related alterations of these variables. Overall, sediment contamination at Terra Nova was spatially limited, and only the two major constituents of the synthetic-based drilling muds used at the site, >C10-C21 hydrocarbons and barium, showed clear evidence of project-related alterations.
Elahi, Fanny M; Marx, Gabe; Cobigo, Yann; Staffaroni, Adam M; Kornak, John; Tosun, Duygu; Boxer, Adam L; Kramer, Joel H; Miller, Bruce L; Rosen, Howard J
2017-01-01
Degradation of white matter microstructure has been demonstrated in frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD). In preparation for clinical trials, ongoing studies are investigating the utility of longitudinal brain imaging for quantification of disease progression. To date only one study has examined sample size calculations based on longitudinal changes in white matter integrity in FTLD. To quantify longitudinal changes in white matter microstructural integrity in the three canonical subtypes of frontotemporal dementia (FTD) and AD using diffusion tensor imaging (DTI). 60 patients with clinical diagnoses of FTD, including 27 with behavioral variant frontotemporal dementia (bvFTD), 14 with non-fluent variant primary progressive aphasia (nfvPPA), and 19 with semantic variant PPA (svPPA), as well as 19 patients with AD and 69 healthy controls were studied. We used a voxel-wise approach to calculate annual rate of change in fractional anisotropy (FA) and mean diffusivity (MD) in each group using two time points approximately one year apart. Mean rates of change in FA and MD in 48 atlas-based regions-of-interest, as well as global measures of cognitive function were used to calculate sample sizes for clinical trials (80% power, alpha of 5%). All FTD groups showed statistically significant baseline and longitudinal white matter degeneration, with predominant involvement of frontal tracts in the bvFTD group, frontal and temporal tracts in the PPA groups and posterior tracts in the AD group. Longitudinal change in MD yielded a larger number of regions with sample sizes below 100 participants per therapeutic arm in comparison with FA. SvPPA had the smallest sample size based on change in MD in the fornix (n = 41 participants per study arm to detect a 40% effect of drug), and nfvPPA and AD had their smallest sample sizes based on rate of change in MD within the left superior longitudinal fasciculus (n = 49 for nfvPPA, and n = 23 for AD). 
BvFTD generally showed the largest sample size estimates (minimum n = 140 based on MD in the corpus callosum). The corpus callosum appeared to be the best region for a potential study that would include all FTD subtypes. Change in global measure of functional status (CDR box score) yielded the smallest sample size for bvFTD (n = 71), but clinical measures were inferior to white matter change for the other groups. All three of the canonical subtypes of FTD are associated with significant change in white matter integrity over one year. These changes are consistent enough that drug effects in future clinical trials could be detected with relatively small numbers of participants. While there are some differences in regions of change across groups, the genu of the corpus callosum is a region that could be used to track progression in studies that include all subtypes.
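The per-arm sample sizes quoted above come from a standard power calculation. A generic two-sample version is sketched below with illustrative numbers only; the study's own means, SDs, and exact procedure are not reproduced here:

```python
import math
from statistics import NormalDist

def n_per_arm(annual_change, sd_change, effect=0.40, alpha=0.05, power=0.80):
    # Two-sample normal-approximation sample size: detect a drug that
    # slows the annual rate of change by `effect` (a 40% effect, as in
    # the scenario above), at the given alpha and power.
    z = NormalDist().inv_cdf
    delta = effect * annual_change
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd_change / delta) ** 2
    return math.ceil(n)

# Purely illustrative rate-of-change and SD values:
n = n_per_arm(annual_change=0.05, sd_change=0.02)  # -> 16 per arm
```

Because n scales with (sd_change / delta)^2, regions and measures with consistent change relative to their variability (here, MD in specific tracts) yield the smallest trials.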
Size-Frequency Distribution of Small Lunar Craters: Widening with Degradation and Crater Lifetime
NASA Astrophysics Data System (ADS)
Ivanov, B. A.
2018-01-01
A review and new measurements are presented for the evolution of the depth/diameter ratio and slope angle during the aging (degradation) of small (D < 1 km) lunar impact craters. Comparative analysis of available data on areal cratering density and on the degradation state of selected craters, dated with returned Apollo samples, confirms Neukum's chronological model to a first approximation. The uncertainty of crater retention age due to degradational crater widening is estimated. The collected and analyzed data are discussed for use in future updates of mechanical models of lunar crater aging.
1981-09-01
through a microfine glass fiber filter (Reeve Angel 984-H with a pore size of approximately 0.45 μm). This...buffer (pH 7.7) in order to extract the nucleotides present in the water sample. The test tubes containing the extracts were labeled and stored frozen...into a series of disposable glass cuvettes (12 x 75 mm) and placed in a test tube rack in an incubating water bath at 30°C. Each tube was allowed to
Grima, Ramon
2011-11-01
The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reaction networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω^(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω^(-(2Q-5)/2), and of the skewness accurate to order Ω^(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates of the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.
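Written as display math (a compact restatement of the orders reported in the abstract, with Ω the system size and Q the order of the Kramers-Moyal truncation):

```latex
\langle n \rangle :\; \mathcal{O}\!\left(\Omega^{-(2Q-3)/2}\right), \qquad
\sigma^{2} :\; \mathcal{O}\!\left(\Omega^{-(2Q-5)/2}\right), \qquad
\text{skewness} :\; \mathcal{O}\!\left(\Omega^{-(Q-2)}\right).
```

Matching these accuracies with the system-size expansion requires going to approximately order 2Q, which is the abstract's central comparison.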
Season, molt, and body size influence mercury concentrations in grebes
Hartman, Christopher; Ackerman, Joshua T.; Herzog, Mark; Eagles-Smith, Collin A.
2017-01-01
We studied seasonal and physiological influences on mercury concentrations in western grebes (Aechmophorus occidentalis) and Clark's grebes (A. clarkii) across 29 lakes and reservoirs in California, USA. Additionally, at three of these lakes, we conducted a time series study, in which we repeatedly sampled grebe blood mercury concentrations during the spring, summer, and early fall. Grebe blood mercury concentrations were higher among males (0.61 ± 0.12 μg/g ww) than females (0.52 ± 0.10 μg/g ww), higher among Clark's grebes (0.58 ± 0.12 μg/g ww) than western grebes (0.51 ± 0.10 μg/g ww), and exhibited a strong seasonal pattern (decreasing by 60% from spring to fall). Grebe blood total mercury (THg) concentrations exhibited a shallow, inverse U-shaped pattern with body size, and were lowest among the smallest and largest grebes. Further, the relationship between grebe blood mercury concentrations and wing primary feather molt exhibited a shallow U-shaped pattern, in which mercury concentrations were highest among birds that had not yet begun molting, decreased approximately 24% between pre-molt and late molt, and increased approximately 19% from late molt to post-molt. Because grebes did not begin molting until mid-summer, the lower grebe blood mercury concentrations observed in late summer and early fall were consistent with the onset of primary feather molt. However, because sampling date was a much stronger predictor of grebe mercury concentrations than molt, other seasonally changing environmental factors likely played a larger role than molt in the seasonal variation in grebe mercury concentrations. In the time series study, we found that seasonal trends in grebe mercury concentrations were not consistent among lakes, indicating that lake-specific variation in mercury dynamics influences the overall seasonal decline in grebe blood mercury concentrations. 
These results highlight the importance of accounting for sampling date, as well as ecological processes that may influence mercury concentrations, when developing monitoring programs to assess site-specific exposure risk of mercury to wildlife.
NASA Technical Reports Server (NTRS)
Livermore, R. C.; Jones, T.; Richard, J.; Bower, R. G.; Ellis, R. S.; Swinbank, A. M.; Rigby, J. R.; Smail, Ian; Arribas, S.; Rodriguez-Zaurin, J.;
2013-01-01
We present Hubble Space Telescope/Wide Field Camera 3 narrow-band imaging of the Hα emission in a sample of eight gravitationally lensed galaxies at z = 1-1.5. The magnification caused by the foreground clusters enables us to obtain a median source plane spatial resolution of 360 pc, as well as providing magnifications in flux ranging from approximately 10× to approximately 50×. This enables us to identify resolved star-forming HII regions at this epoch and therefore study their Hα luminosity distributions for comparison with equivalent samples at z of approximately 2 and in the local Universe. We find evolution in both the luminosity and surface brightness of HII regions with redshift. The distribution of clump properties can be quantified with an HII region luminosity function, which can be fit by a power law with an exponential break at some cut-off, and we find that the cut-off evolves with redshift. We therefore conclude that 'clumpy' galaxies are seen at high redshift because of the evolution of the cut-off mass; the galaxies themselves follow similar scaling relations to those at z = 0, but their HII regions are larger and brighter and thus appear as clumps which dominate the morphology of the galaxy. A simple theoretical argument, based on gas collapsing on scales of the Jeans mass in a marginally unstable disc, shows that the clumpy morphologies of high-z galaxies are driven by the competing effects of higher gas fractions causing perturbations on larger scales, partially compensated by higher epicyclic frequencies which stabilize the disc.
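The "power law with an exponential break" can be written as a Schechter-like form, N(L) ∝ (L/L0)^(-α) exp(-L/L0). The sketch below (the slope and cut-off values are illustrative, not the paper's fits) shows how raising the cut-off luminosity L0, as inferred at high redshift, boosts the expected number of bright clumps:

```python
import numpy as np

def hii_lf(L, L0, alpha=1.75):
    # Power law with an exponential break at L0; the slope alpha and all
    # luminosities (arbitrary units) are illustrative values only.
    return (L / L0) ** (-alpha) * np.exp(-L / L0)

L = np.logspace(-2, 1, 50)     # luminosity grid, arbitrary units
n_local = hii_lf(L, L0=1.0)    # local-Universe-like cut-off
n_highz = hii_lf(L, L0=5.0)    # cut-off shifted upward, as at high z
```

Above the local cut-off, the high-z curve lies far above the local one, so the brightest regions dominate the galaxy's appearance as "clumps" even though the underlying scaling relations are unchanged.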
Seasonal to interannual morphodynamics along a high-energy dissipative littoral cell
Ruggiero, P.; Kaminsky, G.M.; Gelfenbaum, G.; Voigt, B.
2005-01-01
A beach morphology monitoring program was initiated during summer 1997 along the Columbia River littoral cell (CRLC) on the coasts of northwest Oregon and southwest Washington, USA. This field program documents the seasonal through interannual morphological variability of these high-energy dissipative beaches over a variety of spatial scales. Following the installation of a dense network of geodetic control monuments, a nested sampling scheme consisting of cross-shore topographic beach profiles, three-dimensional topographic beach surface maps, nearshore bathymetric surveys, and sediment size distribution analyses was initiated. Beach monitoring is being conducted with state-of-the-art real-time kinematic differential global positioning system survey methods that combine both high accuracy and speed of measurement. Sampling methods resolve variability in beach morphology at alongshore length scales of approximately 10 meters to approximately 100 kilometers and cross-shore length scales of approximately 1 meter to approximately 2 kilometers. During the winter of 1997/1998, coastal change in the US Pacific Northwest was greatly influenced by one of the strongest El Niño events on record. Steeper than typical southerly wave angles resulted in alongshore sediment transport gradients and shoreline reorientation on a regional scale. The La Niña of 1998/1999, dominated by cross-shore processes associated with the largest recorded wave year in the region, resulted in net beach erosion along much of the littoral cell. The monitoring program successfully documented the morphological response to these interannual forcing anomalies as well as the subsequent beach recovery associated with three consecutive moderate wave years. 
These morphological observations within the CRLC can be generalized to explain overall system patterns; however, distinct differences in large-scale coastal behavior (e.g., foredune ridge morphology, sandbar morphometrics, and nearshore beach slopes) are not readily explained or understood.
Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D
2010-02-01
Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.
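Approximate entropy (ApEn), used above to quantify secretory regularity, can be computed as follows. This is the generic Pincus algorithm with common default settings (m = 2, r = 0.2 SD), not necessarily the exact configuration used in the study:

```python
import numpy as np

def approx_entropy(x, m=2, r_frac=0.2):
    # ApEn(m, r): lower values indicate a more regular series. The
    # tolerance r is a fraction of the series SD, a common convention.
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def phi(m):
        n = len(x) - m + 1
        tmpl = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of length-m templates
        d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # match fractions (self-match included)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))  # highly regular series
irregular = rng.normal(size=300)                   # white noise
```

A regular series repeats its templates, so extending a template by one point rarely breaks a match and ApEn stays low; noise yields high ApEn. Diminished regularity therefore appears as an ApEn change, as reported here for severe hypothyroidism.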
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
... deer metapodial bone; approximately 18,444 glass beads of varying size and color; and 36 beads made... bone fragments; one bone comb; one pottery sherd; approximately 10,748 glass beads of various sizes and... razors, ``C'' bracelets, cones used as tinklers, finger rings, a knife, an awl with a bone handle and an...
NASA Technical Reports Server (NTRS)
Mair, R. W.; Hurlimann, M. D.; Sen, P. N.; Schwartz, L. M.; Patz, S.; Walsworth, R. L.
2001-01-01
We have extended the utility of NMR as a technique to probe porous media structure over length scales of approximately 100-2000 μm by using the spin 1/2 noble gas 129Xe imbibed into the system's pore space. Such length scales are much greater than can be probed with NMR diffusion studies of water-saturated porous media. We utilized Pulsed Gradient Spin Echo NMR measurements of the time-dependent diffusion coefficient, D(t), of the xenon gas filling the pore space to study further the measurements of both the pore surface-area-to-volume ratio, S/Vp, and the tortuosity (pore connectivity) of the medium. In uniform-size glass bead packs, we observed D(t) decreasing with increasing t, reaching an observed asymptote of approximately 0.62-0.65 D(0), that could be measured over diffusion distances extending over multiple bead diameters. Measurements of D(t)/D(0) at differing gas pressures showed this tortuosity limit was not affected by changing the characteristic diffusion length of the spins during the diffusion encoding gradient pulse. This was not the case at the short time limit, where D(t)/D(0) was noticeably affected by the gas pressure in the sample. Increasing the gas pressure, and hence reducing D(0) and the diffusion during the gradient pulse, served to reduce the previously observed deviation of D(t)/D(0) from the S/Vp relation. The Padé approximation is used to interpolate between the long and short time limits in D(t). While the short time D(t) points lay above the interpolation line in the case of small beads, due to diffusion during the gradient pulse on the order of the pore size, it was also noted that the experimental D(t) data fell below the Padé line in the case of large beads, most likely due to finite size effects.
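The Padé interpolation mentioned above is commonly written in the Latour form, joining the short-time Mitra surface-to-volume limit to the long-time tortuosity limit D(∞)/D(0) = 1/α. A sketch with illustrative parameters (θ, the crossover time, is normally a fitted quantity, and all values below are assumed, not measured):

```python
import math

def d_ratio(t, d0, s_over_v, alpha, theta):
    # Pade interpolant between the short-time surface-to-volume limit,
    # D(t)/D0 ~ 1 - (4/(9*sqrt(pi))) * (S/V) * sqrt(D0*t), and the
    # long-time tortuosity limit 1/alpha. theta is a fitted crossover time.
    c = 4.0 * s_over_v * math.sqrt(d0) / (9.0 * math.sqrt(math.pi))
    x = c * math.sqrt(t) + (1.0 - 1.0 / alpha) * t / theta
    return 1.0 - (1.0 - 1.0 / alpha) * x / ((1.0 - 1.0 / alpha) + x)

# alpha = 1.6 reproduces a ~0.62 D0 asymptote like that seen in bead packs:
early = d_ratio(1e-6, 1.0, 1.0, 1.6, 1.0)  # ~1: few walls encountered yet
late = d_ratio(1e6, 1.0, 1.0, 1.6, 1.0)    # ~0.625 = 1/alpha
```

At small t the expression reduces to the Mitra expansion, so the initial slope measures S/V; at large t it flattens to 1/α, which is why the observed asymptote gives the tortuosity directly.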
NASA Astrophysics Data System (ADS)
Yoshida, Tomonori; Muto, Daiki; Tamai, Tomoya; Suzuki, Shinsuke
2018-04-01
Porous aluminum alloy with aligned unidirectional pores was fabricated by dipping A1050 tubes into A6061 semi-solid slurry. The porous aluminum alloy was processed through Equal-channel Angular Extrusion (ECAE) while preventing cracking and maintaining both the pore size and porosity by setting the insert material and loading back pressure. The specific compressive yield strength of the sample aged after 13 passes of ECAE was approximately 2.5 times higher than that of the solid-solutionized sample without ECAE. Both the energy absorption E_V and energy absorption efficiency η_V after four passes of ECAE were approximately 1.2 times higher than that of the solid-solutionized sample without ECAE. The specific yield strength was improved via work hardening and precipitation following dynamic aging during ECAE. E_V was improved by the application of high compressive stress at the beginning of the compression owing to work hardening via ECAE. η_V was improved by a steep increase of stress at low compressive strain and by a gradual increase of stress in the range up to 50 pct of compressive strain. The gradual increase of stress was caused by continuous shear fracture in the metallic part, which was due to the high dislocation density and existence of unidirectional pores parallel to the compressive direction in the structure.
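The compression metrics used above follow common porous-metal definitions: the energy absorption E_V is the integral of stress over strain up to a reference strain (50 pct here), and the efficiency η_V compares E_V with an ideal absorber that holds the peak stress throughout. A sketch with a made-up stress-strain curve, not data from the study:

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoid-rule integral (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def compression_metrics(strain, stress, eps_max=0.5):
    # E_V: absorbed energy per unit volume up to eps_max (MPa = MJ/m^3);
    # eta_V: efficiency relative to an ideal absorber at the peak stress.
    mask = strain <= eps_max
    e_v = trapezoid(stress[mask], strain[mask])
    eta_v = e_v / (stress[mask].max() * eps_max)
    return e_v, eta_v

strain = np.linspace(0.0, 0.5, 501)
stress = 40.0 + 20.0 * strain  # idealized hardening plateau, MPa (invented)
e_v, eta_v = compression_metrics(strain, stress)  # -> 22.5 MJ/m^3, 0.9
```

A steep initial rise followed by a gradual stress increase, as reported for the ECAE-processed samples, raises both the integral E_V and the efficiency η_V relative to a curve that peaks late.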
Mercury in the blood and eggs of American kestrels fed methylmercury chloride
French, J.B.; Bennett, R.S.; Rossmann, R.
2010-01-01
American kestrels (Falco sparverius) were fed diets containing methylmercury chloride (MeHg) at 0, 0.6, 1.7, 2.8, 3.9, or 5.0 μg/g (dry wt) starting approximately eight weeks before the onset of egg laying. Dietary treatment was terminated after 12 to 14 weeks, and unhatched eggs were collected for Hg analysis. Blood samples were collected after four weeks of treatment and at the termination of the study (i.e., 12-14 weeks of treatment). Clutch size decreased at dietary concentrations above 2.8 μg/g. The average total mercury concentration in clutches of eggs and in the second egg laid (i.e., egg B) increased linearly with dietary concentration. Mercury concentrations in egg B were approximately 25% lower than in the first egg laid and similar to those in the third egg laid. Mercury concentrations in whole blood and plasma also increased linearly with dietary concentration. Total Hg concentrations in June blood samples were lower than those in April, despite 8 to 10 weeks of additional dietary exposure to MeHg. This is likely because of excretion of Hg into growing flight feathers, which begins shortly after the start of egg production. The strongest relationships between Hg concentrations in blood and eggs occurred when we used blood samples collected in April, before egg laying and feather molt. © 2010 SETAC.
NASA Astrophysics Data System (ADS)
Loveley, Matthew R.; Marcantonio, Franco; Lyle, Mitchell; Ibrahim, Rami; Hertzberg, Jennifer E.; Schmidt, Matthew W.
2017-12-01
Here, we examine how redistribution of differing grain sizes by sediment focusing processes in Panama Basin sediments affects the use of 230Th as a constant-flux proxy. We study representative sediments of Holocene and Last Glacial Maximum (LGM) time slices from four sediment cores from two different localities close to the ridges that bound the Panama Basin. Each locality contains paired sites that are seismically interpreted to have undergone extremes in sediment redistribution, i.e., focused versus winnowed sites. Both Holocene and LGM samples from sites where winnowing has occurred contain significant amounts (up to 50%) of the 230Th within the >63 μm grain size fraction, which makes up 40-70% of the bulk sediment analyzed. For sites where focusing has occurred, Holocene and LGM samples contain the greatest amounts of 230Th (up to 49%) in the finest grain-sized fraction (<4 μm), which makes up 26-40% of the bulk sediment analyzed. There are slight underestimations of 230Th-derived mass accumulation rates (MARs) and overestimations of 230Th-derived focusing factors at focused sites, while the opposite is true for winnowed sites. Corrections made using a model by Kretschmer et al. (2010) suggest a maximum change of about 30% in 230Th-derived MARs and focusing factors at focused sites, except for our most focused site which requires an approximate 70% correction in one sample. Our 230Th-corrected 232Th flux results suggest that the boundary between hemipelagically- and pelagically-derived sediments falls between 350 and 600 km from the continental margin.
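The constant-flux arithmetic this kind of study relies on can be sketched as follows; the production constant and the depth/activity values below are illustrative assumptions, not data from the paper.

```python
# Sketch of 230Th constant-flux-proxy arithmetic. BETA and the sample
# values are illustrative assumptions, not measurements from this study.
BETA = 0.0267  # approximate 230Th production in seawater, dpm m^-3 yr^-1

def production_flux(depth_m):
    """Water-column 230Th production above the site, dpm cm^-2 kyr^-1."""
    return BETA * depth_m * 1e-4 * 1e3  # convert m^-2 -> cm^-2, yr -> kyr

def th_normalized_mar(xs_th230_dpm_g, depth_m):
    """230Th-normalized (vertical) mass accumulation rate, g cm^-2 kyr^-1."""
    return production_flux(depth_m) / xs_th230_dpm_g

def focusing_factor(bulk_mar, xs_th230_dpm_g, depth_m):
    """Ratio of the 230Th buried at the site to local production:
    psi > 1 indicates sediment focusing, psi < 1 winnowing."""
    return bulk_mar * xs_th230_dpm_g / production_flux(depth_m)

mar = th_normalized_mar(8.0, 3000.0)      # ~1.0 g cm^-2 kyr^-1
psi = focusing_factor(2.0, 8.0, 3000.0)   # ~2.0: a focused site
```

The grain-size effect the authors document enters through `xs_th230_dpm_g`: if fines (which carry most of the 230Th) are preferentially added or removed, the measured activity no longer reflects the vertical rain, biasing both quantities.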
Staatz, M.H.
1983-01-01
The Bear Lodge Mountains are a small northerly trending range approximately 16 km northwest of the Black Hills in the northeast corner of Wyoming. Thorium and rare-earth deposits occur over an area of 16 km² in the southern part of these mountains. These deposits occur in the core of the Bear Lodge dome in a large multiple intrusive body made up principally of trachyte and phonolite. Two types of deposits are recognized: disseminated deposits and veins. The disseminated deposits are made up of altered igneous rocks cut by numerous crisscrossing veinlets. They contain thorium and rare-earth minerals in a matrix consisting principally of potassium feldspar, quartz, and iron and manganese oxides. Total rare-earth content of these deposits is about 27 times the thorium content. The general size and shape of the disseminated deposits were outlined by making a radiometric map, using a scintillation counter, of the entire Bear Lodge core, an area of approximately 30 km². The most favorable part of this area, outlined by the 40 counts/s (counts per second) isograd on the radiometric map, was sampled in detail. A total of 341 samples were taken over an area of 10.6 km² and analyzed for as many as 60 elements. Rare earths and thorium are the principal commodities of interest in these deposits. Total rare-earth content of these samples ranged from 47 to 27,145 ppm (parts per million), and thorium content from 9.3 to 990 ppm. The total rare-earth content of individual samples shows little correlation with that of thorium. Contour maps were constructed using the analytical data for total rare earths, thorium, uranium, and potassium. The total rare-earth and thorium maps can be used to define the size of the deposits based on what cutoff grade may be needed during mining.
The deposits are large: the 2,000 ppm total rare-earth isograd encloses several areas that total 3.22 km², and the 200 ppm thorium isograd encloses several areas that total 1.69 km². These deposits could be mined by open pit. The Bear Lodge disseminated deposits constitute one of the largest resources of both total rare earths and thorium in the United States, and although the grade of both commodities is lower than in some other deposits, their large size and relative cheapness of mining make them an important future resource. Vein deposits in the Bear Lodge Mountains include all tabular bodies at least 5 cm thick. Twenty-six veins were noted in this area. These veins are thin and short; the longest vein was traced for only 137 m. The amounts of the minerals present vary greatly. Gangue minerals are commonly potassium feldspar, quartz, or cristobalite intermixed with varying amounts of limonite, hematite, and various manganese oxides. Rare earths and thorium occur in the minerals monazite, brockite, and bastnaesite. Thorium content of 35 samples ranged from 0.01 to 1.2 percent, and total rare-earth content of 21 samples from 0.23 to 9.8 percent. Indicated reserves were calculated to a depth of one-third the exposed length of the vein. Inferred reserves lie in a block surrounding indicated reserves. Indicated reserves of all veins are only 50 t of ThO₂ and 1,360 t of total rare-earth oxides; inferred reserves are 250 t of ThO₂ and 6,810 t of total rare-earth oxides. The Bear Lodge dome, which underlies the greater part of this area, is formed by multiple intrusive bodies of Tertiary age that dome up the surrounding sedimentary rocks. In the southern part of the core, the younger intrusive bodies surround and partly replace a granite of Precambrian age. This granite is approximately 2.6 b.y. old.
The sedimentary rocks around the core are (from oldest to youngest): Deadwood Formation of Late Cambrian and Early Ordovician age, Whitewood Limestone of Late Ordovician age, Pahasapa Limestone of Early Mississippian age, Minnelusa Sandstone of Pennsylvanian and Early Permian age, Opeche Formation of Permian age, Minnek
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
Novel tretinoin formulations: a drug-in-cyclodextrin-in-liposome approach.
Ascenso, Andreia; Cruz, Mariana; Euletério, Carla; Carvalho, Filomena A; Santos, Nuno C; Marques, Helena C; Simões, Sandra
2013-09-01
The aims of this experimental work were the incorporation and full characterization of the systems Tretinoin-in-dimethyl-beta-cyclodextrin-in-ultradeformable vesicles (Tretinoin-CyD-UDV) and Tretinoin-in-ultradeformable vesicles (Tretinoin-UDV). The Tretinoin-CyD complex was prepared by kneading, and the UDV by adding soybean phosphatidylcholine (SPC) to Tween® 80 followed by an appropriate volume of sodium phosphate buffer solution to make a 10%-20% lipid suspension. The resulting suspension was brought to the final mean vesicle size of approximately 150 nm by sequential filtration. The physicochemical characterization was based on: evaluation of mean particle size and polydispersity index (PI), measured by photon correlation spectroscopy (PCS) and atomic force microscopy (AFM) topographic imaging; and zeta potential (ζ-potential) and SPC concentration, determined by laser Doppler anemometry and an enzymatic-colorimetric test, respectively. The quantification of the incorporated Tretinoin and its chemical stability (during preparation and storage) was assayed by HPLC at 342 nm. It was possible to obtain the system Tretinoin-CyD-UDV. The mean vesicle size was the most stable parameter over the time course of the experiments. AFM showed that Tretinoin-CyD-UDV samples were very heterogeneous in size, having three distinct subpopulations, while Tretinoin-UDV samples had only one homogeneous size population. The ζ-potential measurements showed that the vesicle surface charge was negative and, as expected, low in magnitude. The incorporation efficiency was high, and no significant differences between Tretinoin-CyD-UDV and Tretinoin-UDV were observed. However, only the Tretinoin-UDV formulation with 20% lipid concentration remained chemically stable during the evaluation period. According to our results, Tretinoin-UDV with 20% lipid concentration appears to be a better approach than Tretinoin-CyD-UDV, given its higher chemical stability.
Bayesian evaluation of effect size after replicating an original study
van Aert, Robbie C. M.; van Assen, Marcel A. L. M.
2017-01-01
The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance, and demonstrate the necessity to control for the original study's significance to enable the accumulation of evidence for a true zero effect. Then we applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes, especially of the included studies in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
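The core of the snapshot idea, conditioning each candidate effect's likelihood on the original study having reached significance, can be sketched as below. The Fisher-z effect values, equal priors, and the normal approximation are simplifying assumptions for illustration, not the authors' exact implementation.

```python
from math import erf, exp, pi, sqrt

def _pdf(x):  # standard normal density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def _cdf(x):  # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def snapshot_posteriors(z_obs, se, z_crit=1.96):
    """Posterior probabilities of a zero/small/medium/large true effect
    (Fisher-z values 0, 0.1, 0.3, 0.5; equal prior weights) given one
    statistically significant original estimate z_obs with standard
    error se. Dividing by each model's power conditions its likelihood
    on the study having been significant - the publication-bias
    adjustment described in the abstract."""
    effects = {"zero": 0.0, "small": 0.1, "medium": 0.3, "large": 0.5}
    weights = {}
    for name, mu in effects.items():
        density = _pdf((z_obs - mu) / se) / se   # N(mu, se^2) at z_obs
        power = 1.0 - _cdf(z_crit - mu / se)     # P(significant | mu)
        weights[name] = density / power          # truncated likelihood
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

post = snapshot_posteriors(z_obs=0.3, se=0.1)  # "medium" gets most mass
```

Note how the power term penalizes the zero-effect model least severely once it is included: without it, a significant original estimate would always look like strong evidence against a null effect.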
NASA Astrophysics Data System (ADS)
Ma, Xiaoping; Langelier, Brian; Gault, Baptiste; Subramanian, Sundaresa
2017-05-01
The role of Nb in normalized and tempered Ti-bearing 13Cr5Ni2Mo super martensitic stainless steel is investigated through in-depth characterization of the bimodal chemistry and size of Nb-rich precipitates/atomic clusters and of Nb in solid solution. Transmission electron microscopy and atom probe tomography are used to analyze the samples and clarify precipitate/atom cluster interactions with dislocations and austenite grain boundaries. The effect of a 0.1 wt pct Nb addition on the promotion of (Ti,Nb)N-Nb(C,N) composite precipitates, as well as the retention of Nb in solution after cooling to room temperature, is analyzed quantitatively. (Ti,Nb)N-Nb(C,N) composite precipitates with average diameters of approximately 24 ± 8 nm, resulting from epitaxial growth of Nb(C,N) on pre-existing (Ti,Nb)N particles, with inter-particle spacing on the order of 205 ± 68 nm, are found to be associated with a mean austenite grain size of 28 ± 10 µm in the sample normalized at 1323 K (1050 °C). The calculated Zener limiting austenite grain size of 38 ± 13 µm is in agreement with the experimentally observed austenite grain size distribution. 0.08 wt pct Nb is retained in the as-normalized condition, which is able to promote Nb(C,N) atomic clusters at dislocations during tempering at 873 K (600 °C) for 2 hours and increases the yield strength by 160 MPa, predicted to be close to the maximum achievable strengthening effect. Retention of solute Nb before tempering also leads to it preferentially combining with C and N to form Nb(C,N) atom clusters, which suppresses the formation of Cr- and Mo-rich carbides during tempering.
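The Zener estimate referenced above follows the standard pinning relation d = k·r/f. The particle radius and volume fraction below are illustrative assumptions chosen to reproduce a limit of roughly 38 µm, not the paper's measured inputs.

```python
def zener_limit(radius_m, volume_fraction, k=4.0 / 3.0):
    """Limiting grain diameter pinned by a particle dispersion,
    d = k * r / f. k = 4/3 is the original Zener-Smith coefficient;
    other derivations give different prefactors."""
    return k * radius_m / volume_fraction

# Illustrative: 24 nm diameter particles (r = 12 nm) at an assumed
# volume fraction of 4.2e-4 limit the grains to roughly 38 um.
d_limit = zener_limit(12e-9, 4.2e-4)  # ~3.8e-5 m
```

The relation makes the design trade-off visible: halving the particle size at fixed volume fraction halves the limiting grain size, which is why the fine composite precipitates are effective pinning agents.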
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase function while the radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25 meters resolution.
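A minimal numerical sketch of the segmentation idea (the phase constants and segment counts are arbitrary, not SEASAT parameters): the quadratic focusing phase is replaced by a piecewise-linear version, and the worst-case phase error shrinks quadratically with the number of segments.

```python
import numpy as np

def quadratic_phase(t, kr):
    """Azimuth focusing phase phi(t) = pi * Kr * t^2 (Kr: Doppler rate)."""
    return np.pi * kr * t ** 2

def segmented_linear_phase(t, kr, n_segments):
    """Piecewise-linear approximation of the quadratic phase: exact at
    the segment knots, linear in between, so each sample needs only a
    phase increment (an add) instead of a complex multiply."""
    knots = np.linspace(t.min(), t.max(), n_segments + 1)
    return np.interp(t, knots, quadratic_phase(knots, kr))

t = np.linspace(-1.0, 1.0, 2001)
exact = quadratic_phase(t, 1.0)
err4 = np.max(np.abs(segmented_linear_phase(t, 1.0, 4) - exact))
err16 = np.max(np.abs(segmented_linear_phase(t, 1.0, 16) - exact))
# Quadrupling the segment count cuts the worst-case phase error ~16x.
```

This quadratic error decay is what lets a modest number of linear segments keep the focusing loss small while eliminating most multiplications.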
Correlational effect size benchmarks.
Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A
2015-03-01
Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
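The empirical-benchmark idea reduces to taking tertile cut points of the observed effect size distribution. The synthetic half-normal sample below is purely illustrative; the actual benchmarks come from the authors' 147,328 extracted correlations.

```python
import numpy as np

# Stand-in for a database of observed correlations (half-normal here,
# purely for illustration - not the authors' data).
rng = np.random.default_rng(0)
abs_r = np.abs(rng.normal(0.0, 0.2, size=10_000))

# Empirical "small/medium/large" boundaries are the tertile cut points
# of the observed |r| distribution, rather than Cohen's fixed values.
small_cut, large_cut = np.percentile(abs_r, [100 / 3, 200 / 3])
```

The study's finding is that, applied to real published correlations, these cut points land at roughly one-half to one-third of Cohen's conventional values.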
Valsecchi, E; Palsbøll, P; Hale, P; Glockner-Ferrari, D; Ferrari, M; Clapham, P; Larsen, F; Mattila, D; Sears, R; Sigurjonsson, J; Brown, M; Corkeron, P; Amos, B
1997-04-01
Mitochondrial DNA haplotypes of humpback whales show strong segregation between oceanic populations and between feeding grounds within oceans, but this highly structured pattern does not exclude the possibility of extensive nuclear gene flow. Here we present allele frequency data for four microsatellite loci typed across samples from four major oceanic regions: the North Atlantic (two mitochondrially distinct populations), the North Pacific, and two widely separated Antarctic regions, East Australia and the Antarctic Peninsula. Allelic diversity is a little greater in the two Antarctic samples, probably indicating historically greater population sizes. Population subdivision was examined using a wide range of measures, including Fst, various alternative forms of Slatkin's Rst, Goldstein and colleagues' delta mu, and a Monte Carlo approximation to Fisher's exact test. The exact test revealed significant heterogeneity in all but one of the pairwise comparisons between geographically adjacent populations, including the comparison between the two North Atlantic populations, suggesting that gene flow between oceans is minimal and that dispersal patterns may sometimes be restricted even in the absence of obvious barriers, such as land masses, warm water belts, and antitropical migration behavior. The only comparison where heterogeneity was not detected was the one between the two Antarctic population samples. It is unclear whether failure to find a difference here reflects gene flow between the regions or merely lack of statistical power arising from the small size of the Antarctic Peninsula sample. Our comparison between measures of population subdivision revealed major discrepancies between methods, with little agreement about which populations were most and least separated. We suggest that unbiased Rst (URst, see Goodman 1995) is currently the most reliable statistic, probably because, unlike the other methods, it allows for unequal sample sizes. 
However, in view of the fact that these alternative measures often contradict one another, we urge caution in the use of microsatellite data to quantify genetic distance.
Dos Santos, Fernanda Karina; Nevill, Allan; Gomes, Thayse Natacha Q F; Chaves, Raquel; Daca, Timóteo; Madeira, Aspacia; Katzmarzyk, Peter T; Prista, António; Maia, José A R
2016-05-01
Children from developed and developing countries have different anthropometric characteristics, which may affect their motor performance (MP). The aim of this study was to use the allometric approach to model the relationship between body size and MP in youth from two countries differing in socio-economic status: Portugal and Mozambique. A total of 2946 subjects, 1280 Mozambicans (688 girls) and 1666 Portuguese (826 girls), aged 10-15 years, were sampled. Height and weight were measured and the reciprocal ponderal index (RPI) was computed. MP included handgrip strength, 1-mile run/walk, curl-ups and standing long jump tests. A multiplicative allometric model was adopted to adjust for body size differences across countries. Differences in MP between Mozambican and Portuguese children exist, invariably favouring the latter. The allometric models used to adjust MP for differences in body size identified the optimal body shape to be either the RPI or even more linear, i.e., approximately height/mass^0.25. Having adjusted the MP variables for differences in body size, the differences between Mozambican and Portuguese children were invariably reduced and, in the case of grip strength, reversed. These results reinforce the notion that significant differences exist in MP across countries, even after adjusting for differences in body size.
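A multiplicative allometric model of the kind used here is linear after a log transform, so it can be fitted by ordinary least squares. The function and the synthetic data below are a sketch under that assumption, not the authors' code.

```python
import numpy as np

def fit_allometric(mp, mass, height):
    """Fit MP = a * mass^k1 * height^k2 by least squares on the log
    scale (log MP = log a + k1*log mass + k2*log height)."""
    X = np.column_stack([np.ones_like(mass), np.log(mass), np.log(height)])
    coef, *_ = np.linalg.lstsq(X, np.log(mp), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# Noiseless synthetic data with the "height/mass^0.25" shape reported
# as near-optimal: the exponents are recovered exactly.
rng = np.random.default_rng(1)
mass = rng.uniform(30.0, 70.0, 200)
height = rng.uniform(1.40, 1.80, 200)
mp = 2.0 * mass ** -0.25 * height ** 1.0
a, k1, k2 = fit_allometric(mp, mass, height)  # ~ (2.0, -0.25, 1.0)
```

Dividing each child's MP by the fitted mass^k1 * height^k2 then yields the size-adjusted scores on which the between-country comparison is made.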
Barry, Adam E; Szucs, Leigh E; Reyes, Jovanni V; Ji, Qian; Wilson, Kelly L; Thompson, Bruce
2016-10-01
Given the American Psychological Association's strong recommendation to always report effect sizes in research, scholars have a responsibility to provide complete information regarding their findings. The purposes of this study were to (a) determine the frequencies with which different effect sizes were reported in published, peer-reviewed articles in health education, promotion, and behavior journals and (b) discuss implications for reporting effect size in social science research. Across a 4-year time period (2010-2013), 1,950 peer-reviewed published articles were examined from the following six health education and behavior journals: American Journal of Health Behavior, American Journal of Health Promotion, Health Education & Behavior, Health Education Research, Journal of American College Health, and Journal of School Health. Quantitative features from eligible manuscripts were documented using Qualtrics online survey software. Of the 1,245 articles in the final sample that reported quantitative data analyses, approximately 47.9% (n = 597) reported an effect size. While 16 unique types of effect size were reported across all included journals, many were reported with little frequency in most journals. Overall, odds ratio/adjusted odds ratio (n = 340, 50.1%), Pearson r/r² (n = 162, 23.8%), and eta squared/partial eta squared (n = 46, 7.2%) were the most frequently used effect sizes. Quality research practice requires both testing statistical significance and reporting effect size. However, our study shows that a substantial portion of the published literature in health education and behavior lacks consistent reporting of effect size. © 2016 Society for Public Health Education.
McAdam, Steven; Hill, Paul; Rawson, Martin; Perkins, Karen
2017-01-01
The influence of martensitic microstructure and prior austenite grain (PAG) size on the mechanical properties of a novel maraging steel was studied. This was achieved by examining two different martensitic structures with PAG sizes of approximately 40 µm and 80 µm, produced by hot rolling to different reductions. Two ageing heat treatments were considered: both consisted of austenitisation at 960 °C followed by ageing at 560 °C for 5 h, but one was rapidly cooled while the other was slow cooled and then given an extended ageing at 480 °C for 64 h. It is shown that for the shorter ageing treatment the smaller PAG size resulted in significant improvements in strength (an increase of more than 150 MPa), ductility (a four-fold increase), creep life (almost a four-fold increase), and fatigue life (almost doubled). The extended-aged sample showed similar changes in fatigue life, elongation, and hardness, yet displayed no difference in tensile strength or creep. These results illustrate the complexity of microstructural contributions to mechanical properties in maraging steels. PMID:28773086
Darst, Melanie R.; Light, Helen M.
2007-01-01
Floodplain forests of the Apalachicola River, Florida, are drier in composition today (2006) than they were before 1954, and drying is expected to continue for at least the next 50 years. Drier forest composition is probably caused by water-level declines that occurred as a result of physical changes in the main channel after 1954 and decreased flows in spring and summer months since the 1970s. Forest plots sampled from 2004 to 2006 were compared to forests sampled in the late 1970s (1976-79) using a Floodplain Index (FI) based on species dominance weighted by the Floodplain Species Category, a value that represents the tolerance of tree species to inundation and saturation in the floodplain and consequently, the typical historic floodplain habitat for that species. Two types of analyses were used to determine forest changes over time: replicate plot analysis comparing present (2004-06) canopy composition to late 1970s canopy composition at the same locations, and analyses comparing the composition of size classes of trees on plots in late 1970s and in present forests. An example of a size class analysis would be a comparison of the composition of the entire canopy (all trees greater than 7.5 cm (centimeter) diameter at breast height (dbh)) to the composition of the large canopy tree size class (greater than or equal to 25 cm dbh) at one location. The entire canopy, which has a mixture of both young and old trees, is probably indicative of more recent hydrologic conditions than the large canopy, which is assumed to have fewer young trees. Change in forest composition from the pre-1954 period to approximately 2050 was estimated by combining results from three analyses. The composition of pre-1954 forests was represented by the large canopy size class sampled in the late 1970s. 
The average FI for canopy trees was 3.0 percent drier than the average FI for the large canopy tree size class, indicating that the late 1970s forests were 3.0 percent drier than pre-1954 forests. The change from the late 1970s to the present was based on replicate plot analysis. The composition of 71 replicate plots sampled from 2004 to 2006 averaged 4.4 percent drier than forests sampled in the late 1970s. The potential composition of future forests (2050 or later) was estimated from the composition of the present subcanopy tree size class (less than 7.5 cm and greater than or equal to 2.5 cm dbh), which contains the greatest percentage of young trees and is indicative of recent hydrologic conditions. Subcanopy trees are the driest size class in present forests, with FIs averaging 31.0 percent drier than FIs for all canopy trees. Based on results from all three sets of data, present floodplain forests average 7.4 percent drier in composition than pre-1954 forests and have the potential to become at least 31.0 percent drier in the future. An overall total change in floodplain forests to an average composition 38.4 percent drier than pre-1954 forests is expected within approximately 50 years. The greatest effects of water-level decline have occurred in tupelo-cypress swamps where forest composition has become at least 8.8 percent drier in 2004-06 than in pre-1954 years. This change indicates that a net loss of swamps has already occurred in the Apalachicola River floodplain, and further losses are expected to continue over the next 50 years. Drying of floodplain forests will result in some low bottomland hardwood forests changing in composition to high bottomland hardwood forests. The composition of high bottomland hardwoods will also change, although periodic flooding is still occurring and will continue to limit most of the floodplain to bottomland hardwood species that are adapted to at least short periods of inundation and saturation.
Xenikoudakis, G; Ersmark, E; Tison, J-L; Waits, L; Kindberg, J; Swenson, J E; Dalén, L
2015-07-01
The Scandinavian brown bear went through a major decline in population size approximately 100 years ago, due to intense hunting. After being protected, the population subsequently recovered and today numbers in the thousands. The genetic diversity in the contemporary population has been investigated in considerable detail, and it has been shown that the population consists of several subpopulations that display relatively high levels of genetic variation. However, previous studies have been unable to resolve the degree to which the demographic bottleneck impacted the contemporary genetic structure and diversity. In this study, we used mitochondrial and microsatellite DNA markers from pre- and postbottleneck Scandinavian brown bear samples to investigate the effect of the bottleneck. Simulation and multivariate analysis suggested the same genetic structure for the historical and modern samples, which are clustered into three subpopulations in southern, central and northern Scandinavia. However, the southern subpopulation appears to have gone through a marked change in allele frequencies. When comparing the mitochondrial DNA diversity in the whole population, we found a major decline in haplotype numbers across the bottleneck. However, the loss of autosomal genetic diversity was less pronounced, although a significant decline in allelic richness was observed in the southern subpopulation. Approximate Bayesian computations provided clear support for a decline in effective population size during the bottleneck, in both the southern and northern subpopulations. These results have implications for the future management of the Scandinavian brown bear because they indicate a recent loss in genetic diversity and also that the current genetic structure may have been caused by historical ecological processes rather than recent anthropogenic persecution. © 2015 John Wiley & Sons Ltd.
Light-scattering efficiency of starch acetate pigments as a function of size and packing density.
Penttilä, Antti; Lumme, Kari; Kuutti, Lauri
2006-05-20
We study theoretically the light-scattering efficiency of paper coatings made of starch acetate pigments. For the light-scattering code we use a discrete dipole approximation method. The coating layer is assumed to consist of roughly equal-sized spherical pigments packed either at a packing density of 50% (large cylindrical slabs) or at 37% or 57% (large spheres). Because scanning electron microscope images of starch acetate samples show either a particulate or a porous structure, we model the coatings in two complementary ways: the material can be either inside the constituent spheres (particulate case) or outside of them (cheeselike, porous medium). For the packing of our spheres we use either a simulated annealing or a dropping code. We estimate, among other things, that the ideal sphere diameter is in the range 0.25-0.4 μm.
Zebarjadi, Mona; Esfarjani, Keivan; Bian, Zhixi; Shakouri, Ali
2011-01-12
Coherent potential approximation is used to study the effect of adding doped spherical nanoparticles inside a host matrix on the thermoelectric properties. This takes into account electron multiple scatterings that are important in samples with relatively high volume fraction of nanoparticles (>1%). We show that with large fraction of uniform small size nanoparticles (∼1 nm), the power factor can be enhanced significantly. The improvement could be large (up to 450% for GaAs) especially at low temperatures when the mobility is limited by impurity or nanoparticle scattering. The advantage of doping via embedded nanoparticles compared to the conventional shallow impurities is quantified. At the optimum thermoelectric power factor, the electrical conductivity of the nanoparticle-doped material is larger than that of impurity-doped one at the studied temperature range (50-500 K) whereas the Seebeck coefficient of the nanoparticle doped material is enhanced only at low temperatures (∼50 K).
The Effects of Flocculation on the Propagation of Ultrasound in Dilute Kaolin Slurries.
Austin; Challis
1998-10-01
A broadband ultrasonic spectrometer has been used to measure ultrasonic attenuation and phase velocity dispersion as functions of frequency in kaolin suspensions over a range of solid volume fractions from phi = 0.01 to phi = 0.08 and over a pH range from 3 to 9. The Harker and Temple theory was used to simulate ultrasound propagation in the suspension, using measured slope viscosity, particle size, and size distribution. Simulated results for ultrasonic attenuation and phase velocity agree well with measured values. Both sets of results agree well and show that for volume fractions above phi approximately 0.05 attenuation and velocity dispersion increase for increasing floc size, whereas for volume fractions below phi approximately 0.05 attenuation and velocity dispersion both decrease. It is proposed that the mechanism for this change in behavior around phi approximately 0.05 involves changes in floc density and floc size distribution with phi and pH. Copyright 1998 Academic Press.
Results of Characterization and Retrieval Testing on Tank 241-C-109 Heel Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callaway, William S.
Eight samples of heel solids from tank 241-C-109 were delivered to the 222-S Laboratory for characterization and dissolution testing. After being drained thoroughly, one-half to two-thirds of the solids were off-white to tan solids that, visually, were fairly evenly graded in size from coarse silt (30-60 μm) to medium pebbles (8-16 mm). The remaining solids were mostly strongly cemented aggregates ranging from coarse pebbles (16-32 mm) to fine cobbles (6-15 cm) in size. Solid phase characterization and chemical analysis indicated that the air-dry heel solids contained ≈58 wt% gibbsite [Al(OH)₃] and ≈37 wt% natrophosphate [Na₇F(PO₄)₂·19H₂O]. The strongly cemented aggregates were mostly fine-grained gibbsite cemented with additional gibbsite. Dissolution testing was performed on two test samples. One set of tests was performed on large pieces of aggregate solids removed from the heel solids samples. The other set of dissolution tests was performed on a composite sample prepared from well-drained, air-dry heel solids that were crushed to pass a 1/4-in. sieve. The bulk density of the composite sample was 2.04 g/mL. The dissolution tests included water dissolution followed by caustic dissolution testing. In each step of the three-step water dissolution tests, a volume of water approximately equal to 3 times the initial volume of the test solids was added. In each step, the test samples were gently but thoroughly mixed for approximately 2 days at an average ambient temperature of 25 °C. The caustic dissolution tests began with the addition of sufficient 49.6 wt% NaOH to the water dissolution residues to provide ≈3.1 moles of OH for each mole of Al estimated to have been present in the starting composite sample and ≈2.6 moles of OH for each mole of Al potentially present in the starting aggregate sample.
Metathesis of gibbsite to sodium aluminate was then allowed to proceed over 10 days of gentle mixing of the test samples at temperatures ranging from 26-30 °C. The metathesized sodium aluminate was then dissolved by addition of volumes of water approximately equal to 1.3 times the volumes of caustic added to the test slurries. Aluminate dissolution was allowed to proceed for 2 days at ambient temperatures of ≈29 °C. Overall, the sequential water and caustic dissolution tests dissolved and removed 80.0 wt% of the tank 241-C-109 crushed heel solids composite test sample. The 20 wt% of solids remaining after the dissolution tests were 85-88 wt% gibbsite. If the density of the residual solids was approximately equal to that of gibbsite, they represented ≈17 vol% of the initial crushed solids composite test sample. In the water dissolution tests, addition of a volume of water ≈6.9 times the initial volume of the crushed solids composite was sufficient to dissolve and recover essentially all of the natrophosphate present. The ratio of the weight of water required to dissolve the natrophosphate solids to the estimated weight of natrophosphate present was 8.51. The Environmental Simulation Program (OLI Systems, Inc., Morris Plains, New Jersey) predicts that an 8.36 w/w ratio would be required to dissolve the estimated weight of natrophosphate present in the absence of other components of the heel solids. Only minor amounts of Al-bearing solids were removed from the composite solids in the water dissolution tests. The caustic metathesis/aluminate dissolution test sequence, executed at temperatures ranging from 27-30 °C, dissolved and recovered ≈69 wt% of the gibbsite estimated to have been present in the initial crushed heel solids composite. This level of gibbsite recovery is consistent with that measured in previous scoping tests on the dissolution of gibbsite in strong caustic solutions. 
Overall, the sequential water and caustic dissolution tests dissolved and removed 80.3 wt% of the tank 241-C-109 aggregate solids test sample. The residual solids were 92-95 wt% gibbsite. Only a minor portion (≈4.5 wt%) of the aggregate solids was dissolved and recovered in the water dissolution test. Other than some smoothing caused by continuous mixing, the aggregates were essentially unaffected by the water dissolution tests. During the caustic metathesis/aluminate dissolution test sequence, ≈81 wt% of the gibbsite estimated to have been present in the aggregate solids was dissolved and recovered. The pieces of aggregate were significantly reduced in size but persisted as distinct pieces of solids. The increased level of gibbsite recovery, as compared to that for the crushed heel solids composite, suggests that the way the gibbsite solids and caustic solution are mixed is a key determinant of the overall efficiency of gibbsite dissolution and recovery. The liquids recovered after the caustic dissolution tests on the crushed solids composite and the aggregate solids were observed for 170 days. No precipitation of gibbsite was observed. The distribution of particle sizes in the residual solids recovered following the dissolution tests on the crushed heel solids composite was characterized. Wet sieving indicated that 21.4 wt% of the residual solids were >710 μm in size, and laser light scattering indicated that the median equivalent spherical diameter in the <710-μm solids was 35 μm. The settling behavior of the residual solids following the large-scale dissolution tests was also studied. When dispersed at a concentration of ≈1 vol% in water, ≈24 wt% of the residual solids settled at a rate >0.43 in./s; ≈68 wt% settled at rates between 0.02 and 0.43 in./s; and ≈7 wt% settled slower than 0.02 in./s.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
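The iterative scheme described above hinges on one property of order statistics: if a trial CDF is correct, pushing the sorted sample through it yields Uniform(0,1) order statistics with known means and variances, so atypical fluctuations can be scored. A minimal sketch of that scoring idea (illustrative only, not the authors' implementation; the function name and the max-residual scoring rule are assumptions):

```python
import math
from statistics import NormalDist

def order_statistic_score(sample, trial_cdf):
    """Largest scaled deviation of the transformed sample from the expected
    Uniform(0,1) order statistics. Under the true CDF, the k-th sorted value
    has mean k/(n+1) and variance k(n+1-k)/((n+1)^2 (n+2))."""
    u = sorted(trial_cdf(x) for x in sample)
    n = len(u)
    worst = 0.0
    for k, uk in enumerate(u, start=1):
        mean = k / (n + 1)
        var = k * (n + 1 - k) / ((n + 1) ** 2 * (n + 2))
        worst = max(worst, abs(uk - mean) / math.sqrt(var))
    return worst
```

A candidate CDF closer to the truth produces a smaller score, which is the sense in which trial CDFs can be iteratively improved.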
Patient size and x-ray transmission in body CT.
Ogden, Kent; Huda, Walter; Scalzetti, Ernest M; Roskopf, Marsha L
2004-04-01
Physical characteristics were obtained for 196 patients undergoing chest and abdomen computed tomography (CT) examinations. Computed tomography sections for these patients having no evident pathology were analyzed to determine patient dimensions (AP and lateral), together with the average attenuation coefficient. Patient weights ranged from approximately 3 kg to about 120 kg. For chest CT, the mean Hounsfield unit (HU) fell from about -120 HU for newborns to about -300 HU for adults. For abdominal CT, the mean HU for children and normal-sized adults was about 20 HU, but decreased to below -50 HU for adults weighing more than 100 kg. The effective photon energy and percent energy fluence transmitted through a given patient size and composition were calculated for representative x-ray spectra at 80, 100, 120, and 140 kV tube potentials. A 70-kg adult scanned at 120 kVp transmits 2.6% of the energy fluence for chest and 0.7% for abdomen CT examinations. Reducing the patient size to 10 kg increases transmission by an order of magnitude. For 70 kg patients, effective energies in body CT range from approximately 50 keV at 80 kVp to approximately 67 keV at 140 kVp; increasing patient size from 10 to 120 kg resulted in an increase in effective photon energy of approximately 4 keV. The x-ray transmission data and effective photon energy data can be used to determine CT image noise and image contrast, respectively, and information on patient size and composition can be used to determine patient doses.
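The transmission figures quoted above are broadly consistent with simple Beer-Lambert attenuation. A rough monoenergetic sketch, using assumed values rather than the paper's fitted data (a water-like linear attenuation coefficient of about 0.206/cm near 60 keV and a ~24 cm adult abdomen path; the study itself integrates energy fluence over full x-ray spectra):

```python
import math

def transmission(mu_per_cm, thickness_cm):
    """Beer-Lambert narrow-beam attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative check with assumed values (not the paper's fitted data):
# water-like mu of ~0.206 /cm near 60 keV, ~24 cm adult abdomen path.
frac = transmission(0.206, 24.0)  # on the order of 0.007, i.e. ~0.7%
```

The result lands near the 0.7% abdomen transmission reported for a 70-kg adult at 120 kVp, which is the kind of order-of-magnitude consistency one would expect from a single effective energy.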
Bennett, Michael D; Leitch, Ilia J; Price, H James; Johnston, J Spencer
2003-04-01
Recent genome sequencing papers have given genome sizes of 180 Mb for Drosophila melanogaster Iso-1 and 125 Mb for Arabidopsis thaliana Columbia. The former agrees with early cytochemical estimates, but numerous cytometric estimates of around 170 Mb imply that a genome size of 125 Mb for arabidopsis is an underestimate. In this study, nuclei of species pairs were compared directly using flow cytometry. Co-run Columbia and Iso-1 female gave a 2C peak for arabidopsis only approx. 15 % below that for drosophila, and 16C endopolyploid Columbia nuclei had approx. 15 % more DNA than 2C chicken nuclei (with >2280 Mb). Caenorhabditis elegans Bristol N2 (genome size approx. 100 Mb) co-run with Columbia or Iso-1 gave a 2C peak for drosophila approx. 75 % above that for 2C C. elegans, and a 2C peak for arabidopsis approx. 57 % above that for C. elegans. This confirms that 1C in drosophila is approx. 175 Mb and, combined with other evidence, leads us to conclude that the genome size of arabidopsis is not approx. 125 Mb, but probably approx. 157 Mb. It is likely that the discrepancy represents extra repeated sequences in unsequenced gaps in heterochromatic regions. Complete sequencing of the arabidopsis genome until no gaps remain at telomeres, nucleolar organizing regions or centromeres is still needed to provide the first precise angiosperm C-value as a benchmark calibration standard for plant genomes, and to ensure that no genes have been missed in arabidopsis, especially in centromeric regions, which are clearly larger than once imagined.
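The co-run 2C peak ratios translate into genome sizes by simple proportion against the ~100 Mb C. elegans anchor, as a back-of-envelope check of the abstract's figures:

```python
# Genome sizes inferred from the co-run 2C peak ratios quoted above,
# anchored to the C. elegans Bristol N2 reference genome (~100 Mb):
CELEGANS_1C_MB = 100.0
drosophila_mb = CELEGANS_1C_MB * 1.75   # 2C peak ~75% above C. elegans -> ~175 Mb
arabidopsis_mb = CELEGANS_1C_MB * 1.57  # 2C peak ~57% above C. elegans -> ~157 Mb
```

Both values reproduce the abstract's conclusions: approx. 175 Mb for drosophila and approx. 157 Mb for arabidopsis, about 25% above the 125 Mb sequencing-based figure.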
NASA Astrophysics Data System (ADS)
Miyazaki, Yuzo; Kawamura, Kimitaka; Sawano, Maki
2010-12-01
Size-segregated aerosol samples were collected over the western North Pacific in summer 2008 to investigate marine biological contribution to organic aerosols. The samples were analyzed for organic carbon (OC), water-soluble organic carbon (WSOC), and water-soluble organic compounds including diacids (C2-C9), ω-oxocarboxylic acids, and α-dicarbonyls as well as methanesulfonic acid (MSA). The average concentrations of OC and oxalic acid (C2) were approximately two to three times larger in marine biologically more influenced aerosols, defined by the concentrations of MSA and azelaic acid (C9), than in less influenced aerosols. WSOC, which showed a statistically significant correlation with MSA, accounted for 15-21% of total mass of the components determined in the submicrometer range of biologically more influenced aerosols. These values are comparable to those of water-insoluble organic carbon (WIOC) (˜14-23%), suggesting that organic aerosols in this region are enriched in secondary organic aerosols (SOA) linked to oceanic biological activity. In these aerosols, substantial fractions of C2-C4 diacids were found in the submicrometer size range. Positive correlations of oxalic acid with C3-C5 diacids and glyoxylic acid suggest that secondary production of oxalic acid occurs possibly in the aqueous aerosol phase via the oxidation of longer-chain diacids and glyoxylic acid in the oceanic region with higher biological productivity. We found similar concentration levels and size distributions of methylglyoxal between the two types of aerosols, suggesting that formation of oxalic acid via the oxidation of methylglyoxal from marine isoprene is insignificant in the study region.
Kremen, William S; Prom-Wormley, Elizabeth; Panizzon, Matthew S; Eyler, Lisa T; Fischl, Bruce; Neale, Michael C; Franz, Carol E; Lyons, Michael J; Pacheco, Jennifer; Perry, Michele E; Stevens, Allison; Schmitt, J Eric; Grant, Michael D; Seidman, Larry J; Thermenos, Heidi W; Tsuang, Ming T; Eisen, Seth A; Dale, Anders M; Fennema-Notestine, Christine
2010-01-15
The impact of genetic and environmental factors on human brain structure is of great importance for understanding normative cognitive and brain aging as well as neuropsychiatric disorders. However, most studies of genetic and environmental influences on human brain structure have either focused on global measures or have had samples that were too small for reliable estimates. Using the classical twin design, we assessed genetic, shared environmental, and individual-specific environmental influences on individual differences in the size of 96 brain regions of interest (ROIs). Participants were 474 middle-aged male twins (202 pairs; 70 unpaired) in the Vietnam Era Twin Study of Aging (VETSA). They were 51-59 years old, and were similar to U.S. men in their age range in terms of sociodemographic and health characteristics. We measured thickness of cortical ROIs and volume of other ROIs. On average, genetic influences accounted for approximately 70% of the variance in the volume of global, subcortical, and ventricular ROIs and approximately 45% of the variance in the thickness of cortical ROIs. There was greater variability in the heritability of cortical ROIs (0.00-0.75) as compared with subcortical and ventricular ROIs (0.48-0.85). The results did not indicate lateralized heritability differences or greater genetic influences on the size of regions underlying higher cognitive functions. The findings provide key information for imaging genetic studies and other studies of brain phenotypes and endophenotypes. Longitudinal analysis will be needed to determine whether the degree of genetic and environmental influences changes for different ROIs from midlife to later life.
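The classical twin design rests on comparing monozygotic and dizygotic twin similarity. As a hedged illustration of the logic, the textbook Falconer decomposition is sketched below; the study itself fit full ACE structural models, and the correlations used here are hypothetical, chosen only so that the heritability echoes the ~70% subcortical figure:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer-style decomposition from twin correlations: additive genetic
    variance (a2), shared environment (c2), unique environment (e2). A simple
    stand-in for the full ACE model fitting used in twin studies."""
    a2 = 2.0 * (r_mz - r_dz)  # MZ twins share all, DZ twins half, of additive genes
    c2 = 2.0 * r_dz - r_mz    # shared environment raises both correlations equally
    e2 = 1.0 - r_mz           # whatever even MZ twins do not share
    return a2, c2, e2

# Hypothetical correlations chosen so a2 comes out near 0.70:
a2, c2, e2 = falconer_ace(r_mz=0.80, r_dz=0.45)
```

The three components sum to one by construction, which is a quick sanity check when applying the formula to reported twin correlations.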
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. 
We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
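For reference, the estimators commonly cited from this line of work can be sketched as follows: one scenario with the minimum, median, and maximum, and one with the quartiles and median. This is a sketch assuming the standard normal-quantile forms; the paper's own summary table is the authoritative source for the formulas:

```python
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf  # standard normal quantile function

def mean_sd_from_range(a, m, b, n):
    """Estimate mean and SD from min a, median m, max b, sample size n."""
    mean = (a + 2.0 * m + b) / 4.0
    sd = (b - a) / (2.0 * PHI_INV((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_iqr(q1, m, q3, n):
    """Estimate mean and SD from the quartiles q1, q3 and median m."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2.0 * PHI_INV((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd
```

Both SD estimators incorporate the sample size through the normal quantile, which is the improvement over Hozo et al.'s range-based rule that the abstract highlights.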
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
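The "thermodynamic equivalence of ensembles" step is easy to illustrate: once the density of states g(E) is known, any canonical average, including the specific heat, follows by direct summation. A minimal sketch using the exactly known density of states of the 2×2 periodic Ising lattice (illustrative only, not the authors' Wang-Landau code):

```python
import math

def specific_heat(dos, T, n_sites):
    """Canonical specific heat per site from a density of states g(E):
    C(T) = (<E^2> - <E>^2) / (N T^2), with k_B = 1."""
    e0 = min(dos)  # shift by the ground-state energy for numerical stability
    weights = {E: g * math.exp(-(E - e0) / T) for E, g in dos.items()}
    Z = sum(weights.values())
    e1 = sum(E * w for E, w in weights.items()) / Z
    e2 = sum(E * E * w for E, w in weights.items()) / Z
    return (e2 - e1 * e1) / (n_sites * T * T)

# Exact density of states of the 2x2 Ising lattice with periodic boundaries (J = 1):
dos_2x2 = {-8: 2, 0: 12, 8: 2}
c = specific_heat(dos_2x2, T=2.27, n_sites=4)
```

The same summation over a Wang-Landau estimate of g(E), restricted to the critical part of the energy space as proposed above, is what makes the extension to larger lattices feasible.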
Granberg, Sarah; Dahlström, Jennie; Möller, Claes; Kähäri, Kim; Danermark, Berth
2014-02-01
To review the literature in order to identify outcome measures used in research on adults with hearing loss (HL) as part of the ICF Core Sets development project, and to describe study and population characteristics of the reviewed studies. A systematic review methodology was applied using multiple databases. A comprehensive search was conducted and two search pools were created, pool I and pool II. The study population included adults (≥ 18 years of age) with HL and oral language as the primary mode of communication. 122 studies were included. Outcome measures were distinguished by 'instrument type', and 10 types were identified. In total, 246 (pool I) and 122 (pool II) different measures were identified, and only approximately 20% were extracted twice or more. Most measures were related to speech recognition. Fifty-one different questionnaires were identified. Many studies used small sample sizes, and the sex of participants was not revealed in several studies. The low prevalence of identified measures reflects a lack of consensus regarding the optimal outcome measures to use in audiology. Reflections and discussions are made in relation to small sample sizes and the lack of sex differentiation/descriptions within the included articles.
Validity of the Brunel Mood Scale for use With Malaysian Athletes.
Lan, Mohamad Faizal; Lane, Andrew M; Roy, Jolly; Hanin, Nik Azma
2012-01-01
The aim of the present study was to investigate the factorial validity of the Brunel Mood Scale for use with Malaysian athletes. Athletes (N = 1485) competing at the Malaysian Games completed the Brunel Mood Scale (BRUMS). Confirmatory Factor Analysis (CFA) results indicated a Comparative Fit Index (CFI) of .90 and a Root Mean Squared Error of Approximation (RMSEA) of 0.05. The CFI was below the 0.95 criterion for acceptability, and the RMSEA value was within the limits for acceptability suggested by Hu and Bentler (1999). We suggest that the results provide some support for the validity of the BRUMS for use with Malaysian athletes. Given the large sample size used in the present study, descriptive statistics could be used as normative data for Malaysian athletes. Key points: Findings from the present study lend support to the validity of the BRUMS for use with Malaysian athletes. Given the size of the sample used in the present study, we suggest descriptive data be used as the normative data for researchers using the scale with Malaysian athletes. It is suggested that future research investigate the effects of cultural differences on emotional states experienced by athletes before, during, and post-competition.
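For context, the RMSEA reported above is derived from the CFA chi-square by a standard formula. The abstract does not give the chi-square or degrees of freedom, so the sketch below uses the conventional definition with hypothetical inputs:

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a model chi-square:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))). Values at or below ~0.06 are
    conventionally read as acceptable fit (Hu and Bentler, 1999)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of zero, and large samples (such as N = 1485 here) shrink the RMSEA for a given chi-square excess.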
Krishnan, Kapil; Brown, Andrew; Wayne, Leda; ...
2014-11-25
Local microstructural weak links for spall damage were investigated using three-dimensional (3-D) characterization in multicrystalline copper samples (grain size ≈ 450 µm) shocked with laser-driven plates at low pressures (2 to 4 GPa). The thickness of samples and flyer plates, approximately 1000 and 500 µm respectively, led to short pressure pulses that allowed isolating microstructure effects on local damage characteristics. Electron Backscattering Diffraction and optical microscopy were used to relate the presence, size, and shape of porosity to local microstructure. The experiments were complemented with 3-D finite element simulations of individual grain boundaries (GBs) that resulted in large damage volumes using crystal plasticity coupled with a void nucleation and growth model. Results from analysis of these damage sites show that the presence of a GB-affected zone, where strain concentration occurs next to a GB, correlates strongly with damage localization at these sites, most likely due to the inability of maintaining strain compatibility across these interfaces, with additional effects due to the inclination of the GB with respect to the shock. Results indicate that strain compatibility plays an important role in intergranular spall damage in metallic materials.
The effect of ultrasound on casein micelle integrity.
Chandrapala, J; Martin, G J O; Zisu, B; Kentish, S E; Ashokkumar, M
2012-12-01
Samples of fresh skim milk, reconstituted micellar casein, and casein powder were sonicated at 20 kHz to investigate the effect of ultrasonication. For fresh skim milk, the average size of the remaining fat globules was reduced by approximately 10 nm after 60 min of sonication; however, the size of the casein micelles was determined to be unchanged. A small increase in soluble whey protein and a corresponding decrease in viscosity also occurred within the first few minutes of sonication, which could be attributed to the breakup of casein-whey protein aggregates. No measurable changes in free casein content could be detected in ultracentrifuged skim milk samples sonicated for up to 60 min. A small, temporary decrease in pH resulted from sonication; however, no measurable change in soluble calcium concentration was observed. Therefore, casein micelles in fresh skim milk were stable during the exposure to ultrasonication. Similar results were obtained for reconstituted micellar casein, whereas larger viscosity changes were observed as whey protein content was increased. Controlled application of ultrasound can be usefully applied to reverse process-induced protein aggregation without affecting the native state of casein micelles. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Aad, G; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Almond, J; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Araque, J P; Arce, A T H; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Azuelos, G; Azuma, Y; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; 
Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bartsch, V; Bassalat, A; Basye, A; Bates, R L; Batley, J R; Battaglia, M; Battistin, M; Bauer, F; Bawa, H S; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernat, P; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia Bylund, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boddy, C R; Boehler, M; Boek, T T; Bogaerts, J A; Bogdanchikov, A G; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; 
Brendlinger, K; Brennan, A J; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bundock, A C; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charfeddine, D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; 
Cheu, E; Chevalier, L; Chiarella, V; Chiefari, G; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Chouridou, S; Chow, B K B; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirkovic, P; Citron, Z H; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuciuc, C-M; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Daniells, A C; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davignon, O; Davison, A R; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, 
L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dimitrievska, A; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Do Valle Wemans, A; Dobos, D; Doglioni, C; Doherty, T; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Dudziak, F; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Dwuznik, M; Dyndal, M; Ebke, J; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Eriksson, D; Ernis, G; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; 
Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Formica, A; Forti, A; Fortin, D; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Franconi, L; Franklin, M; Franz, S; Fraternali, M; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gianotti, F; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giorgi, F M; Giraud, P F; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Glonti, G L; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, 
P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Grebenyuk, O G; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Groth-Jensen, J; Grout, Z J; Guan, L; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guttman, N; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K 
H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, K E; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jungst, R M; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; Kastanas, 
A; Kataoka, Y; Katre, A; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kimura, N; Kind, O M; King, B T; King, M; King, R S B; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuehn, S; Kugel, A; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; La Rosa, A; La Rotonda, L; Lacasta, 
C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laier, H; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, B A; Long, J D; Long, R E; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lou, X; Lounis, A; Love, J; Love, P A; Lowe, A J; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, 
W F; Madsen, A; Maeno, T; Maeno Kataoka, M; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Maiani, C; Maidantchik, C; Maier, A A; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, L; March, L; Marchand, J F; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marques, C N; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Mechnich, J; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitani, T; Mitrevski, J; Mitsou, V A; Mitsui, S; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; 
Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; 
Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrella, S; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, 
J; Przybycien, M; Przysiezniak, H; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; 
Sarkisyan-Grinbaum, E; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; Slater, M; 
Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, 
P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; 
Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, 
K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
The paper presents studies of Bose-Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb^-1, 190 μb^-1 and 12.4 nb^-1 for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source-size parameter is observed using the high-multiplicity 7 TeV data sample. The dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
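The correlation-strength and source-size parameters described above are typically extracted by fitting the two-particle correlation function with a simple parametrisation. Below is a minimal sketch, assuming the exponential form C2(Q) = C0(1 + λ exp(-RQ)) on synthetic data; the numbers are illustrative, not ATLAS data, and this is not the collaboration's fit code.

```python
import numpy as np
from scipy.optimize import curve_fit

def c2(Q, c0, lam, R):
    # Exponential BEC parametrisation: correlation strength lam and
    # source-size parameter R (Q in GeV, R in GeV^-1 in this sketch).
    return c0 * (1.0 + lam * np.exp(-R * Q))

# Synthetic correlation-function data with small Gaussian noise
rng = np.random.default_rng(0)
Q = np.linspace(0.02, 2.0, 100)
data = c2(Q, 1.0, 0.6, 10.0) + rng.normal(0.0, 0.01, Q.size)

# Fit recovers the assumed strength and size parameters
popt, pcov = curve_fit(c2, Q, data, p0=[1.0, 0.5, 8.0])
c0_fit, lam_fit, R_fit = popt
```

The same machinery applies whether the source is parametrised as exponential or Gaussian; only the model function changes.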
(Fe II) 1.53 and 1.64 micron emission from pre-main-sequence stars
NASA Technical Reports Server (NTRS)
Hamann, Fred; Simon, Michal; Carr, John S.; Prato, Lisa
1994-01-01
We present flux-calibrated profiles of the (Fe II) 1.53 and 1.64 micron lines in five pre-main-sequence stars, PV Cep, V1331 Cyg, R Mon, and DG and HL Tau. The line centroids are blueshifted in all five sources, and four of the five have only blueshifted flux. In agreement with previous studies, we attribute the line asymmetries to local obscuration by dusty circumstellar disks. The absence of redshifted flux implies a minimum column density of obscuring material. The largest limit, N(sub H) greater than 3 x 10(exp 22)/sq cm, derived for V1331 Cyg, suggests disk surface densities greater than 0.05 g/sq cm and disk masses greater than 0.001 solar mass within a radius of approximately 200 AU. The narrow high-velocity lines in PV Cep, V1331 Cyg, and HL Tau require formation in well collimated winds. The maximum full opening angles of their winds range from less than 20 deg in V1331 Cyg to less than 40 deg in HL Tau. The (Fe II) data also yield estimates of the electron densities (n(sub e) approximately 10(exp 4)/cu cm), hydrogen ionization fractions (f(sub H(+)) approximately 1/3), mass-loss rates (approximately 10(exp -7) to 2 x 10(exp -6) solar mass/yr), and characteristic radii of the emitting regions (approximately 32 to approximately 155 AU). The true radial extents will be larger, and the mass-loss rates smaller, by factors of a few for the outflows with limited opening angles. In our small sample the higher mass stars have stronger lines, larger emitting regions, and greater mass-loss rates. These differences are probably limited to the scale and energetics of the envelopes, because the inferred geometries, kinematics and physical conditions are similar. The measured (Fe II) profiles sample both 'high'- and 'low'-velocity environments. Recent studies indicate that these regions have some distinct physical properties and may be spatially separate. The (Fe II) data show that similar sizes and densities can occur in both environments.
(Fe II) 1.53 and 1.64 micron emission from pre-main-sequence stars
NASA Astrophysics Data System (ADS)
Hamann, Fred; Simon, Michal; Carr, John S.; Prato, Lisa
1994-11-01
We present flux-calibrated profiles of the (Fe II) 1.53 and 1.64 micron lines in five pre-main-sequence stars, PV Cep, V1331 Cyg, R Mon, and DG and HL Tau. The line centroids are blueshifted in all five sources, and four of the five have only blueshifted flux. In agreement with previous studies, we attribute the line asymmetries to local obscuration by dusty circumstellar disks. The absence of redshifted flux implies a minimum column density of obscuring material. The largest limit, N_H greater than 3 x 10^22/sq cm, derived for V1331 Cyg, suggests disk surface densities greater than 0.05 g/sq cm and disk masses greater than 0.001 solar mass within a radius of approximately 200 AU. The narrow high-velocity lines in PV Cep, V1331 Cyg, and HL Tau require formation in well collimated winds. The maximum full opening angles of their winds range from less than 20 deg in V1331 Cyg to less than 40 deg in HL Tau. The (Fe II) data also yield estimates of the electron densities (n_e approximately 10^4/cu cm), hydrogen ionization fractions (f_H(+) approximately 1/3), mass-loss rates (approximately 10^-7 to 2 x 10^-6 solar mass/yr), and characteristic radii of the emitting regions (approximately 32 to approximately 155 AU). The true radial extents will be larger, and the mass-loss rates smaller, by factors of a few for the outflows with limited opening angles. In our small sample the higher mass stars have stronger lines, larger emitting regions, and greater mass-loss rates. These differences are probably limited to the scale and energetics of the envelopes, because the inferred geometries, kinematics and physical conditions are similar. The measured (Fe II) profiles sample both 'high'- and 'low'-velocity environments. Recent studies indicate that these regions have some distinct physical properties and may be spatially separate. The (Fe II) data show that similar sizes and densities can occur in both environments.
DOT National Transportation Integrated Search
2009-04-01
This paper studies approximations to the average length of Vehicle Routing Problems (VRP). The approximations are valuable for strategic and : planning analysis of transportation and logistics problems. The research focus is on VRP with varying numbe...
The Kinetics of Crystallization of Colloids and Proteins: A Light Scattering Study
NASA Technical Reports Server (NTRS)
McClymer, Jim
2002-01-01
Hard-sphere colloidal systems serve as model systems for aggregation, nucleation, crystallization and gelation as well as interesting systems in their own right. There is strong current interest in using colloidal systems to form photonic crystals. A major scientific thrust of NASA's microgravity research is the crystallization of proteins for structural determination. The crystallization of proteins is a complicated process that requires a great deal of trial and error experimentation. In spite of a great deal of work, "better" protein crystals cannot always be grown in microgravity and conditions for crystallization are not well understood. Crystallization of colloidal systems interacting as hard spheres and with an attractive potential induced by entropic forces has been studied in a series of static light scattering experiments. Additionally, aggregation of a protein as a function of pH has been studied using dynamic light scattering. For our experiments we used PMMA (poly(methyl methacrylate)) spherical particles interacting as hard spheres, with no attractive potential. These particles have a radius of 304 nanometers, a density of 1.22 g/ml and an index of refraction of 1.52. A PMMA colloidal sample at a volume fraction of approximately 54% was index matched in a solution of cycloheptyl bromide (CHB) and cis-decalin. The sample is in a glass cylindrical vial that is placed in an ALV static and dynamic light scattering goniometer system. The vial is immersed in a toluene bath for index matching to minimize flare. Vigorous shaking melts any colloidal crystals initially present. The sample is illuminated with diverging laser light (632.8 nanometers) from a 4x microscope objective placed so that the beam is approximately 1 cm in diameter at the sample location. The sample is rotated about its long axis at approximately 3.5 revolutions per minute (highest speed) as the colloidal crystal system is non-ergodic.
The scattered light is detected at various angles using the ALV light detection optics and fed into an APD detector module linked to a computer. The scattering angle (between 12 and 160 degrees), scattering angle step size (0.1 degree minimum) and acquisition time (minimum 3 s) are set by the user.
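The angular range quoted above maps onto a range of scattering vectors via q = 4πn sin(θ/2)/λ. A small sketch, using the laser wavelength from the text and taking the medium index equal to the particle index of 1.52 quoted above (an assumption implied by index matching, not stated explicitly):

```python
import numpy as np

def scattering_vector(theta_deg, wavelength_nm=632.8, n_medium=1.52):
    # Magnitude of the scattering vector q = 4*pi*n*sin(theta/2)/lambda,
    # returned in inverse nanometres.
    theta = np.radians(theta_deg)
    return 4.0 * np.pi * n_medium * np.sin(theta / 2.0) / wavelength_nm

# q range probed by the 12-160 degree goniometer span quoted in the text
q_min = scattering_vector(12.0)
q_max = scattering_vector(160.0)
```

For 304 nm spheres, this q range comfortably straddles the first structure-factor peak of the colloidal crystal, which is why the angular scan can track crystallization.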
Skrbinšek, Tomaž; Jelenčič, Maja; Waits, Lisette; Kos, Ivan; Jerina, Klemen; Trontelj, Peter
2012-02-01
The effective population size (N(e)) could be the ideal parameter for monitoring populations of conservation concern as it conveniently summarizes both the evolutionary potential of the population and its sensitivity to genetic stochasticity. However, tracing its change through time is difficult in natural populations. We applied four new methods for estimating N(e) from a single sample of genotypes to trace temporal change in N(e) for bears in the Northern Dinaric Mountains. We genotyped 510 bears using 20 microsatellite loci and determined their age. The samples were organized into cohorts with regard to the year when the animals were born and yearly samples with age categories for every year when they were alive. We used the Estimator by Parentage Assignment (EPA) to directly estimate both N(e) and generation interval for each yearly sample. For cohorts, we estimated the effective number of breeders (N(b)) using linkage disequilibrium, sibship assignment and approximate Bayesian computation methods and extrapolated these estimates to N(e) using the generation interval. The N(e) estimate by EPA is 276 (183-350 95% CI), meeting the inbreeding-avoidance criterion of N(e) > 50 but short of the long-term minimum viable population goal of N(e) > 500. The results obtained by the other methods are highly consistent with this result, and all indicate a rapid increase in N(e) probably in the late 1990s and early 2000s. The new single-sample approaches to the estimation of N(e) provide efficient means for including N(e) in monitoring frameworks and will be of great importance for future management and conservation. © 2012 Blackwell Publishing Ltd.
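As a rough illustration of the single-sample linkage-disequilibrium approach mentioned above, Hill's (1981) drift expectation E[r²] ≈ 1/(3Ne) + 1/S for unlinked loci can be inverted for Ne. The study itself used bias-corrected software implementations; the sketch and its numbers below are purely illustrative.

```python
def ne_from_ld(r2_mean, S):
    # Simplified LD estimator of effective size: for unlinked loci,
    # E[r^2] ~ 1/(3*Ne) + 1/S, so Ne ~ 1 / (3 * (r2_mean - 1/S)).
    # Sketch only; real estimators apply sample-size bias corrections.
    return 1.0 / (3.0 * (r2_mean - 1.0 / S))

# Hypothetical mean squared gametic correlation across locus pairs
# for a sample of 500 genotypes (illustrative values, not study data)
ne = ne_from_ld(0.0031, 500)
```

The subtraction of 1/S is what removes the sampling contribution to the observed disequilibrium, leaving only the drift signal that carries information about Ne.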
Method and apparatus for measuring spatial uniformity of radiation
Field, Halden
2002-01-01
A method and apparatus for measuring the spatial uniformity of the intensity of a radiation beam from a radiation source based on a single sampling time and/or a single pulse of radiation. The measuring apparatus includes a plurality of radiation detectors positioned on a planar mounting plate to form a radiation receiving area that has a shape and size approximating the size and shape of the cross section of the radiation beam. The detectors concurrently receive portions of the radiation beam and transmit electrical signals representative of the intensity of impinging radiation to a signal processor circuit connected to each of the detectors and adapted to concurrently receive the electrical signals from the detectors and process the signals with a central processing unit (CPU) to determine the intensities of the radiation impinging at each detector location. The CPU displays the determined intensities and relative intensity values corresponding to each detector location to an operator of the measuring apparatus on an included data display device. Concurrent sampling of each detector is achieved by connecting to each detector a sample-and-hold circuit that is configured to track the signal and store it upon receipt of a "capture" signal. A switching device then selectively retrieves the signals and transmits them to the CPU through a single analog-to-digital (A/D) converter. The "capture" signal is then removed from the sample-and-hold circuits. Alternatively, concurrent sampling is achieved by providing an A/D converter for each detector, each of which transmits a corresponding digital signal to the CPU. The sampling or reading of the detector signals can be controlled by the CPU or by a level-detection and timing circuit.
Instruction manual, Optical Effects Module, Model OEM
NASA Technical Reports Server (NTRS)
1975-01-01
The Optical Effects Module Model OEM-1, a laboratory prototype instrument designed for the automated measurement of radiation transmission and scattering through optical samples, is described. The system comprises two main components: the Optical Effects Module Enclosure (OEME) and the Optical Effects Module Electronic Controller and Processor (OEMCP). The OEM is designed for operation in the near UV at approximately 2540 A, corresponding to the most intense spectral line activated by the mercury discharge lamp used for illumination. The radiation from this source is detected in transmission and reflection through a number of selectable samples. The basic objective of this operation is to monitor in real time the accretion of possible contamination on the surface of these samples. The optical samples are exposed outside of the OEME proper to define exposure conditions and to separate exposure and measurement environments. Changes in the transmissivity of the sample are attributable to surface contamination or to bulk effects due to radiation. Surface contamination will increase radiation scattering due to the Rayleigh-Gans effect or to other phenomena, depending on the characteristic size of the particulate contaminants. Thus, scattering from the samples also becomes part of the measurement program.
NASA Astrophysics Data System (ADS)
Andrews, Stephen K.; Kelvin, Lee S.; Driver, Simon P.; Robotham, Aaron S. G.
2014-01-01
The 2MASS, UKIDSS-LAS, and VISTA VIKING surveys have all now observed the GAMA 9hr region in the Ks band. Here we compare the detection rates, photometry, basic size measurements, and single-component GALFIT structural measurements for a sample of 37 591 galaxies. We explore the sensitivity limits where the data agree for a variety of issues including: detection, star-galaxy separation, photometric measurements, size and ellipticity measurements, and Sérsic measurements. We find that 2MASS fails to detect at least 20% of the galaxy population within all magnitude bins; however, for those that are detected, we find photometry is robust (± 0.2 mag) to 14.7 AB mag and star-galaxy separation to 14.8 AB mag. For UKIDSS-LAS we find incompleteness starts to enter at a flux limit of 18.9 AB mag, star-galaxy separation is robust to 16.3 AB mag, and structural measurements are robust to 17.7 AB mag. VISTA VIKING data are complete to approximately 20.0 AB mag and structural measurements appear robust to 18.8 AB mag.
Quantitative study of fungiform papillae and taste buds on the cat's tongue.
Robinson, P P; Winkles, P A
1990-01-01
The number of fungiform papillae has been counted on the tongues of six adult cats and of kittens both at birth and aged 2 and 4 months. Papillae were sampled from different regions of the tongue, and their size and the number of taste buds they contained were determined using histological sections taken parallel to the tongue surface. There were approximately 250 fungiform papillae on the tongues of the adult cats, the papillae were most numerous at the tip of the tongue, and there was no significant difference between the number of papillae on each side. The size of the papillae increased from a mean maximum diameter of 0.28 mm at the tip of the tongue to 0.48 mm at the back; the mean number of taste buds increased correspondingly from 6.9 to 16.6. The kitten tongues had a number and distribution of fungiform papillae similar to that found in the adults. In the neonate, papillae were smaller and contained fewer taste buds; these parameters increased with the corresponding increase in tongue size in the 2- and 4-month-old kittens.
NASA Astrophysics Data System (ADS)
Lynch, James F.; Irish, James D.; Sherwood, Christopher R.; Agrawal, Yogesh C.
1994-08-01
During the winter of 1990-1991 an Acoustic BackScatter System (ABSS), five Optical Backscatterance Sensors (OBSs) and a Laser In Situ Settling Tube (LISST) were deployed in 90 m of water off the California coast for 3 months as part of the Sediment Transport Events on Shelves and Slopes (STRESS) experiment. By looking at sediment transport events with both optical (OBS) and acoustic (ABSS) sensors, one obtains information about the size of the particles transported as well as their concentration. Specifically, we employ two different methods of estimating "average particle size". First, we use vertical scattering intensity profile slopes (acoustical and optical) to infer average particle size using a Rouse profile model of the boundary layer and a Stokes law fall velocity assumption. Second, we use a combination of optics and acoustics to form a multifrequency (two frequency) inverse for the average particle size. These results are compared to independent observations from the LISST instrument, which measures the particle size spectrum in situ using laser diffraction techniques. Rouse profile based inversions for particle size are found to be in good agreement with the LISST results except during periods of transport event initiation, when the Rouse profile is not expected to be valid. The two frequency inverse, which is boundary layer model independent, worked reasonably well during all periods, with average particle sizes correlating well with the LISST estimates. In order to further corroborate the particle size inverses from the acoustical and optical instruments, we also examined size spectra obtained from in situ sediment grab samples and water column samples (suspended sediments), as well as laboratory tank experiments using STRESS sediments. Again, good agreement is noted. The laboratory tank experiment also allowed us to study the acoustical and optical scattering law characteristics of the STRESS sediments.
It is seen that, for optics, using the cross-sectional area of an equivalent sphere is a very good first approximation, whereas for acoustics, which is most sensitive in the region ka ˜ 1, the particle volume itself is best sensed. In conclusion, we briefly interpret the history of some STRESS transport events in light of the size distribution and other information available. For one of the events, "anomalous" suspended particle size distributions are noted, i.e. larger particles are seen suspended before finer ones. Speculative hypotheses for why this signature is observed are presented.
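The Rouse-profile inversion described above can be sketched as follows: the log-log slope of the concentration profile gives the Rouse parameter P = ws/(κu*), and Stokes' law converts the implied settling velocity ws into a diameter. All numerical values below (densities, viscosity, friction velocity) are illustrative assumptions, not STRESS measurements.

```python
import numpy as np

def stokes_diameter(ws, rho_s=2650.0, rho_w=1025.0, mu=1.07e-3, g=9.81):
    # Invert Stokes' law ws = (rho_s - rho_w) * g * d**2 / (18 * mu)
    # for the particle diameter d (SI units throughout).
    return np.sqrt(18.0 * mu * ws / ((rho_s - rho_w) * g))

def rouse_size(z, conc, u_star, kappa=0.40):
    # Fit ln C = const - P ln z; the Rouse parameter P = ws / (kappa * u*)
    # then yields the settling velocity and, via Stokes' law, a size.
    slope, _ = np.polyfit(np.log(z), np.log(conc), 1)
    ws = -slope * kappa * u_star
    return stokes_diameter(ws)

# Synthetic profile built from a known 20-micron particle (round trip)
d_true = 20e-6
ws_true = (2650.0 - 1025.0) * 9.81 * d_true**2 / (18.0 * 1.07e-3)
u_star = 0.01                      # assumed friction velocity, m/s
P_true = ws_true / (0.40 * u_star)
z = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # heights above bed, m
conc = z ** (-P_true)              # Rouse power-law profile (relative)
d_est = rouse_size(z, conc, u_star)
```

As the abstract notes, this inversion is only as good as the Rouse-profile assumption; during event initiation the profile has not yet equilibrated and the slope-derived size is biased.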
Grinshpun, Sergey A; Adhikari, Atin; Honda, Takeshi; Kim, Ki Youn; Toivola, Mika; Rao, K S Ramchander; Reponen, Tiina
2007-01-15
An indoor air purification technique, which combines unipolar ion emission and photocatalytic oxidation (promoted by a specially designed RCI cell), was investigated in two test chambers, 2.75 m3 and 24.3 m3, using nonbiological and biological challenge aerosols. The reduction in particle concentration was measured size selectively in real time, and the Air Cleaning Factor and the Clean Air Delivery Rate (CADR) were determined. While testing with virions and bacteria, bioaerosol samples were collected and analyzed, and the microorganism survival rate was determined as a function of exposure time. We observed that the aerosol concentration decreased approximately 10 to approximately 100 times more rapidly when the purifier operated than under natural decay. The data suggest that the tested portable unit operating in an approximately 25 m3 non-ventilated room is capable of providing CADR values more than twice as great as those of the conventional closed-loop HVAC system with a rating-8 filter. The particle removal occurred due to unipolar ion emission, while the inactivation of viable airborne microorganisms was associated with photocatalytic oxidation. Approximately 90% of initially viable MS2 viruses were inactivated after 10 to 60 min of exposure to the photocatalytic oxidation. Approximately 75% of viable B. subtilis spores were inactivated in 10 min, and about 90% or greater after 30 min. The biological and chemical mechanisms that led to the inactivation of stress-resistant airborne viruses and bacterial spores were reviewed.
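The CADR mentioned above is commonly derived from first-order decay rates measured with and without the purifier running: CADR = V(k_purifier - k_natural). A sketch with synthetic decay data, using the 24.3 m3 chamber volume from the text; the rate constants are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def decay_rate(times_min, conc):
    # Least-squares slope of ln(concentration) vs time gives the
    # first-order decay rate k (per minute).
    slope, _ = np.polyfit(times_min, np.log(conc), 1)
    return -slope

def cadr(volume_m3, k_purifier, k_natural):
    # Clean Air Delivery Rate: chamber volume times the decay-rate
    # increase attributable to the purifier (m^3 per minute).
    return volume_m3 * (k_purifier - k_natural)

t = np.arange(0.0, 31.0, 5.0)          # sampling times, minutes
c_nat = 1000.0 * np.exp(-0.01 * t)     # natural decay, k = 0.01 /min
c_pur = 1000.0 * np.exp(-0.12 * t)     # purifier on, k = 0.12 /min
rate = cadr(24.3, decay_rate(t, c_pur), decay_rate(t, c_nat))
```

Subtracting the natural-decay rate is what isolates the purifier's contribution from settling, wall losses and leakage.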
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in the way that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long time scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations also makes SGLD an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
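The idea of a momentum-based guiding force can be illustrated with a toy one-dimensional Langevin integrator on a double well, where the guiding force is proportional to a running average of the momentum. This is a sketch of the general self-guiding idea only, not the SGLDfp method or its actual implementation, and every parameter below is an arbitrary assumption.

```python
import numpy as np

def guided_langevin(steps=20000, dt=0.01, gamma=1.0, kT=1.0,
                    guide=0.2, avg_time=1.0, seed=0):
    # 1D Langevin dynamics on the double well U(x) = (x**2 - 1)**2 with
    # a guiding force proportional to a running momentum average (the
    # qualitative SGLD idea).  Returns the number of barrier crossings.
    rng = np.random.default_rng(seed)
    x, p, p_avg = -1.0, 0.0, 0.0
    alpha = dt / avg_time                  # running-average weight
    sigma = np.sqrt(2.0 * gamma * kT * dt) # fluctuation amplitude
    crossings, prev_side = 0, -1
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)   # -dU/dx
        p_avg = (1.0 - alpha) * p_avg + alpha * p
        g = guide * gamma * p_avg          # self-guiding force
        p += dt * (force - gamma * p + g) + sigma * rng.normal()
        x += dt * p
        side = 1 if x > 0 else -1
        if side != prev_side:
            crossings += 1
            prev_side = side
    return crossings
```

Because the guiding force feeds back low-frequency momentum, it preferentially boosts the slow, barrier-crossing motions; the SGLDfp variant described above additionally includes force terms so the sampled distribution stays approximately canonical.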
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
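The power assessments discussed above follow the standard normal-approximation formula for a two-sample comparison with standardized mean difference d: n per group = 2((z_{1-α/2} + z_{1-β})/d)². A minimal sketch:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    # Normal-approximation sample size per group for a two-sample
    # comparison of means with standardized mean difference d.
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2.0 * ((z_a + z_b) / d) ** 2)

small, moderate = n_per_group(0.3), n_per_group(0.5)
```

At 80% power and α = 0.05 this gives 63 per group for d = 0.5 but 175 per group for d = 0.3, which makes clear why the reviewed trials, averaging 153 participants in total, were rarely powered for small effects.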
Frequency-scanning particle size spectrometer
NASA Technical Reports Server (NTRS)
Fymat, A. L. (Inventor)
1979-01-01
A particle size spectrometer having a fixed field of view within the forward light scattering cone at an angle theta sub s between approximately 100 and 200 minutes of arc (preferably at 150 minutes), a spectral range extending approximately from 0.2 to 4.0 inverse micrometers, and a spectral resolution between about 0.1 and 0.2 inverse micrometers (preferably toward the lower end of this range of spectral resolution), is employed to determine the distribution of particle sizes, independently of the chemical composition of the particles, from measurements of incident light, at each frequency, sigma (=1/lambda), and scattered light, I(sigma).
NASA Astrophysics Data System (ADS)
Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian
2018-03-01
We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2 θ ≲ 10^{-6}, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2 θ ~ 5 × 10^{-9} at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, the pre-treatment needed for samples is minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. To this end, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target site soils in which the calibration is to be used. Many times this premise is not easy to fulfil, especially in libraries recently developed. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of the successive addition of samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. But we hypothesized that calibrations with fewer samples (smaller size) would more easily absorb the spectral characteristics of the target site. Thus, we suspect that the size of the calibration (model) that is to be repopulated could be important. For this reason we also studied its effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl Nitrogen (NKj) content (nearly 1500 samples). First, those spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes.
We used partial least squares regression, with leave-one-out cross validation as the method of calibration. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross validation). This procedure was sequential. In each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated successively 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in repopulation. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, scarce differences were found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. The repopulation caused scarce changes in the r2 of the predictions in sites 2 and 4, maybe due to the high initial values (using non-repopulated models, r2 > 0.90). As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before the repopulation (0.4 g×kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models.
Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. The r2 of predictions obtained with larger models was not better than that obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size could be repopulated and "converted" into local calibrations. According to this, we can focus most of the effort on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".
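The repopulate-and-recalibrate loop described above can be sketched as follows. For brevity the sketch uses ordinary least squares in place of the study's PLS regression, and the "spectra" are synthetic; only the repopulation logic mirrors the text (add two target-site samples at a time, recalibrate, then validate on held-out target-site samples).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "library" spectra (100 samples x 20 bands); the analyte
# (e.g. Kjeldahl N) is a linear function of a few bands plus noise.
X_lib = rng.normal(size=(100, 20))
w = np.zeros(20)
w[[2, 7, 13]] = [0.8, -0.5, 0.3]
y_lib = X_lib @ w + rng.normal(0.0, 0.05, 100)

# Target-site samples: 20 for repopulation, 10 held out for validation.
X_site = rng.normal(size=(30, 20))
y_site = X_site @ w + rng.normal(0.0, 0.05, 30)

def calibrate(X, y):
    # Least-squares calibration (a stand-in for the study's PLS models).
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def rmsep(coef, X, y):
    # Root-mean-square error of prediction on held-out samples.
    return float(np.sqrt(np.mean((X @ coef - y) ** 2)))

coef = calibrate(X_lib, y_lib)
base = rmsep(coef, X_site[20:], y_site[20:])

# Repopulate: add target-site samples two at a time and recalibrate.
for k in range(2, 21, 2):
    coef = calibrate(np.vstack([X_lib, X_site[:k]]),
                     np.concatenate([y_lib, y_site[:k]]))
repop = rmsep(coef, X_site[20:], y_site[20:])
```

In the study's setting the interesting comparison is how `base` versus `repop` behaves as the library fraction used for the initial calibration shrinks; the sketch only shows the mechanics of one such sequence.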
The effect of between-breast differences on human milk macronutrients content.
Pines, N; Mandel, D; Mimouni, F B; Moran Lev, H; Mangel, L; Lubetzky, R
2016-07-01
Little is known about the effect of maternal handedness and preferential side of breastfeeding upon macronutrients concentration in human milk (HM). We aimed to compare macronutrients content of HM from both breasts, taking into account the self-reported preferential feeding ('dominant') breast, breast size and handedness (right versus left). We tested the null hypothesis that macronutrients content of HM is not affected by breast dominancy, breast size or maternal handedness. Fifty-seven lactating mothers were recruited. HM macronutrients were measured after mid manual expression using infrared transmission spectroscopy. Out of the 57 mothers recruited, 12 were excluded from the analyses because they brought in insufficient samples. Among the 22 who reported a size difference, 16 (73%) had a larger left breast (P<0.001). Approximately a third of women reported no breastfeeding side dominance, a third reported a right dominance and another third reported a left dominance. Breastfeeding side dominance was unaffected by either handedness or breast size. When size asymmetry was reported (n=22) the dominant side was also the larger breast in 16 (73%) women, the smaller breast in 2 (9%) women, whereas 4 (18%) additional women with asymmetry had no preferential breastfeeding side. There were no statistically significant differences in macronutrients between the right and the left breasts. In multiple stepwise backward regression analysis, fat, carbohydrate, protein and energy contents were unaffected by maternal handedness, breast side dominance or breast size asymmetry. Macronutrients content of mid expression HM is unaffected by maternal handedness, breast size or breast side dominance.
Komilis, Dimitrios; Evangelou, Alexandros; Giannakis, Georgios; Lymperis, Constantinos
2012-03-01
In this work, the elemental content (C, N, H, S, O), the organic matter content and the calorific value of various organic components that are commonly found in the municipal solid waste stream were measured. The objective of this work was to develop an empirical equation to describe the calorific value of the organic fraction of municipal solid waste as a function of its elemental composition. The MSW components were grouped into paper wastes, food wastes, yard wastes and plastics. Sample sizes ranged from 0.2 to 0.5 kg. In addition to the above individual components, commingled municipal solid wastes were sampled from a bio-drying facility located in Crete (sample sizes ranged from 8 to 15 kg) and were analyzed for the same parameters. Based on the results of this work, an improved empirical model was developed that revealed that carbon, hydrogen and oxygen were the only statistically significant predictors of calorific value. Total organic carbon was statistically similar to total carbon for most materials in this work. The carbon to organic matter ratio of 26 municipal solid waste substrates and of 18 organic composts varied from 0.40 to 0.99. An approximate chemical empirical formula calculated for the organic fraction of commingled municipal solid wastes was C(32)NH(55)O(16). Copyright © 2011 Elsevier Ltd. All rights reserved.
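The derivation of an approximate empirical formula such as C(32)NH(55)O(16) from measured elemental composition can be sketched as follows: convert each element's mass fraction to moles and normalize to nitrogen = 1. The mass fractions below are hypothetical, chosen only to illustrate the calculation, not the paper's data:

```python
# Standard atomic weights (g/mol)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def empirical_formula(mass_fractions):
    """Return mole ratios of each element, normalized so the formula contains one N atom."""
    moles = {el: f / ATOMIC_WEIGHT[el] for el, f in mass_fractions.items()}
    ref = moles["N"]
    return {el: m / ref for el, m in moles.items()}

# Hypothetical elemental mass fractions of an organic waste sample
fractions = {"C": 0.50, "H": 0.07, "N": 0.018, "O": 0.33}
formula = empirical_formula(fractions)
print({el: round(v, 1) for el, v in formula.items()})  # roughly C32 H54 N1 O16
```

Sulfur or ash could be added to the dictionary in the same way if reported.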
An inventory of nursing education research.
Yonge, Olive J; Anderson, Marjorie; Profetto-McGrath, Joanne; Olson, Joanne K; Skillen, D Lynn; Boman, Jeanette; Ranson Ratusz, Ann; Anderson, Arnette; Slater, Linda; Day, Rene
2005-01-01
To describe nursing education research literature in terms of quality, content areas under investigation, geographic location of the research, research designs utilized, sample sizes, instruments used to collect data, and funding sources. Quantitative and qualitative research literature published between January 1991 and December 2000 was identified and classified using an author-generated Relevance Tool. 1286 articles were accepted and entered into the inventory, and an additional 22 were retained as references as they were either literature reviews or meta-analyses. Not surprisingly, 90% of nursing education research was generated in North America and Europe, the industrialised parts of the world. Of the total number of articles accepted into the inventory, 61% were quantitative research based. The bulk of the research was conducted within the confines of a course or within a program, with more than half based in educational settings. Sample sizes of the research conducted were diverse, with a bare majority using a sample between 50 and 99 participants. More than half of the studies used questionnaires to obtain data. Surprisingly, 80% of the research represented in these articles was not funded. The number of publications of nursing education research generated yearly stabilised at approximately 120 per year. Research programs on teaching and learning environments and practice in nursing education need to be developed. Lobbying is needed to increase funding for this type of research at national and international levels.
Raman analysis of an impacted α-GeO2-H2O mixture
NASA Astrophysics Data System (ADS)
Rosales, Ivonne; Thions-Renero, Claude; Martinez, Erendira; Agulló-Rueda, Fernando; Bucio, Lauro; Orozco, Eligio
2012-09-01
Through a Raman analysis, we detected polymorphism at high pressure on mixtures of α-GeO2 microcrystalline powder and water under impact experiments with a single-stage gas gun. The Raman measurements taken from recovered samples show two vibrational modes associated with water-related species. After the impact, the α-GeO2 crystallites were approximately 10 times larger, showing molten zones and many porous faces. Raman examination showed some unknown peaks possibly associated with other GeO2 polymorphs detected by X-ray diffraction experiments and perhaps stabilized in the pores of the α-GeO2 crystallites.
Optical spectroscopy of interplanetary dust collected in the earth's stratosphere
NASA Technical Reports Server (NTRS)
Fraundorf, P.; Patel, R. I.; Shirck, J.; Walker, R. M.; Freeman, J. J.
1980-01-01
Optical absorption spectra of interplanetary dust particles 2-30 microns in size collected in the atmosphere at an altitude of 20 km by inertial impactors mounted on NASA U-2 aircraft are reported. Fourier transform absorption spectroscopy of crushed samples of the particles reveals a broad feature in the region 1300-800 kaysers which has also been found in meteorite and cometary dust spectra, and a weak iron crystal field absorption band at approximately 9800 kaysers, as is observed in meteorites. Work is currently in progress to separate the various components of the interplanetary dust particles in order to evaluate separately their contributions to the absorption.
A New Population Estimate for the Florida Scrub Jay on Merritt Island National Wildlife Refuge
NASA Technical Reports Server (NTRS)
Breininger, David R.
1989-01-01
The variable circular plot method was used to sample avifauna within different vegetation types determined from aerial imagery. The Florida Scrub Jay (Aphelocoma coerulescens coerulescens) population was estimated to range between 1,415 and 3,603 birds. Approximately half of the scrub and slash pine habitat appeared to be unused by Florida Scrub Jays, probably because the slash pine cover was too dense or the oak cover was too sparse. Results from the study suggest that the entire state population may be much lower than believed because the size of two of the three largest populations may have been overestimated.
Microjetting from grooved surfaces in metallic samples subjected to laser driven shocks
NASA Astrophysics Data System (ADS)
de Rességuier, T.; Lescoute, E.; Sollier, A.; Prudhomme, G.; Mercier, P.
2014-01-01
When a shock wave propagating in a solid sample reflects from a free surface, geometrical effects predominantly governed by the roughness and defects of that surface may lead to the ejection of tiny jets that may break up into high velocity, approximately micrometer-size fragments. This process, referred to as microjetting, is a major safety issue for engineering applications such as pyrotechnics or armour design. Thus, it has been widely studied both experimentally, under explosive and impact loading, and theoretically. In this paper, microjetting is investigated in the specific loading conditions associated with laser shocks: very short duration of pressure application, very high strain rates, and small spatial scales. Material ejection from triangular grooves in the free surface of various metallic samples is studied by combining transverse optical shadowgraphy and time-resolved velocity measurements. The influences of the main parameters (groove angle, shock pressure, nature of the metal) on jet formation and ejection velocity are quantified, and the results are compared to theoretical estimates.
Elasticity of microscale volumes of viscoelastic soft matter by cavitation rheometry
NASA Astrophysics Data System (ADS)
Pavlovsky, Leonid; Ganesan, Mahesh; Younger, John G.; Solomon, Michael J.
2014-09-01
Measurement of the elastic modulus of soft, viscoelastic liquids with cavitation rheometry is demonstrated for specimens as small as 1 μl by application of elasticity theory and experiments on semi-dilute polymer solutions. Cavitation rheometry is the extraction of the elastic modulus of a material, E, by measuring the pressure necessary to create a cavity within it [J. A. Zimberlin, N. Sanabria-DeLong, G. N. Tew, and A. J. Crosby, Soft Matter 3, 763-767 (2007)]. This paper extends cavitation rheometry in three ways. First, we show that viscoelastic samples can be approximated with the neo-Hookean model provided that the time scale of the cavity formation is measured. Second, we extend the cavitation rheometry method to accommodate cases in which the sample size is no longer large relative to the cavity dimension. Finally, we implement cavitation rheometry to show that the theory accurately measures the elastic modulus of viscoelastic samples with volumes ranging from 4 ml to as low as 1 μl.
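The inversion at the heart of cavitation rheometry can be sketched as below, assuming the neo-Hookean critical-pressure relation Pc = (5/6)E + 2γ/r reported by Zimberlin et al. (surface tension γ, needle radius r); the measurement values used here are hypothetical:

```python
def elastic_modulus(p_critical, surface_tension, needle_radius):
    """Invert the neo-Hookean cavitation relation Pc = (5/6)*E + 2*gamma/r for E (Pa)."""
    return 6.0 / 5.0 * (p_critical - 2.0 * surface_tension / needle_radius)

# Hypothetical measurement: Pc = 2000 Pa, gamma = 0.05 N/m, needle radius = 100 um
E = elastic_modulus(2000.0, 0.05, 100e-6)
print(round(E))  # → 1200 (Pa)
```

The small-volume extension in the paper adds finite-sample-size corrections to this large-sample relation, which are not reproduced here.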
The distribution of galaxies within the 'Great Wall'
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1992-01-01
The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed, and the local correlation length is estimated to be approximately 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their roughly 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l approximately 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.
Robustness of survival estimates for radio-marked animals
Bunck, C.M.; Chen, C.-L.
1992-01-01
Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests was similar.
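The unmodified Kaplan-Meier estimator evaluated in the study can be sketched as follows; the radio-tracking data below are hypothetical, and the paper's modification for a re-location probability below one is not shown:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times:  time of death or censoring for each animal
    events: 1 if the death was observed, 0 if censored (e.g. transmitter failure)
    Returns (distinct event times, survival probability after each event time)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    out_times, out_surv = [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, _ in data if tt == t)  # deaths plus censored at t
        if deaths > 0:
            surv *= (n_at_risk - deaths) / n_at_risk
            out_times.append(t)
            out_surv.append(surv)
        n_at_risk -= removed
        i += removed
    return out_times, out_surv

# Hypothetical telemetry data: weeks until death (1) or loss of signal (0)
times = [2, 3, 3, 5, 8, 8, 9]
events = [1, 1, 0, 1, 1, 0, 0]
t, s = kaplan_meier(times, events)
print(t)  # → [2, 3, 5, 8]
```

Treating signal loss as censoring rather than death is exactly the assumption the study probes when the re-location probability is less than one.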
Casula, M F; Concas, G; Congiu, F; Corrias, A; Loche, D; Marras, C; Spano, G
2011-11-01
Stoichiometric magnetic nanosized ferrites MFe2O4 (M = Mn, Co, Ni) were prepared in the form of nearly spherical nanocrystals supported on a highly porous silica aerogel matrix by a sol-gel procedure. X-ray diffraction and transmission electron microscopy indicate that these materials consist of non-agglomerated ferrite nanocrystals with sizes in the 5-10 nm range. Mössbauer spectroscopy was used to gain insight into the superparamagnetic relaxation and the inversion degree. Magnetic ordering at room temperature varies from superparamagnetic in the NiFe2O4 sample, to highly blocked (approximately 70%) in the MnFe2O4 sample, to nearly fully blocked in the CoFe2O4 sample. A fitting procedure of the Mössbauer data has been used in order to resolve the spectrum into the tetrahedral and octahedral components; in this way, an inversion degree of 0.68 (very close to bulk values) was obtained for 6 nm silica-supported CoFe2O4 nanocrystals.