How Large Should a Statistical Sample Be?
ERIC Educational Resources Information Center
Menil, Violeta C.; Ye, Ruili
2012-01-01
This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. From a biometrical point of view, such projects should therefore aim at an optimal sample size. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies and formulates some requirements to fundamentally regulate the process of sample size determination for animal experiments.
Single and simultaneous binary mergers in Wright-Fisher genealogies.
Melfi, Andrew; Viswanath, Divakar
2018-05-01
The Kingman coalescent is a commonly used model in genetics, which is often justified with reference to the Wright-Fisher (WF) model. Current proofs of convergence of WF and other models to the Kingman coalescent assume a constant sample size. However, sample sizes have become quite large in human genetics. Therefore, we develop a convergence theory that allows the sample size to increase with population size. If the haploid population size is N and the sample size is N^(1/3-ε), ε>0, we prove that Wright-Fisher genealogies involve at most a single binary merger in each generation with probability converging to 1 in the limit of large N. Single binary merger or no merger in each generation of the genealogy implies that the Kingman partition distribution is obtained exactly. If the sample size is N^(1/2-ε), Wright-Fisher genealogies may involve simultaneous binary mergers in a single generation but do not involve triple mergers in the large N limit. The asymptotic theory is verified using numerical calculations. Variable population sizes are handled algorithmically. It is found that even distant bottlenecks can increase the probability of triple mergers as well as simultaneous binary mergers in WF genealogies. Copyright © 2018 Elsevier Inc. All rights reserved.
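The per-generation merger patterns described above can be explored with a short simulation: give each of n sampled lineages a parent chosen uniformly from a haploid population of size N and classify the resulting collisions. A minimal sketch (not the authors' code; N and the number of simulated generations are illustrative assumptions, and each trial uses the full sample size rather than a coalescing genealogy):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def merger_pattern(n_lineages, N):
    """One Wright-Fisher generation backwards in time: each of n_lineages picks a
    parent uniformly at random from N individuals; classify the merger pattern."""
    parents = rng.integers(0, N, size=n_lineages)
    family_sizes = [k for k in Counter(parents).values() if k >= 2]
    if any(k >= 3 for k in family_sizes):
        return "triple or larger merger"
    if len(family_sizes) >= 2:
        return "simultaneous binary mergers"
    if len(family_sizes) == 1:
        return "single binary merger"
    return "no merger"

N, n_generations = 100_000, 20_000
for exponent in (1 / 3, 1 / 2):            # sample sizes ~ N**(1/3) and ~ N**(1/2)
    n = int(N ** exponent)
    tally = Counter(merger_pattern(n, N) for _ in range(n_generations))
    print(f"N={N}, n={n}: {dict(tally)}")
```

For the smaller sample size the tally is dominated by "no merger" and occasional single binary mergers, while at n ~ N^(1/2) simultaneous binary mergers appear and three-fold mergers remain rare, in line with the asymptotic statements above.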
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
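The lot-size-400 example can be reproduced directly. Below is a minimal sketch of a zero-acceptance-number plan (not the HYPERSAMP spreadsheet itself): find the smallest sample size n such that the probability of seeing zero nonconforming units, given the assumed number of nonconforming units in the lot, does not exceed the consumer's risk.

```python
from math import ceil, log
from scipy.stats import hypergeom

def min_sample_size_hypergeom(lot_size, fraction_nonconforming, consumer_risk):
    """Smallest n with acceptance number 0 such that
    P(0 nonconforming in sample | lot contains D nonconforming) <= consumer_risk."""
    D = round(lot_size * fraction_nonconforming)
    for n in range(1, lot_size + 1):
        if hypergeom.pmf(0, lot_size, D, n) <= consumer_risk:
            return n
    return lot_size

def min_sample_size_binomial(fraction_nonconforming, consumer_risk):
    """Binomial approximation: smallest n with (1 - p)**n <= consumer_risk."""
    return ceil(log(consumer_risk) / log(1.0 - fraction_nonconforming))

lot, p, risk = 400, 0.01, 0.01                          # values from the example above
n_hyper = min_sample_size_hypergeom(lot, p, risk)
n_binom = min(min_sample_size_binomial(p, risk), lot)   # capped at 100% inspection
print(f"Hypergeometric plan: n = {n_hyper}")            # 273
print(f"Binomial plan:       n = {n_binom}")            # 400, i.e. the whole lot
```

The hypergeometric calculation recovers the 273-unit sample quoted in the abstract, while the binomial approximation demands inspection of the entire lot.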
Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually yield diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, we seek (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., a small university, or limited time to conduct the research project), statistically viable results can still be obtained with less of an investment.
Meta-analysis of genome-wide association from genomic prediction models
USDA-ARS?s Scientific Manuscript database
A limitation of many genome-wide association studies (GWA) in animal breeding is that there are many loci with small effect sizes; thus, larger sample sizes (N) are required to guarantee suitable power of detection. To increase sample size, results from different GWA can be combined in a meta-analys...
Szakács, Zoltán; Mészáros, Tamás; de Jonge, Marien I; Gyurcsányi, Róbert E
2018-05-30
Detection and counting of single virus particles in liquid samples are largely limited to viruses with narrow size distributions and to purified formulations. To address these limitations, we propose here a calibration-free method that concurrently enables the selective recognition, counting, and sizing of virus particles, as demonstrated through the detection of human respiratory syncytial virus (RSV), an enveloped virus with a broad size distribution, in throat swab samples. RSV particles were selectively labeled through their attachment glycoproteins (G) with fluorescent aptamers, which further enabled their identification, sizing, and counting at the single-particle level by fluorescent nanoparticle tracking analysis. The proposed approach seems to be generally applicable to virus detection and quantification. Moreover, it could be successfully applied to detect single RSV particles in swab samples of diagnostic relevance. Since the selective recognition is associated with the sizing of each detected particle, the method can discriminate viral elements linked to the virus as well as various virus forms and associations.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.
Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S
2016-10-01
The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
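The conical behaviour can be illustrated numerically: generate data from a single-spike covariance model and track the angle between the leading sample eigenvector and the true spike direction as the ratio d/(n·spike) grows. A minimal sketch under illustrative parameter choices (not the authors' code, and only a single-spike special case of the models treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def leading_angle_deg(n, d, spike):
    """Angle between the leading sample eigenvector and the true spike direction e_1
    for data from N(0, diag(1 + spike, 1, ..., 1)), computed via the n x n Gram matrix."""
    X = rng.standard_normal((n, d))
    X[:, 0] *= np.sqrt(1.0 + spike)                 # inflate variance along the spike direction
    gram = X @ X.T / n                              # shares its nonzero spectrum with X^T X / n
    w = np.linalg.eigh(gram)[1][:, -1]              # leading eigenvector of the Gram matrix
    v = X.T @ w                                     # corresponding eigenvector in feature space
    v /= np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(abs(v[0]), 0.0, 1.0)))

n, spike = 50, 200.0                                # illustrative sample size and spike strength
for d in (100, 1_000, 10_000):
    angles = [leading_angle_deg(n, d, spike) for _ in range(20)]
    print(f"d={d:>6}, d/(n*spike)={d / (n * spike):5.2f}: mean angle ~ {np.mean(angles):5.1f} deg")
```

As the ratio of dimension to the product of sample size and spike size increases, the average angle opens up toward a nonzero limit, which is the conical structure described in the abstract.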
Glass frit nebulizer for atomic spectrometry
Layman, L.R.
1982-01-01
The nebulization of sample solutions is a critical step in most flame or plasma atomic spectrometric methods. A novel nebulization technique, based on a porous glass frit, has been investigated. Basic operating parameters and characteristics have been studied to determine how this new nebulizer may be applied to atomic spectrometric methods. The results of preliminary comparisons with pneumatic nebulizers indicate several notable differences. The frit nebulizer produces a smaller droplet size distribution and has a higher sample transport efficiency. The mean droplet size is approximately 0.1 µm, and up to 94% of the sample is converted to usable aerosol. The most significant limitations in the performance of the frit nebulizer are the slow sample equilibration time and the requirement for wash cycles between samples. Loss of solute by surface adsorption and contamination of samples by leaching from the glass were both found to be limitations only in unusual cases. This nebulizer shows great promise where sample volume is limited or where measurements require long nebulization times.
Measuring Endocrine-active Chemicals at ng/L Concentrations in Water
Analytical chemistry challenges for supporting aquatic toxicity research and risk assessment are many: need for low detection limits, complex sample matrices, small sample size, and equipment limitations to name a few. Certain types of potent endocrine disrupting chemicals (EDCs)...
NASA Astrophysics Data System (ADS)
Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein
2018-06-01
To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1, and 2 Å/s with the substrate temperature held at 25, 75, or 125 °C. All samples were anodized for 120 s in a 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation for samples deposited at room temperature at different rates is roughly 2 nm in both the thin film and the porous template, but it rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5, and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15, and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects are other factors that affect the pore formation process and pore size distribution by limiting the initial current density.
A low-volume cavity ring-down spectrometer for sample-limited applications
NASA Astrophysics Data System (ADS)
Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.
2014-08-01
In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.
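The quoted effective volumes follow from scaling the 9.6 ml geometric cell volume by the ratio of cell pressure to ambient pressure, and the sample renewal time is then roughly the effective volume divided by the inlet flow rate at ambient conditions. A quick check of those numbers; the flow rate is an illustrative assumption, not a value from the paper:

```python
GEOMETRIC_VOLUME_ML = 9.6          # geometric sample cell volume from the abstract
AMBIENT_TORR = 760.0
flow_ml_per_min = 5.0              # assumed inlet flow at ambient conditions (illustrative)

for cell_torr in (140.0, 45.0, 20.0):
    # gas held in the cell, expressed as an equivalent ambient-pressure volume
    effective_ml = GEOMETRIC_VOLUME_ML * cell_torr / AMBIENT_TORR
    # approximate e-folding renewal time of the sample in the cell
    renewal_s = 60.0 * effective_ml / flow_ml_per_min
    print(f"{cell_torr:5.0f} Torr: effective volume ~ {effective_ml:4.2f} ml, "
          f"renewal time ~ {renewal_s:4.1f} s at {flow_ml_per_min} ml/min")
```

The computed effective volumes (about 1.8, 0.57, and 0.25 ml) match the values quoted in the abstract, and the renewal times show why lowering the cell pressure helps track fast variations in a small sample stream.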
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
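For bounded observations, a Bernstein-type tail bound can be inverted into a confidence interval whose coverage holds at every sample size, which is the idea behind the approach described above. A minimal sketch for variables known to lie in [a, b], using the conservative bounds |X − μ| ≤ b − a and Var(X) ≤ (b − a)²/4 (assumptions of this sketch; the paper's construction differs in detail):

```python
import math
import numpy as np

def bernstein_ci(x, a, b, alpha=0.05):
    """Two-sided confidence interval for the mean of observations bounded in [a, b],
    obtained by inverting Bernstein's inequality with worst-case variance.
    Coverage >= 1 - alpha at every sample size."""
    x = np.asarray(x, dtype=float)
    n = x.size
    c = b - a                        # bound on |X - mu|
    var_bound = (b - a) ** 2 / 4.0   # worst-case variance of a bounded variable
    L = math.log(2.0 / alpha)
    half_width = L * c / (3.0 * n) + math.sqrt((L * c / (3.0 * n)) ** 2
                                               + 2.0 * var_bound * L / n)
    m = x.mean()
    return max(a, m - half_width), min(b, m + half_width)

rng = np.random.default_rng(2)
x = rng.binomial(1, 0.3, size=30)                # small sample of 0/1 outcomes
print("Bernstein-type CI:", bernstein_ci(x, 0.0, 1.0))
# The usual normal-approximation (Wald) interval, which has no finite-sample guarantee:
se = x.std(ddof=1) / math.sqrt(x.size)
print("Wald CI:          ", (x.mean() - 1.96 * se, x.mean() + 1.96 * se))
```

Running this shows the trade-off the abstract describes: the guaranteed-coverage interval is noticeably wider than the standard one at small n.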
In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2012-12-01
Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
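The undersampling effect can be illustrated with synthetic catalogs: draw repeated 51-event catalogs from a pure (untruncated) Pareto distribution and note how unstable the observed maximum is, which is why a record of that length gives essentially no constraint on a limiting size. A minimal sketch with illustrative parameter values, not fitted to the Miyako record:

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto_catalog(n_events, threshold, alpha):
    """Catalog of n_events sizes drawn from a pure Pareto with
    survival function P(X > x) = (threshold / x)**alpha for x >= threshold."""
    u = rng.uniform(size=n_events)
    return threshold * u ** (-1.0 / alpha)   # inverse-CDF sampling

n_events, threshold, alpha = 51, 0.5, 1.0    # illustrative catalog length and Pareto parameters
maxima = [pareto_catalog(n_events, threshold, alpha).max() for _ in range(1_000)]
q5, q50, q95 = np.percentile(maxima, [5, 50, 95])
print(f"catalog maximum over 1000 synthetic 51-event catalogs: "
      f"5th pct {q5:.1f}, median {q50:.1f}, 95th pct {q95:.1f}")
```

The catalog maximum varies over orders of magnitude between synthetic catalogs, illustrating why short records cannot reliably distinguish a pure Pareto from a truncated or tapered one.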
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
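The re-estimation step can be sketched with the standard design-effect formula: the individually randomized sample size is inflated by 1 + (m − 1)ρ, where m is the cluster size and ρ the intracluster correlation coefficient, and the internal pilot simply replaces the planning values of the variance and ρ with estimates. A minimal illustration, not the authors' procedure; all numbers are chosen for illustration:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm_cluster(delta, sd, icc, cluster_size, alpha=0.05, power=0.80):
    """Participants per arm for a two-arm comparison of means in a cluster
    randomised trial: normal-approximation sample size times the design
    effect 1 + (m - 1) * ICC."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * (z * sd / delta) ** 2          # per arm, ignoring clustering
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(n_individual * design_effect)

# Planning stage: assumed SD and ICC.
print("planned:", n_per_arm_cluster(delta=0.5, sd=1.0, icc=0.02, cluster_size=30))
# Internal pilot: re-estimated SD and ICC give a revised target within the same clusters.
print("revised:", n_per_arm_cluster(delta=0.5, sd=1.2, icc=0.05, cluster_size=30))
```

Because the number of clusters is fixed, the revised target is met by recruiting more individuals per cluster, which is exactly the setting where active recruitment within clusters makes an internal pilot attractive.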
Kohigashi, Tsuyoshi; Otsuka, Yoichi; Shimazu, Ryo; Matsumoto, Takuya; Iwata, Futoshi; Kawasaki, Hideya; Arakawa, Ryuichi
2016-01-01
Mass spectrometry imaging (MSI) with ambient sampling and ionization can rapidly and easily capture the distribution of chemical components in a solid sample. Because the spatial resolution of MSI is limited by the size of the sampling area, reducing the sampling size is an important goal for high-resolution MSI. Here, we report the first use of a nanopipette for sampling and ionization by tapping-mode scanning probe electrospray ionization (t-SPESI). The spot size of the sampling area of a dye molecular film on a glass substrate was decreased to 6 μm on average by using a nanopipette. On the other hand, ionization efficiency increased with decreasing solvent flow rate. Our results indicate the compatibility between a reduced sampling area and high ionization efficiency using a nanopipette. MSI of micropatterns of ink on a glass and a polymer substrate was also demonstrated. PMID:28101441
Chow, Jeffrey T Y; Turkstra, Timothy P; Yim, Edmund; Jones, Philip M
2018-06-01
Although every randomized clinical trial (RCT) needs participants, determining the ideal number of participants that balances limited resources and the ability to detect a real effect is difficult. Focussing on two-arm, parallel group, superiority RCTs published in six general anesthesiology journals, the objective of this study was to compare the quality of sample size calculations for RCTs published in 2010 vs 2016. Each RCT's full text was searched for the presence of a sample size calculation, and the assumptions made by the investigators were compared with the actual values observed in the results. Analyses were only performed for sample size calculations that were amenable to replication, defined as using a clearly identified outcome that was continuous or binary in a standard sample size calculation procedure. The percentage of RCTs reporting all sample size calculation assumptions increased from 51% in 2010 to 84% in 2016. The difference between the values observed in the study and the expected values used for the sample size calculation for most RCTs was usually > 10% of the expected value, with negligible improvement from 2010 to 2016. While the reporting of sample size calculations improved from 2010 to 2016, the expected values in these sample size calculations often assumed effect sizes larger than those actually observed in the study. Since overly optimistic assumptions may systematically lead to underpowered RCTs, improvements in how to calculate and report sample sizes in anesthesiology research are needed.
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
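The kind of computer-simulated experiment described can be reproduced in a few lines: repeatedly draw two samples of size n, apply a two-sample t-test at p = 5%, and estimate the Type I error (no true effect) and Type II error (a true effect present). A minimal sketch; the effect size, normal distribution, and replicate count are illustrative assumptions rather than the authors' settings:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

def error_rates(n, effect, n_rep=10_000, alpha=0.05):
    """Monte Carlo Type I and Type II error of the two-sample t-test at sample size n."""
    type1 = type2 = 0
    for _ in range(n_rep):
        a = rng.normal(0.0, 1.0, n)
        b_null = rng.normal(0.0, 1.0, n)        # no true effect
        b_alt = rng.normal(effect, 1.0, n)      # a true effect of `effect` standard deviations
        if ttest_ind(a, b_null).pvalue < alpha:
            type1 += 1
        if ttest_ind(a, b_alt).pvalue >= alpha:
            type2 += 1
    return type1 / n_rep, type2 / n_rep

for n in (3, 5, 9):
    t1, t2 = error_rates(n, effect=1.0)
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```

The Type I error stays near the nominal 5% regardless of n, while the Type II error drops sharply between n = 3 and n = 9, which is the qualitative pattern behind the recommendation above.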
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Measuring restriction sizes using diffusion weighted magnetic resonance imaging: a review.
Martin, Melanie
2013-01-01
This article reviews a new concept in magnetic resonance as applied to cellular and biological systems. Diffusion weighted magnetic resonance imaging can be used to infer information about restriction sizes of samples being measured. The measurements rely on the apparent diffusion coefficient changing with diffusion times as measurements move from restricted to free diffusion regimes. Pulsed gradient spin echo (PGSE) measurements are limited in the ability to shorten diffusion times and thus are limited in restriction sizes which can be probed. Oscillating gradient spin echo (OGSE) measurements could provide shorter diffusion times so smaller restriction sizes could be probed.
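The restriction sizes probed are set by the root-mean-square displacement during the diffusion time, roughly sqrt(2DΔ) per dimension for free diffusion, which is why the shorter diffusion times accessible to OGSE probe smaller restrictions than PGSE. A quick illustration, assuming a free diffusivity of about 2 × 10⁻⁹ m²/s (an illustrative value for water near room temperature, not a figure from this review):

```python
import math

D = 2.0e-9                            # m^2/s, assumed free water diffusivity (illustrative)

for delta_ms in (50.0, 10.0, 1.0):    # representative PGSE versus OGSE diffusion times
    delta_s = delta_ms * 1e-3
    rms_um = math.sqrt(2.0 * D * delta_s) * 1e6   # 1-D rms displacement in micrometres
    print(f"diffusion time {delta_ms:5.1f} ms -> rms displacement ~ {rms_um:4.1f} um")
```

Dropping the diffusion time from tens of milliseconds to about a millisecond reduces the probed length scale from roughly ten micrometres to a couple of micrometres, which is the cellular scale of interest.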
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range: -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
A novel measure of effect size for mediation analysis.
Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken
2018-06-01
Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
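The subsampling experiment can be mimicked with synthetic data: simulate log-log measurements with a known allometric slope, subsample at decreasing sizes, and record how often isometry (slope = 1) is rejected. A minimal sketch; the slope, scatter, and sample sizes are illustrative assumptions and the data are synthetic rather than the Alligator measurements:

```python
import numpy as np
from scipy.stats import linregress, t as t_dist

rng = np.random.default_rng(5)
TRUE_SLOPE, NOISE_SD = 1.15, 0.08     # mild positive allometry with log-scale scatter (assumed)

def detection_rate(sample_size, n_rep=2_000, alpha=0.05):
    """Fraction of synthetic datasets in which isometry (slope = 1) is rejected."""
    detected = 0
    for _ in range(n_rep):
        log_x = rng.uniform(0.0, 1.5, sample_size)                       # e.g. log skull length
        log_y = TRUE_SLOPE * log_x + rng.normal(0.0, NOISE_SD, sample_size)
        fit = linregress(log_x, log_y)
        t_stat = (fit.slope - 1.0) / fit.stderr                          # test H0: slope == 1
        if 2.0 * t_dist.sf(abs(t_stat), sample_size - 2) < alpha:
            detected += 1
    return detected / n_rep

for n in (8, 15, 30, 60):
    print(f"n={n:2d}: allometry detected in {100 * detection_rate(n):5.1f}% of datasets")
```

At the smallest sample sizes the true allometric signal frequently goes undetected (Type II error), reproducing the false impression of isometry that the abstract warns about.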
ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS
Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...
Flow field-flow fractionation for the analysis of nanoparticles used in drug delivery.
Zattoni, Andrea; Roda, Barbara; Borghi, Francesco; Marassi, Valentina; Reschiglian, Pierluigi
2014-01-01
Structured nanoparticles (NPs) with controlled size distribution and novel physicochemical features present fundamental advantages as drug delivery systems with respect to bulk drugs. NPs can transport and release drugs to target sites with high efficiency and limited side effects. Regulatory institutions such as the US Food and Drug Administration (FDA) and the European Commission have pointed out that major limitations to the real application of current nanotechnology lie in the lack of homogeneous, pure and well-characterized NPs, also because of the lack of well-assessed, robust routine methods for their quality control and characterization. Many properties of NPs are size-dependent, thus the particle size distribution (PSD) plays a fundamental role in determining the NP properties. At present, scanning and transmission electron microscopy (SEM, TEM) are among the most used techniques to size characterize NPs. Size-exclusion chromatography (SEC) is also applied to the size separation of complex NP samples. SEC selectivity is, however, quite limited for very large molar mass analytes such as NPs, and interactions with the stationary phase can alter NP morphology. Flow field-flow fractionation (F4) is increasingly used as a mature separation method to size sort and characterize NPs in native conditions. Moreover, the hyphenation with light scattering (LS) methods can enhance the accuracy of size analysis of complex samples. In this paper, the applications of F4-LS to NP analysis used as drug delivery systems for their size analysis, and the study of stability and drug release effects are reviewed. Copyright © 2013 Elsevier B.V. All rights reserved.
How Sample Size Affects a Sampling Distribution
ERIC Educational Resources Information Center
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^-11 Am^2 the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
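The upper-confidence-limit idea can be written down directly: since (n − 1)s²/σ² follows a chi-square distribution with n − 1 degrees of freedom, the 100·level% UCL of the SD from a pilot sample of size n is s·sqrt((n − 1)/χ²_{1−level, n−1}), and this value replaces the sample SD in the usual two-group formula. A minimal sketch with illustrative pilot values (not the authors' code):

```python
from math import ceil, sqrt
from scipy.stats import chi2, norm

def ucl_sd(sample_sd, n, level=0.80):
    """Upper confidence limit of the population SD from a pilot sample of size n,
    based on (n - 1) s^2 / sigma^2 ~ chi-square(n - 1)."""
    return sample_sd * sqrt((n - 1) / chi2.ppf(1.0 - level, n - 1))

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-group comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd / delta) ** 2)

pilot_sd, pilot_n, delta = 40.0, 20, 22.0      # illustrative pilot results and target difference
print("using the pilot SD directly:", n_per_group(delta, pilot_sd))
print("using the 80% UCL of the SD:", n_per_group(delta, ucl_sd(pilot_sd, pilot_n, 0.80)))
```

The inflated SD leads to a noticeably larger planned sample, which is the price of raising the chance that the achieved power actually reaches the planned 80%.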
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
NASA Astrophysics Data System (ADS)
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
The frictional behaviour of rocks, from the initial stage of loading to the final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention, mainly because of limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane was captured for the analysis, with particular interest in sample-size dependency. Several important phenomena were observed: a) the rate of transition from brittleness to ductility is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
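The central calculation is the probability that a candidate decision rule classifies an area as adequate when responses are clustered, which a beta-binomial model captures once an intracluster correlation is specified. A minimal sketch of that calculation; the decision rule, cluster configuration, prevalence thresholds, and ICC values are illustrative and are not taken from the Rwanda application:

```python
import numpy as np
from scipy.stats import betabinom

def prob_classified_adequate(n_clusters, cluster_size, decision_rule, prevalence, icc):
    """P(total successes >= decision_rule) when each cluster's count is
    beta-binomial(cluster_size, a, b) with mean `prevalence` and intracluster
    correlation `icc`, and clusters are sampled independently."""
    a = prevalence * (1.0 - icc) / icc
    b = (1.0 - prevalence) * (1.0 - icc) / icc
    cluster_pmf = betabinom.pmf(np.arange(cluster_size + 1), cluster_size, a, b)
    total_pmf = np.array([1.0])
    for _ in range(n_clusters):                 # pmf of the total via repeated convolution
        total_pmf = np.convolve(total_pmf, cluster_pmf)
    return total_pmf[decision_rule:].sum()

# Illustrative system: 10 clusters of 6 (n = 60), classify as "adequate" if >= 50 successes.
for p, label in ((0.90, "upper threshold"), (0.75, "lower threshold")):
    for rho in (0.001, 0.10):
        prob = prob_classified_adequate(10, 6, 50, p, rho)
        print(f"{label} p={p:.2f}, ICC={rho:.3f}: P(classified adequate) = {prob:.3f}")
```

Raising the ICC widens the distribution of the total count, so both misclassification risks grow relative to the simple-random-sampling case, which is why cluster sampling requires larger C-LQAS sample sizes for the same error constraints.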
Harrison, Sean; Jones, Hayley E; Martin, Richard M; Lewis, Sarah J; Higgins, Julian P T
2017-09-01
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. Copyright © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
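The contours underlying such a plot come from inverting a test statistic: for a simple two-group comparison of means with equal group sizes, a two-sided P value and total sample size N correspond approximately to a standardized difference d = 2·z(1 − P/2)/sqrt(N). A minimal sketch that draws a few such contours; the equal-group, normal-approximation design is a simplification of the constructions used in the paper:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

total_n = np.arange(10, 2001)

plt.figure(figsize=(5, 4))
for p in (0.05, 0.01, 0.001):
    # standardized mean difference whose two-sided P value equals p at total sample size N
    d = 2.0 * norm.ppf(1.0 - p / 2.0) / np.sqrt(total_n)
    plt.plot(d, total_n, label=f"two-sided P = {p}")
plt.xlabel("standardized mean difference")
plt.ylabel("total sample size")
plt.yscale("log")
plt.legend()
plt.title("Effect-size contours for given P values")
plt.tight_layout()
plt.show()
```

Plotting each study's reported P value against its sample size on top of such contours gives an approximate sense of the effect size each study implies, even when no comparable effect estimate can be extracted.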
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review
Morris, Tom; Gray, Laura
2017-01-01
Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Primary and secondary outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22–0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. PMID:29146637
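One commonly quoted adjustment, derived for parallel-group cluster trials rather than the stepped-wedge case this review highlights as unresolved, replaces the usual design effect 1 + (m̄ − 1)ρ with 1 + ((CV² + 1)·m̄ − 1)·ρ, where CV is the coefficient of variation of cluster sizes. A minimal sketch using the median CV of 0.41 reported above; the mean cluster size and ICC are illustrative assumptions:

```python
def design_effect(mean_cluster_size, icc, cv=0.0):
    """Approximate design effect allowing for unequal cluster sizes:
    DE = 1 + ((cv**2 + 1) * mean_cluster_size - 1) * icc.
    With cv = 0 this reduces to the usual 1 + (m - 1) * icc."""
    return 1.0 + ((cv ** 2 + 1.0) * mean_cluster_size - 1.0) * icc

m_bar, icc = 40, 0.05             # illustrative mean cluster size and ICC
for cv in (0.0, 0.41, 0.8):       # 0.41 is the median CV reported in the review
    print(f"CV={cv:.2f}: design effect = {design_effect(m_bar, icc, cv):.2f}")
```

Even the median observed CV adds a non-trivial inflation relative to equal cluster sizes, which is why ignoring cluster-size variation during sample size calculation can leave a trial underpowered.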
Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem
ERIC Educational Resources Information Center
Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari
2007-01-01
Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
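A simulation in the spirit of this design is straightforward with standard tests; the sketch below (illustrative parameters, not the study's) estimates how often the Shapiro-Wilk and D'Agostino-Pearson tests reject at n = 30 for Gaussian versus lognormal samples.

```python
# Rejection rates of Shapiro-Wilk and D'Agostino-Pearson normality tests at n = 30,
# for Gaussian and lognormal parent populations (illustrative parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 30, 1000, 0.05

def rejection_rates(draw):
    rej_sw = rej_dp = 0
    for _ in range(reps):
        x = draw(n)
        rej_sw += stats.shapiro(x)[1] < alpha      # Shapiro-Wilk p-value
        rej_dp += stats.normaltest(x)[1] < alpha   # D'Agostino-Pearson p-value
    return rej_sw / reps, rej_dp / reps

gauss = rejection_rates(lambda n: rng.normal(10, 2, n))
lognorm = rejection_rates(lambda n: rng.lognormal(0, 1, n))
print("Rejection rate, Gaussian samples (should be near alpha):", gauss)
print("Rejection rate, lognormal samples (ability to flag non-Gaussian data):", lognorm)
```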
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases for the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k's associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
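For the one-sided case, the tolerance factor has a well-known closed form as a noncentral t quantile divided by sqrt(n), which the short sketch below evaluates; this reproduces the standard relationship consistent with the abstract, not the NASA program itself.

```python
# One-sided normal tolerance factor k from the noncentral t distribution
# (standard textbook relationship; not the NASA code).
from math import sqrt
from scipy.stats import norm, nct

def k_one_sided(n, coverage=0.90, confidence=0.95):
    """k such that xbar + k*s bounds at least `coverage` of a normal population
    with the stated confidence, for a sample of size n with unknown mean and variance."""
    delta = norm.ppf(coverage) * sqrt(n)               # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / sqrt(n)

for n in (5, 10, 20, 50):
    print(n, round(k_one_sided(n), 4))
```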
Problems and Limitations in Studies on Screening for Language Delay
ERIC Educational Resources Information Center
Eriksson, Marten; Westerlund, Monica; Miniscalco, Carmela
2010-01-01
This study discusses six common methodological limitations in screening for language delay (LD) as illustrated in 11 recent studies. The limitations are (1) whether the studies define a target population, (2) whether the recruitment procedure is unbiased, (3) attrition, (4) verification bias, (5) small sample size and (6) inconsistencies in choice…
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that the chance of separation occurrence under the MLE method is higher than under Firth's approach for small sample sizes. On the other hand, for larger sample sizes, the probability decreases and is relatively similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
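A compact way to see the issue is to fit a probit model to a small simulated dataset with complete separation, once by plain maximum likelihood and once with a Firth-type (Jeffreys-prior) penalty of 0.5·log|X'WX| added to the log-likelihood. The sketch below is illustrative only and is not the study's implementation.

```python
# Sketch: ordinary ML probit estimates blow up under separation, while a Firth-type
# penalty (adding 0.5*log|X'WX|, the Jeffreys prior for the model) yields finite values.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 25
x = rng.normal(size=n)
y = (x > 0).astype(float)                # x separates y perfectly
X = np.column_stack([np.ones(n), x])

def neg_penalized_loglik(beta, X, y, firth=True):
    eta = X @ beta
    p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    if firth:
        w = norm.pdf(eta) ** 2 / (p * (1 - p))     # probit GLM weights
        info = X.T @ (w[:, None] * X)              # Fisher information X'WX
        ll += 0.5 * np.linalg.slogdet(info)[1]     # Jeffreys-prior penalty
    return -ll

mle = minimize(neg_penalized_loglik, np.zeros(2), args=(X, y, False), method="BFGS")
firth = minimize(neg_penalized_loglik, np.zeros(2), args=(X, y, True), method="BFGS")
print("ML slope (tends to be very large/unstable under separation):", mle.x[1])
print("Firth-type slope (finite):", firth.x[1])
```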
The Mars Orbital Catalog of Hydrated Alteration Signatures (MOCHAS) - Initial release
NASA Astrophysics Data System (ADS)
Carter, John; OMEGA and CRISM Teams
2016-10-01
Aqueous minerals have been identified from orbit at a number of localities, and their analysis has helped refine the water story of Early Mars. They are also a main science driver when selecting current and upcoming landing sites for roving missions. Available catalogs of mineral detections exhibit a number of drawbacks such as a limited sample size (a thousand sites at most), inhomogeneous sampling of the surface and of the investigation methods, and the lack of contextual information (e.g. spatial extent, morphological context). The MOCHAS project strives to address such limitations by providing a global, detailed survey of aqueous minerals on Mars based on 10 years of data from the OMEGA and CRISM imaging spectrometers. Contextual data is provided, including deposit sizes, morphology and detailed composition when available. Sampling biases are also addressed. It will be openly distributed in GIS-ready format and will be participative. For example, it will be possible for researchers to submit requests for specific mapping of regions of interest, or add/refine mineral detections. An initial release is scheduled in Fall 2016 and will feature a two orders of magnitude increase in sample size compared to previous studies.
Improving tritium exposure reconstructions using accelerator mass spectrometry
Hunt, J. R.; Vogel, J. S.; Knezovich, J. P.
2010-01-01
Direct measurement of tritium atoms by accelerator mass spectrometry (AMS) enables rapid low-activity tritium measurements from milligram-sized samples and permits greater ease of sample collection, faster throughput, and increased spatial and/or temporal resolution. Because existing methodologies for quantifying tritium have some significant limitations, the development of tritium AMS has allowed improvements in reconstructing tritium exposure concentrations from environmental measurements and provides an important additional tool in assessing the temporal and spatial distribution of chronic exposure. Tritium exposure reconstructions using AMS were previously demonstrated for a tree growing on known levels of tritiated water and for trees exposed to atmospheric releases of tritiated water vapor. In these analyses, tritium levels were measured from milligram-sized samples with sample preparation times of a few days. Hundreds of samples were analyzed within a few months of sample collection and resulted in the reconstruction of spatial and temporal exposure from tritium releases. Although the current quantification limit of tritium AMS is not adequate to determine natural environmental variations in tritium concentrations, it is expected to be sufficient for studies assessing possible health effects from chronic environmental tritium exposure. PMID:14735274
Maturation and sexual ontogeny in the spangled emperor Lethrinus nebulosus.
Marriott, R J; Jarvis, N D C; Adams, D J; Gallash, A E; Norriss, J; Newman, S J
2010-04-01
The reproductive development and sexual ontogeny of spangled emperor Lethrinus nebulosus populations in the Ningaloo Marine Park (NMP) were investigated to obtain an improved understanding of its evolved reproductive strategy and data for fisheries management. Evidence derived from (1) analyses of histological data and sampled sex ratios with size and age, (2) the identification of residual previtellogenic oocytes in immature and mature testes sampled during the spawning season and (3) observed changes in testis internal structure with increasing fish size and age, demonstrated a non-functional protogynous hermaphroditic strategy (or functional gonochorism). All the smallest and youngest fish sampled were female until they either changed sex to male at a mean 277.5 mm total length (L(T)) and 2.3 years old or remained female and matured at a larger mean L(T) (392.1 mm) and older age (3.5 years). Gonad masses were similar for males and females over the size range sampled and throughout long reproductive lives (up to a maximum estimated age of c. 31 years), which was another correlate of functional gonochorism. That the mean L(T) at sex change and female maturity were below the current minimum legal size (MLS) limit (410 mm) demonstrated that the current MLS limit is effective for preventing recreational fishers in the NMP retaining at least half of the juvenile males and females in their landed catches.
Selbig, W.R.; Bannerman, R.; Bowman, G.
2007-01-01
Sand-sized particles (>63 μm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with sample splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous (water suspension of particles less than 63 μm) samples were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample rather than an aliquot. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
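The worst-case calculation can be illustrated in a simplified one-sample, known-variance setting (not the two-arm, allocation-modification setting of the paper): after stage 1, an adversary chooses the stage-2 sample size that maximizes the conditional probability that the conventional fixed-sample z-test rejects, and the maximized conditional error is then integrated over the stage-1 statistic.

```python
# Simplified numerical sketch of the worst-case type 1 error idea (one-sample z-test,
# known variance); the grid of allowed stage-2 sizes and n1 are illustrative assumptions.
import numpy as np
from scipy.stats import norm

alpha, n1 = 0.025, 50
z_crit = norm.ppf(1 - alpha)
n2_grid = np.arange(1, 501)                      # allowed stage-2 sample sizes

def conditional_error(z1, n2):
    """P(final fixed-sample z exceeds z_crit | stage-1 z = z1), under H0, for stage-2 size n2."""
    num = z_crit * np.sqrt(n1 + n2) - np.sqrt(n1) * z1
    return norm.sf(num / np.sqrt(n2))

# Integrate the adversarially maximized conditional error over the stage-1 statistic.
z1_grid = np.linspace(-6, 6, 4001)
worst = np.array([conditional_error(z1, n2_grid).max() for z1 in z1_grid])
max_type1 = np.trapz(worst * norm.pdf(z1_grid), z1_grid)
print(f"nominal alpha = {alpha}, worst-case type 1 error ~= {max_type1:.4f}")
```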
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-04-25
To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records selected as 10 and 70 records, bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both the samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
Power and sample size for multivariate logistic modeling of unmatched case-control studies.
Gail, Mitchell H; Haneuse, Sebastien
2017-01-01
Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
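A simulation-based power calculation of the kind mentioned can be sketched as follows; the data-generating model, prevalence and effect sizes are invented for illustration, and this is not the authors' software.

```python
# Sketch of a simulation-based power estimate for unconditional logistic regression in an
# unmatched case-control study: detect a scalar exposure effect while adjusting for a
# correlated covariate. All parameters are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def simulate_power(n_cases=500, n_controls=500, beta_exposure=np.log(1.5),
                   beta_covariate=np.log(2.0), base_prev=0.05, reps=200, alpha=0.05):
    hits = 0
    for _ in range(reps):
        N = 200_000                                          # source population
        z = rng.normal(size=N)                               # confounding covariate
        x = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.5 * z))))   # exposure related to z
        intercept = np.log(base_prev / (1 - base_prev))
        p = 1 / (1 + np.exp(-(intercept + beta_exposure * x + beta_covariate * z)))
        y = rng.binomial(1, p)
        cases = rng.choice(np.flatnonzero(y == 1), n_cases, replace=False)
        controls = rng.choice(np.flatnonzero(y == 0), n_controls, replace=False)
        idx = np.concatenate([cases, controls])
        design = sm.add_constant(np.column_stack([x[idx], z[idx]]))
        fit = sm.Logit(y[idx], design).fit(disp=0)
        hits += fit.pvalues[1] < alpha                       # Wald test on the exposure term
    return hits / reps

print("Estimated power:", simulate_power())
```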
Double asymptotics for the chi-square statistic.
Rempała, Grzegorz A; Wesołowski, Jacek
2016-12-01
Consider the distributional limit of the Pearson chi-square statistic when the number of classes m_n increases with the sample size n and n/m_n → λ ∈ [0, ∞]. Under mild moment conditions, the limit is Gaussian for λ = ∞, Poisson for finite λ > 0, and degenerate for λ = 0.
ERIC Educational Resources Information Center
Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio
2016-01-01
Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness during assembly is that the control is a breaking (destructive) sort, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness during assembly on automatic rotor lines requires sampling plans that ensure a minimum size of control samples. Comparison of the limit values of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that the ACSP-1 provides lower limit values for the average output defect level. The average sample size when using the ACSP-1 plan is also smaller than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, allows sampling control procedures to be automated while ensuring the required level of quality of assembled products and minimizing sample size.
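For reference, the classical Dodge CSP-1 continuous sampling plan that the ACSP-1 is compared against can be simulated in a few lines; the sketch below uses assumed values of the clearance number i, sampling fraction f and defect probability, and does not reproduce the ACSP-1 modification.

```python
# Minimal simulation of the classical Dodge CSP-1 continuous sampling plan.
# Rules: 100% inspection until i consecutive conforming units are found, then inspect a
# random fraction f of units; any defect found returns the line to 100% inspection.
# Inspected defective units are assumed to be removed from the output stream.
import numpy as np

rng = np.random.default_rng(11)

def csp1(p_defect, i=50, f=0.1, n_units=200_000):
    screening, run = True, 0
    inspected = passed_defects = 0
    for _ in range(n_units):
        defective = rng.random() < p_defect
        inspect = screening or (rng.random() < f)
        if inspect:
            inspected += 1
            if defective:
                screening, run = True, 0       # defect found: back to 100% inspection
            elif screening:
                run += 1
                if run >= i:
                    screening = False          # switch to sampling inspection
        elif defective:
            passed_defects += 1                # uninspected defective reaches the output
    return inspected / n_units, passed_defects / n_units

frac_inspected, outgoing_defect_level = csp1(p_defect=0.01)
print(f"average fraction inspected ~= {frac_inspected:.3f}, "
      f"average outgoing defect level ~= {outgoing_defect_level:.4f}")
```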
Extension of latin hypercube samples with correlated variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J. PhD.
2006-11-01
A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
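The stratum-refinement step behind such an extension is easy to sketch for the uncorrelated case: each original stratum of width 1/m is split in two, and the m new points are placed in the unoccupied half-strata. The rank-correlation restoration that is the focus of the report is not reproduced here.

```python
# Sketch of doubling a Latin hypercube sample from size m to 2m by stratum refinement.
# The result contains the original points and is itself an LHS of size 2m; the
# correlation-preserving reordering described in the report is omitted.
import numpy as np

rng = np.random.default_rng(5)

def extend_lhs(original):
    m, d = original.shape
    new = np.empty_like(original)
    for j in range(d):
        occupied_fine = np.floor(original[:, j] * 2 * m).astype(int)   # fine strata in use
        empty_fine = np.setdiff1d(np.arange(2 * m), occupied_fine)
        vals = (empty_fine + rng.random(m)) / (2 * m)                  # one point per empty stratum
        new[:, j] = rng.permutation(vals)                              # random pairing across dims
    return np.vstack([original, new])

# A small 5-point, 2-variable LHS on [0, 1]^2, then its extension to 10 points.
base = (np.stack([rng.permutation(5) for _ in range(2)], axis=1) + rng.random((5, 2))) / 5
extended = extend_lhs(base)
print(extended.shape)   # (10, 2)
```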
A single test for rejecting the null hypothesis in subgroups and in the overall sample.
Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra
2017-01-01
In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
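One simple, concrete way to combine subgroup and overall p-values with pre-specified weights is a weighted inverse-normal rule, shown below for illustration; the paper's exact statistic and its handling of the dependence between the overall sample and the subgroups may differ (the sketch treats the combined tests as independent).

```python
# Weighted inverse-normal combination of one-sided P values, up-weighting the subgroups
# expected to show larger effects (illustrative rule, not necessarily the paper's statistic).
import numpy as np
from scipy.stats import norm

def weighted_combination(p_values, weights):
    p = np.asarray(p_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = norm.isf(p)                                  # per-test z-scores
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return norm.sf(z_comb)                           # combined one-sided P value

# Example: moderate subgroup, mild subgroup and overall-sample tests, with the moderate
# subgroup (expected larger effect) given the greatest weight.
print(weighted_combination([0.01, 0.20, 0.04], weights=[2.0, 1.0, 1.5]))
```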
Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.
Kristunas, Caroline; Morris, Tom; Gray, Laura
2017-11-15
To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRT) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Any, not limited to healthcare settings. Any taking part in an SW-CRT published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of that which had been assumed. Cluster sizes often vary in SW-CRTs. Reporting of SW-CRTs also remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Globule-size distribution in injectable 20% lipid emulsions: Compliance with USP requirements.
Driscoll, David F
2007-10-01
The compliance of injectable 20% lipid emulsions with the globule-size limits in chapter 729 of the U.S. Pharmacopeia (USP) was examined. As established in chapter 729, dynamic light scattering was applied to determine mean droplet diameter (MDD), with an upper limit of 500 nm. Light obscuration was used to determine the size of fat globules found in the large-diameter tail, expressed as the volume-weighted percent fat exceeding 5 microm (PFAT(5)), with an upper limit of 0.05%. Compliance of seven different emulsions, six of which were stored in plastic bags, with USP limits was assessed. To avoid reaching coincidence limits during the application of method II from overly concentrated emulsion samples, a variable dilution scheme was used to optimize the globule-size measurements for each emulsion. One-way analysis of variance of globule-size distribution (GSD) data was conducted if any results of method I or II exceeded the respective upper limits. Most injectable lipid emulsions complied with the limits established by USP chapter 729, with the exception of those from one manufacturer, which failed to meet the proposed PFAT(5) limits for three of the emulsions tested. In contrast, all others studied (one packaged in glass and three packaged in plastic) met both criteria. Among the seven injectable lipid emulsions tested for GSD, all met USP chapter 729 MDD requirements and three, all from the same manufacturer and packaged in plastic, did not meet PFAT(5) requirements.
Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox
2017-08-01
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.
Moran, James; Alexander, Thomas; Aalseth, Craig; ...
2017-01-26
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.
Subjective, Autonomic, and Endocrine Reactivity during Social Stress in Children with Social Phobia
ERIC Educational Resources Information Center
Kramer, Martina; Seefeldt, Wiebke Lina; Heinrichs, Nina; Tuschen-Caffier, Brunna; Schmitz, Julian; Wolf, Oliver Tobias; Blechert, Jens
2012-01-01
Reports of exaggerated anxiety and physiological hyperreactivity to social-evaluative situations are characteristic of childhood social phobia (SP). However, laboratory research on subjective, autonomic and endocrine functioning in childhood SP is scarce, inconsistent and limited by small sample sizes, limited breadth of measurements, and the use…
Blinded and unblinded internal pilot study designs for clinical trials with count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-07-01
Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently blinded sample size reestimation procedures for trials with count data were proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is in many practically relevant situations considerably smaller than the one of the unblinded procedures. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
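The blinded re-estimation idea can be sketched for two Poisson rates with the standard log-rate-ratio sample size formula: at the interim only the pooled (blinded) event rate is updated, while the planning-stage rate ratio is retained. The numbers below are illustrative and the formulas are the textbook approximation, not the paper's exact procedures.

```python
# Rough sketch of blinded sample size re-estimation for comparing two Poisson rates
# (unit follow-up time assumed). Only the pooled event rate is re-estimated at the
# interim; the assumed rate ratio theta from the planning stage is kept.
import numpy as np
from scipy.stats import norm

def n_per_group(rate_control, theta, alpha=0.05, power=0.80):
    rate_treat = theta * rate_control
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z**2 * (1 / rate_control + 1 / rate_treat) / np.log(theta) ** 2))

theta_planned = 0.7                                   # assumed treatment effect (rate ratio)
print("initial n/group:", n_per_group(rate_control=1.2, theta=theta_planned))

# Blinded interim look: pooled rate from both arms combined, without unblinding.
pooled_events, pooled_exposure = 150, 100.0
pooled_rate = pooled_events / pooled_exposure
# Control rate consistent with the pooled rate and the *assumed* theta.
rate_control_hat = 2 * pooled_rate / (1 + theta_planned)
print("re-estimated n/group:", n_per_group(rate_control_hat, theta_planned))
```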
Yasaki, Hirotoshi; Yasui, Takao; Yanagida, Takeshi; Kaji, Noritada; Kanai, Masaki; Nagashima, Kazuki; Kawai, Tomoji; Baba, Yoshinobu
2017-10-11
Measuring ionic currents passing through nano- or micropores has shown great promise for the electrical discrimination of various biomolecules, cells, bacteria, and viruses. However, conventional measurements have shown there is an inherent limitation to the detectable particle volume (1% of the pore volume), which critically hinders applications to real mixtures of biomolecule samples with a wide size range of suspended particles. Here we propose a rational methodology that can detect samples with the detectable particle volume of 0.01% of the pore volume by measuring a transient current generated from the potential differences in a microfluidic bridge circuit. Our method substantially suppresses the background ionic current from the μA level to the pA level, which essentially lowers the detectable particle volume limit even for relatively large pore structures. Indeed, utilizing a microscale long pore structure (volume of 5.6 × 10^4 aL; height and width of 2.0 × 2.0 μm; length of 14 μm), we successfully detected various samples including polystyrene nanoparticles (volume: 4 aL), bacteria, cancer cells, and DNA molecules. Our method will expand the applicability of ionic current sensing systems for various mixed biomolecule samples with a wide size range, which have been difficult to measure by previously existing pore technologies.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
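The comparison is easy to reproduce on simulated data: compute per-patient net benefits, then estimate the standard error of the INB once with the CLT formula for a difference in means and once with a non-parametric bootstrap. The willingness-to-pay value and distributions below are illustrative assumptions.

```python
# CLT vs. bootstrap standard errors for an incremental net benefit (INB) with skewed
# (gamma) costs. All parameter values are illustrative, not from any trial.
import numpy as np

rng = np.random.default_rng(42)
lam = 20_000                                     # willingness-to-pay per unit of effect
n = 100

def simulate_arm(n, mean_cost, qaly_mean):
    costs = rng.gamma(shape=2.0, scale=mean_cost / 2.0, size=n)   # right-skewed costs
    effects = rng.normal(qaly_mean, 0.1, size=n)
    return lam * effects - costs                                  # per-patient net benefit

nb_treat = simulate_arm(n, mean_cost=6000, qaly_mean=0.75)
nb_ctrl = simulate_arm(n, mean_cost=5000, qaly_mean=0.70)
inb = nb_treat.mean() - nb_ctrl.mean()

# Central limit theorem: SE of a difference in means.
se_clt = np.sqrt(nb_treat.var(ddof=1) / n + nb_ctrl.var(ddof=1) / n)

# Non-parametric bootstrap: resample within each arm.
boot = np.array([
    rng.choice(nb_treat, n).mean() - rng.choice(nb_ctrl, n).mean()
    for _ in range(2000)
])
print(f"INB = {inb:.0f}, CLT SE = {se_clt:.0f}, bootstrap SE = {boot.std(ddof=1):.0f}")
```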
A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data
Chen, Yi-Hau
2017-01-01
Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data stays difficult because of limited sample size. This limitation also leads to the practice of using a competitive null as common approach; which fundamentally implies genes or proteins as independent units. The independent assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed by the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA. PMID:28622336
A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.
Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin
2017-06-01
Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data stays difficult because of limited sample size. This limitation also leads to the practice of using a competitive null as common approach; which fundamentally implies genes or proteins as independent units. The independent assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimation if the sample size is very limited, which is usually the case for the data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed by the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA.
Size-selective separation of submicron particles in suspensions with ultrasonic atomization.
Nii, Susumu; Oka, Naoyoshi
2014-11-01
Aqueous suspensions containing silica or polystyrene latex were ultrasonically atomized for separating particles of a specific size. With the help of a fog involving fine liquid droplets with a narrow size distribution, submicron particles in a limited size range were successfully separated from suspensions. Performance of the separation was characterized by analyzing the size and the concentration of collected particles with a high-resolution method. Irradiation of 2.4 MHz ultrasound to sample suspensions allowed the separation of particles of specific size from 90 to 320 nm without regard to the type of material. Addition of a small amount of the nonionic surfactant PONPE20 to SiO2 suspensions enhanced the collection of finer particles, and achieved a remarkable increase in the number of collected particles. Degassing of the sample suspension resulted in eliminating the separation performance. Dissolved air in suspensions plays an important role in this separation. Copyright © 2014 Elsevier B.V. All rights reserved.
Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...
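A generic version of such an experiment, with synthetic features standing in for MODIS time series, can be sketched with scikit-learn; it only illustrates how accuracy is tracked against training sample size and is unrelated to the study's data.

```python
# Sketch: SVM classification accuracy as a function of training sample size, using
# synthetic multi-class data in place of MODIS time-series features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=23, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(1)
for n_train in (50, 100, 250, 500, 1000, 2000):
    idx = rng.choice(len(X_pool), n_train, replace=False)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_pool[idx], y_pool[idx])
    print(n_train, round(clf.score(X_test, y_test), 3))
```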
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altabet, Y. Elia; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu; Stillinger, Frank H.
In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρ_S. The tensile limit at ρ_S is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρ_S is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.
New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084
NASA Technical Reports Server (NTRS)
McKay, D.S.; Cooper, B.L.; Riofrio, L.M.
2009-01-01
We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators including our own group performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss, and smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution which would be revealed by other methods that produce many smaller size bins.
Digital image processing of nanometer-size metal particles on amorphous substrates
NASA Technical Reports Server (NTRS)
Soria, F.; Artal, P.; Bescos, J.; Heinemann, K.
1989-01-01
The task of differentiating very small metal aggregates supported on amorphous films from the phase contrast image features inherently stemming from the support is extremely difficult in the nanometer particle size range. Digital image processing was employed to overcome some of the ambiguities in evaluating such micrographs. It was demonstrated that such processing allowed positive particle detection and a limited degree of statistical size analysis even for micrographs where, by bare-eye examination, the distinction between particles and erroneous substrate features would seem highly ambiguous. The smallest size class detected for Pd/C samples peaks at 0.8 nm. This size class was found in various samples prepared under different evaporation conditions, and it is concluded that these particles consist of a 'magic number' of 13 atoms and have cuboctahedral or icosahedral crystal structure.
NASA Technical Reports Server (NTRS)
Smith, J. L.
1983-01-01
Existing techniques were surveyed, an experimental procedure was developed, a laboratory test model was fabricated, limited data were recovered for proof of principle, and the relationship between particle size distribution and amplitude measurements was illustrated in an effort to develop a low-cost, simplified optical technique for measuring particle size distributions and velocities in fluidized bed combustors and gasifiers. A He-Ne laser illuminated Ronchi rulings (range 10 to 500 lines per inch). Various samples of known particle size distributions were passed through the fringe pattern produced by the rulings. A photomultiplier tube converted light from the fringe volume to an electrical signal which was recorded using an oscilloscope and camera. The signal amplitudes were correlated against the known particle size distributions. The correlation holds true for various samples.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvest plots spaced every 50 m, and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data, and clip-harvest data from two different size clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e. estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
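The precision-based sample size calculation described here has a simple normal-approximation form, n ≈ (z·CV / relative margin)², sketched below with made-up pilot values; the actual NEON calculations may differ.

```python
# Sketch of a precision-based sample size: number of clip-harvest plots needed to
# estimate mean biomass to within +/-10% with 95% confidence, using the coefficient of
# variation from pilot (characterization-phase) data. Pilot values are hypothetical.
import numpy as np
from scipy.stats import norm

pilot = np.array([212.0, 340.0, 185.0, 298.0, 260.0, 150.0, 410.0, 275.0])  # g/m^2, hypothetical
cv = pilot.std(ddof=1) / pilot.mean()
rel_margin, z = 0.10, norm.ppf(0.975)

n_required = int(np.ceil((z * cv / rel_margin) ** 2))
print(f"pilot CV = {cv:.2f}; approx. {n_required} plots for +/-10% at 95% confidence")
```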
Aaron Weiskittel; Jereme Frank; James Westfall; David Walker; Phil Radtke; David Affleck; David Macfarlane
2015-01-01
Tree biomass models are widely used but differ due to variation in the quality and quantity of data used in their development. We reviewed over 250 biomass studies and categorized them by species, location, sampled diameter distribution, and sample size. Overall, less than half of the tree species in Forest Inventory and Analysis database (FIADB) are without a...
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples that were 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as larger number of small-sized samples (LNSS) in a 1 ha forest plot, and placed them to germinate in a greenhouse, and collected 30 samples that were 1 m × 1 m × 10 cm, referred to as small number of large-sized samples (SNLS) and placed them (10 each) in a nearby secondary forest, shrub land and grass land. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% woody plant species being detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increased number of species vs. sampled areas confirmed power-law relationships for forest stand, the LNSS and SNLS at all three recipient sites. Our results, although based on one forest, indicate that conventional LNSS did not yield a high percentage of detection for woody species, but SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from power equations is larger than the sampled area in most studies in the literature. Increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
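The species-area calculation can be sketched by fitting the power law S = c·A^z on log-log axes and solving for the area that reaches a target species count; the numbers below are invented for illustration and are not the study's data.

```python
# Fit a species-area power law S = c * A^z by log-log regression and solve for the
# sampled area needed to detect a target number of woody species (illustrative data).
import numpy as np

area = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0, 4.0])       # m^2 sampled (cumulative)
species = np.array([2, 5, 7, 14, 18, 24, 31])                 # woody species detected

z, log_c = np.polyfit(np.log(area), np.log(species), 1)       # slope z, intercept log(c)
target_species = 40
min_area = np.exp((np.log(target_species) - log_c) / z)
print(f"z = {z:.2f}, c = {np.exp(log_c):.1f}, "
      f"area for {target_species} species ~= {min_area:.1f} m^2")
```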
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
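A simplified stand-in for this kind of fit is maximum likelihood for a lognormal distribution with left-censoring at a detection limit, where screening-negative samples contribute only P(X < LOD) to the likelihood. The sketch below uses simulated data and plain MLE rather than the Bayesian/MCMC approach evaluated in the paper.

```python
# Fit a lognormal concentration distribution to heavily left-censored data by maximum
# likelihood: enumerated samples contribute the lognormal density, screening-negative
# samples contribute P(X < LOD). Data are simulated; this is not the paper's method.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
true_mu, true_sigma, lod = -1.0, 1.2, 1.0           # log-scale parameters, detection limit
conc = rng.lognormal(true_mu, true_sigma, size=600)
observed = conc[conc >= lod]                         # enumerated (e.g. MPN) values
n_censored = np.sum(conc < lod)                      # screening-negative samples

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_obs = norm.logpdf(np.log(observed), mu, sigma) - np.log(observed)   # lognormal pdf
    ll_cens = n_censored * norm.logcdf((np.log(lod) - mu) / sigma)         # P(below LOD)
    return -(ll_obs.sum() + ll_cens)

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"share censored = {n_censored / conc.size:.2f}, "
      f"mu_hat = {mu_hat:.2f}, sigma_hat = {sigma_hat:.2f}")
```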
Sampling studies to estimate the HIV prevalence rate in female commercial sex workers.
Pascom, Ana Roberta Pati; Szwarcwald, Célia Landmann; Barbosa Júnior, Aristides
2010-01-01
We investigated sampling methods being used to estimate the HIV prevalence rate among female commercial sex workers. The studies were classified according to the adequacy or not of the sample size to estimate HIV prevalence rate and according to the sampling method (probabilistic or convenience). We identified 75 studies that estimated the HIV prevalence rate among female sex workers. Most of the studies employed convenience samples. The sample size was not adequate to estimate HIV prevalence rate in 35 studies. The use of convenience samples limits statistical inference for the whole group. It was observed that there was an increase in the number of published studies since 2005, as well as in the number of studies that used probabilistic samples. This represents a large advance in the monitoring of risk behavior practices and HIV prevalence rate in this group.
Witt, Emitt C; Wronkiewicz, David J; Shi, Honglan
2013-01-01
Fugitive road dust collection for chemical analysis and interpretation has been limited by the quantity and representativeness of samples. Traditional methods of fugitive dust collection generally focus on point-collections that limit data interpretation to a small area or require the investigator to make gross assumptions about the origin of the sample collected. These collection methods often produce a limited quantity of sample that may hinder efforts to characterize the samples by multiple geochemical techniques, preserve a reference archive, and provide a spatially integrated characterization of the road dust health hazard. To achieve a "better sampling" for fugitive road dust studies, a cyclonic fugitive dust (CFD) sampler was constructed and tested. Through repeated and identical sample collection routes at two collection heights (50.8 and 88.9 cm above the road surface), the products of the CFD sampler were characterized using particle size and chemical analysis. The average particle size collected by the cyclone was 17.9 μm, whereas particles collected by a secondary filter were 0.625 μm. No significant difference was observed between the two sample heights tested and duplicates collected at the same height; however, greater sample quantity was achieved at 50.8 cm above the road surface than at 88.9 cm. The cyclone effectively removed 94% of the particles >1 μm, which substantially reduced the loading on the secondary filter used to collect the finer particles; therefore, suction is maintained for longer periods of time, allowing for an average sample collection rate of about 2 g mi⁻¹. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric
2016-01-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying
2015-07-01
A novel magnetic solid phase extraction coupled with high-performance liquid chromatography method was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and extraction time significantly influenced the extraction performance. The increase in the specific surface area, the enlargement of pore size, and the reduction of particle size could enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m²/g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limits of detection for phenanthrene and benzo[g,h,i]perylene in the developed analysis method were 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recoveries of phenanthrene and benzo[g,h,i]perylene ranged from 89.5-98.6% and 82.9-89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Code of Federal Regulations, 2014 CFR
2014-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
Code of Federal Regulations, 2013 CFR
2013-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
Code of Federal Regulations, 2012 CFR
2012-01-01
... percent, one-sided confidence limit and a sample size of n1. (2) For an energy consumption standard (ECS..., where ECS is the energy consumption standard and t is a statistic based on a 97.5 percent, one-sided...
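The truncated regulatory text above refers to a one-sided, 97.5 percent confidence limit based on a t statistic and a sample of size n1. A minimal sketch of such a limit is given below; the measurement values and the standard are invented, and the actual CFR compliance procedure is more involved than this.

```python
import numpy as np
from scipy import stats

def one_sided_ucl(measurements, level=0.975):
    """97.5 % one-sided upper confidence limit for the mean of a small sample."""
    x = np.asarray(measurements, dtype=float)
    n = x.size
    t = stats.t.ppf(level, df=n - 1)          # one-sided t statistic for sample size n
    return x.mean() + t * x.std(ddof=1) / np.sqrt(n)

energy_use = [4.02, 3.95, 4.10, 4.05]         # hypothetical measured consumption values
ECS = 4.3                                     # hypothetical energy consumption standard
print(one_sided_ucl(energy_use) <= ECS)       # simplistic illustrative compliance check
```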
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose
2004-01-01
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
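For readers unfamiliar with the procedure named above, the sketch below computes the Mantel-Haenszel common odds ratio across score strata and the associated ETS delta index; the stratum counts are invented and this is not the study's code.

```python
import math

def mantel_haenszel_dif(tables):
    """Mantel-Haenszel common odds ratio across score strata.

    tables: list of 2x2 tables ((A, B), (C, D)) per total-score stratum, where
    A/B = reference group correct/incorrect and C/D = focal group correct/incorrect.
    """
    num = sum(A * D / (A + B + C + D) for (A, B), (C, D) in tables)
    den = sum(B * C / (A + B + C + D) for (A, B), (C, D) in tables)
    alpha_mh = num / den
    delta_mh = -2.35 * math.log(alpha_mh)     # ETS delta scale (MH D-DIF)
    return alpha_mh, delta_mh

# invented counts for three score strata
strata = [((40, 10), (35, 15)), ((30, 20), (22, 28)), ((15, 35), (10, 40))]
print(mantel_haenszel_dif(strata))
```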
Pediatric Disability and Caregiver Separation
ERIC Educational Resources Information Center
McCoyd, Judith L. M.; Akincigil, Ayse; Paek, Eun Kwang
2010-01-01
The evidence that the birth of a child with a disability leads to divorce or separation is equivocal, with the majority of recent research suggesting that such a birth and childrearing may be stressful, but not necessarily toxic, to the caregiver relationship. Such research has been limited by small sample sizes and nonrepresentative samples and…
Construction of the Examination Stress Scale for Adolescent Students
ERIC Educational Resources Information Center
Sung, Yao-Ting; Chao, Tzu-Yang
2015-01-01
The tools used for measuring examination stress have three main limitations: sample selected, sample sizes, and measurement contents. In this study, we constructed the Examination Stress Scale (ExamSS), and 4,717 high school students participated in this research. The results indicate that ExamSS has satisfactory reliability, construct validity,…
Densmore, Brenda K.; Rus, David L.; Moser, Matthew T.; Hall, Brent M.; Andersen, Michael J.
2016-02-04
Comparisons of concentrations and loads from EWI samples collected from different transects within a study site resulted in few significant differences, but comparisons are limited by small sample sizes and large within-transect variability. When comparing the Missouri River upstream transect to the chute inlet transect, similar results were determined in 2012 as were determined in 2008—the chute inlet affected the amount of sediment entering the chute from the main channel. In addition, the Kansas chute is potentially affecting the sediment concentration within the Missouri River main channel, but small sample size and construction activities within the chute limit the ability to fully understand either the effect of the chute in 2012 or the effect of the chute on the main channel during a year without construction. Finally, some differences in SSC were detected between the Missouri River upstream transects and the chute downstream transects; however, the effect of the chutes on the Missouri River main-channel sediment transport was difficult to isolate because of construction activities and sampling variability.
Svarcová, Silvie; Kocí, Eva; Bezdicka, Petr; Hradil, David; Hradilová, Janka
2010-09-01
The uniqueness and limited amounts of forensic samples and samples from objects of cultural heritage, together with the complexity of their composition, require the application of a wide range of micro-analytical methods which are non-destructive to the samples, because these must be preserved for potential later revision. Laboratory powder X-ray micro-diffraction (micro-XRD) is a very effective non-destructive technique for direct phase analysis of samples smaller than 1 mm containing crystal constituents. It complements optical and electron microscopy with elemental micro-analysis, especially in cases of complicated mixtures containing phases with similar chemical composition. However, modification of X-ray diffraction to the micro-scale together with its application to very heterogeneous real samples leads to deviations from the standard procedure. Knowledge of both the limits and the phenomena which can arise during the analysis is crucial for the meaningful and proper application of the method. We evaluated basic limits of micro-XRD equipped with a mono-capillary with an exit diameter of 0.1 mm, for example the size of the irradiated area, appropriate grain size, and detection limits allowing identification of given phases. We tested the reliability and accuracy of quantitative phase analysis based on micro-XRD data in comparison with conventional XRD (reflection and transmission), carrying out experiments with two-phase model mixtures simulating historic colour layers. Furthermore, we demonstrate the wide use of micro-XRD for investigation of various types of micro-samples (contact traces, powder traps, colour layers) and we show how to enhance data quality by proper choice of experiment geometry and conditions.
NASA Astrophysics Data System (ADS)
Lemal, Philipp; Geers, Christoph; Monnier, Christophe A.; Crippa, Federica; Daum, Leopold; Urban, Dominic A.; Rothen-Rutishauser, Barbara; Bonmarin, Mathias; Petri-Fink, Alke; Moore, Thomas L.
2017-04-01
Lock-in thermography (LIT) is a sensitive imaging technique generally used in engineering and materials science (e.g. detecting defects in composite materials). However, it has recently been expanded for investigating the heating power of nanomaterials, such as superparamagnetic iron oxide nanoparticles (SPIONs). Here we implement LIT as a rapid and reproducible method that can evaluate the heating potential of various sizes of SPIONs under an alternating magnetic field (AMF), as well as the limits of detection for each particle size. SPIONs were synthesized via thermal decomposition and stabilized in water via a ligand transfer process. Thermographic measurements of SPIONs were made by stimulating particles of varying sizes and increasing concentrations under an AMF. Furthermore, a commercially available SPION sample was included as an external reference. While the size dependent heating efficiency of SPIONs has been previously described, our objective was to probe the sensitivity limits of LIT. For certain size regimes it was possible to detect signals at concentrations as low as 0.1 mg Fe/mL. Measuring at different concentrations enabled a linear regression analysis and extrapolation of the limit of detection for different size nanoparticles.
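The limit-of-detection extrapolation described above can be illustrated with a generic calibration-curve calculation. The sketch below uses the common 3.3 x (residual SD)/slope convention, which is an assumption rather than necessarily the authors' definition, and the signal values are invented.

```python
import numpy as np

def lod_from_calibration(conc, signal, k=3.3):
    """Limit of detection from a linear calibration: k * (residual SD) / slope.
    The factor k = 3.3 is a common convention, not necessarily the one used above."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = np.asarray(signal) - (slope * np.asarray(conc) + intercept)
    s = resid.std(ddof=2)                     # residual SD, two fitted parameters
    return k * s / slope

# hypothetical LIT amplitudes versus SPION concentration (mg Fe/mL)
c = [0.1, 0.25, 0.5, 1.0, 2.0]
a = [0.8, 2.1, 4.0, 8.3, 16.1]
print(round(lod_from_calibration(c, a), 3))
```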
Determination of thorium by fluorescent x-ray spectrometry
Adler, I.; Axelrod, J.M.
1955-01-01
A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 A.) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.
Lesion Quantification in Dual-Modality Mammotomography
NASA Astrophysics Data System (ADS)
Li, Heng; Zheng, Yibin; More, Mitali J.; Goodale, Patricia J.; Williams, Mark B.
2007-02-01
This paper describes a novel x-ray/SPECT dual modality breast imaging system that provides 3D structural and functional information. While only a limited number of views on one side of the breast can be acquired due to mechanical and time constraints, we developed a technique to compensate for the limited-angle artifact in reconstructed images and accurately estimate both the lesion size and radioactivity concentration. Various angular sampling strategies were evaluated using both simulated and experimental data. It was demonstrated that quantification of lesion size to an accuracy of 10% and quantification of radioactivity to an accuracy of 20% are feasible from limited-angle data acquired with clinically practical dosage and acquisition time.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
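The full framework above relies on a Lagrange-multiplier formulation; as a much simpler illustration of how precision, confidence and sample size interact, the sketch below finds the smallest n needed to estimate a mean to a given relative precision when the coefficient of variation of repeated measurements is assumed known (the CV value is invented).

```python
from scipy import stats

def n_for_relative_precision(cv, precision=0.05, confidence=0.95):
    """Smallest n with t_{n-1} * cv / sqrt(n) <= precision (two-sided confidence)."""
    alpha = 1.0 - confidence
    n = 2
    while stats.t.ppf(1.0 - alpha / 2.0, df=n - 1) * cv / n ** 0.5 > precision:
        n += 1
    return n

# hypothetical coefficient of variation of repeated ED estimates
print(n_for_relative_precision(cv=0.12, precision=0.05, confidence=0.95))
```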
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
40 CFR 761.353 - Second level of sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reduction is to limit the amount of time required to manually cut up larger particles of the waste to pass through a 9.5 millimeter (mm) screen. (a) Selecting a portion of the subsample for particle size reduction... table to select one of these quarters. (b) Reduction of the particle size by the use of a 9.5 mm screen...
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin
2017-08-17
A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, and interpretation of the estimate in the context of small populations. Our aim was to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effect develops over time. In addition, the sample size will always increase because the reduced time on placebo results in a decreased observed treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements than those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.
Design of an occulter testbed at flight Fresnel numbers
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Kasdin, N. Jeremy; Kim, Yunjong; Vanderbei, Robert J.
2015-01-01
An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. Laboratory verification of occulter designs is necessary to validate the optical models used to design and predict occulter performance. At Princeton, we are designing and building a testbed that allows verification of scaled occulter designs whose suppressed shadow is mathematically identical to that of space occulters. Here, we present a sample design that operates at a flight Fresnel number and is thus representative of a realistic space mission. We present calculations of experimental limits arising from the finite size and propagation distance available in the testbed, limitations due to manufacturing feature size, and a non-ideal input beam. We demonstrate how the testbed is designed to be feature-size limited, and provide an estimation of the expected performance.
ERIC Educational Resources Information Center
Wiley, Kristofor R.
2013-01-01
Many of the social and emotional needs that have historically been associated with gifted students have been questioned on the basis of recent empirical evidence. Research on the topic, however, is often limited by sample size, selection bias, or definition. This study addressed these limitations by applying linear regression methodology to data…
A cavitation transition in the energy landscape of simple cohesive liquids and glasses
NASA Astrophysics Data System (ADS)
Altabet, Y. Elia; Stillinger, Frank H.; Debenedetti, Pablo G.
2016-12-01
In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρS. The tensile limit at ρS is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρS is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.
Creep of quartz by dislocation and grain boundary processes
NASA Astrophysics Data System (ADS)
Fukuda, J. I.; Holyoke, C. W., III; Kronenberg, A. K.
2015-12-01
Wet polycrystalline quartz aggregates deformed at temperatures T of 600°-900°C and strain rates of 10⁻⁴-10⁻⁶ s⁻¹ at a confining pressure Pc of 1.5 GPa exhibit plasticity at low T, governed by dislocation glide and limited recovery, and grain size-sensitive creep at high T, governed by diffusion and sliding at grain boundaries. Quartz aggregates were HIP-synthesized, subjecting natural milky quartz powder to T=900°C and Pc=1.5 GPa, and grain sizes (2 to 25 μm) were varied by annealing at these conditions for up to 10 days. Infrared absorption spectra exhibit a broad OH band at 3400 cm⁻¹ due to molecular water inclusions with a calculated OH content (~4000 ppm, H/10⁶Si) that is unchanged by deformation. Rate-stepping experiments reveal different stress-strain rate functions at different temperatures and grain sizes, which correspond to differing stress-temperature sensitivities. At 600-700°C and grain sizes of 5-10 μm, flow law parameters compare favorably with those for basal plasticity and dislocation creep of wet quartzites (effective stress exponents n of 3 to 6 and activation enthalpy H* ~150 kJ/mol). Deformed samples show undulatory extinction, limited recrystallization, and c-axis maxima parallel to the shortening direction. Similarly fine-grained samples deformed at 800°-900°C exhibit flow parameters n=1.3-2.0 and H*=135-200 kJ/mol corresponding to grain size-sensitive Newtonian creep. Deformed samples show some undulatory extinction and grain sizes change by recrystallization; however, grain boundary deformation processes are indicated by the low value of n. Our experimental results for grain size-sensitive creep can be compared with models of grain boundary diffusion and grain boundary sliding using measured rates of silicon grain boundary diffusion. While many quartz mylonites show microstructural and textural evidence for dislocation creep, results for grain size-sensitive creep may apply to very fine-grained (<10 μm) quartz mylonites.
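The flow-law parameters quoted above come from fits of a power-law creep equation; the sketch below shows a generic least-squares fit of ln(strain rate) = ln A + n ln(stress) - H*/(RT), using invented rate-stepping data rather than the authors' measurements.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fit_flow_law(strain_rate, stress_mpa, temp_c):
    """Least-squares fit of ln(strain_rate) = ln A + n*ln(stress) - H/(R*T)
    for a power-law creep law; returns (A, n, H) with H in J/mol."""
    y = np.log(strain_rate)
    T = np.asarray(temp_c) + 273.15
    X = np.column_stack([np.ones_like(y), np.log(stress_mpa), -1.0 / (R * T)])
    lnA, n, H = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.exp(lnA), n, H

# invented rate-stepping data: (strain rate s^-1, stress MPa, temperature C)
eps = [1e-6, 3e-6, 1e-5, 2e-5, 8e-5, 3e-4]
sig = [120, 160, 220, 150, 210, 300]
T   = [600, 600, 600, 700, 700, 700]
print(fit_flow_law(eps, sig, T))
```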
Mair, R W; Sen, P N; Hürlimann, M D; Patz, S; Cory, D G; Walsworth, R L
2002-06-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Padé approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Padé interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Padé length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter. c. 2002 Elsevier Sciences (USA).
Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.
1986-04-01
analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis
Code of Federal Regulations, 2010 CFR
2010-10-01
... chromatograph. Detection limit: 0.04 ppm. Recommended air volume and sampling rate: 10 liter at 0.2 liter/min. 1... tube must be less than one inch of mercury at a flow rate of one liter per minute. 3.3. Gas... passed through any hose or tubing before entering the charcoal tube. 5.3.5. A sample size of 10 liters is...
Code of Federal Regulations, 2011 CFR
2011-10-01
... chromatograph. Detection limit: 0.04 ppm. Recommended air volume and sampling rate: 10 liter at 0.2 liter/min. 1... tube must be less than one inch of mercury at a flow rate of one liter per minute. 3.3. Gas... passed through any hose or tubing before entering the charcoal tube. 5.3.5. A sample size of 10 liters is...
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
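Of the four methods compared above, Lexicase Selection is perhaps the easiest to summarize in code. The sketch below is a generic implementation with an invented toy population and error matrix, not the paper's own implementation.

```python
import random

def lexicase_select(population, errors, rng=random.Random(0)):
    """Lexicase parent selection.

    population: list of candidate programs
    errors: errors[i][j] = error of candidate i on fitness case j (lower is better)
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)                        # random case ordering per selection event
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

pop = ["prog_a", "prog_b", "prog_c"]
errs = [[0.0, 3.0, 1.0], [1.0, 0.0, 1.0], [0.0, 2.0, 2.0]]
print(lexicase_select(pop, errs))
```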
USDA-ARS?s Scientific Manuscript database
Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...
Intellectual Abilities in a Large Sample of Children with Velo-Cardio-Facial Syndrome: An Update
ERIC Educational Resources Information Center
De Smedt, Bert; Devriendt, K.; Fryns, J. -P.; Vogels, A.; Gewillig, M.; Swillen, A.
2007-01-01
Background: Learning disabilities are one of the most consistently reported features in Velo-Cardio-Facial Syndrome (VCFS). Earlier reports on IQ in children with VCFS were, however, limited by small sample sizes and ascertainment biases. The aim of the present study was therefore to replicate these earlier findings and to investigate intellectual…
Decision and function problems based on boson sampling
NASA Astrophysics Data System (ADS)
Nikolopoulos, Georgios M.; Brougham, Thomas
2016-07-01
Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.
Van Berkel, Gary J.
2015-10-06
A system and method for analyzing a chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining a chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, to be evaluated in a manner that the spatial-resolution is limited by the size of the pins used to obtain tissue samples, not the size of the sampling device used to solubilize the samples coupled to the pins.
Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland
2017-01-01
For drug delivery, characterization of liposomes regarding size, particle number concentrations, occurrence of low-sized liposome artefacts and drug encapsulation is of importance for understanding their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow-Field Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior to detection. However, the bimodal size distribution could not be detected due to method-inherent limitations. In contrast, cryo transmission electron microscopy corroborated nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control as low-sized sample components possibly impact cell viability. PMID:27639623
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cDⁿ. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
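A quick way to reproduce the flavour of this calculation is a Monte Carlo estimate of the FSD for an exponential size distribution. In the sketch below, the spectrum parameters, bin widths and trial counts are arbitrary choices; Poisson counts are drawn per size interval and X = sum of c*D^n is evaluated for several moments n.

```python
import numpy as np

rng = np.random.default_rng(1)

def fsd_monte_carlo(n_power, lam=2.0, n0=500.0, d_max=8.0, n_bins=80, n_trials=20000):
    """Fractional standard deviation of X = sum(c * D**n) under Poisson sampling
    of an exponential size distribution N(D) = n0 * exp(-lam * D), truncated at d_max."""
    edges = np.linspace(0.0, d_max, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    mean_counts = n0 * np.exp(-lam * centers) * width        # expected particles per bin
    counts = rng.poisson(mean_counts, size=(n_trials, n_bins))
    x = (counts * centers ** n_power).sum(axis=1)             # c cancels in the FSD
    return x.std() / x.mean()

for n in (0, 3, 6):   # e.g. number concentration, water content ~ D^3, reflectivity ~ D^6
    print(n, round(fsd_monte_carlo(n), 3))
```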
NMR/MRI with hyperpolarized gas and high Tc SQUID
Schlenga, Klaus; de Souza, Ricardo E.; Wong-Foy, Annjoe; Clarke, John; Pines, Alexander
2000-01-01
A method and apparatus for the detection of nuclear magnetic resonance (NMR) signals and production of magnetic resonance imaging (MRI) from samples combines the use of hyperpolarized inert gases to enhance the NMR signals from target nuclei in a sample and a high critical temperature (Tc) superconducting quantum interference device (SQUID) to detect the NMR signals. The system operates in static magnetic fields of 3 mT or less (down to 0.1 mT), and at temperatures from liquid nitrogen (77K) to room temperature. Sample size is limited only by the size of the magnetic field coils and not by the detector. The detector is a high Tc SQUID magnetometer designed so that the SQUID detector can be very close to the sample, which can be at room temperature.
Estimation of the bottleneck size in Florida panthers
Culver, M.; Hedrick, P.W.; Murphy, K.; O'Brien, S.; Hornocker, M.G.
2008-01-01
We have estimated the extent of genetic variation in museum (1890s) and contemporary (1980s) samples of Florida panthers Puma concolor coryi for both nuclear loci and mtDNA. The microsatellite heterozygosity in the contemporary sample was only 0.325 of that in the museum samples, although our sample size and number of loci are limited. Support for this estimate is provided by a sample of 84 microsatellite loci in contemporary Florida panthers and Idaho pumas Puma concolor hippolestes, in which the contemporary Florida panther sample had only 0.442 of the heterozygosity of Idaho pumas. The estimated diversities in mtDNA in the museum and contemporary samples were 0.600 and 0.000, respectively. Using a population genetics approach, we have estimated that, to reduce either the microsatellite heterozygosity or the mtDNA diversity this much (in a period of c. 80 years during the 20th century when the numbers were thought to be low), a very small bottleneck size of c. 2 for several generations and a small effective population size in other generations are necessary. Using demographic data from Yellowstone pumas, we estimated the ratio of effective to census population size to be 0.315. Using this ratio, the census population size in the Florida panthers necessary to explain the loss of microsatellite variation was c. 41 for the non-bottleneck generations and 6.2 for the two bottleneck generations. These low bottleneck population sizes and the concomitant reduced effectiveness of selection are probably responsible for the high frequency of several detrimental traits in Florida panthers, namely undescended testicles and poor sperm quality. The recent intensive monitoring both before and after the introduction of Texas pumas in 1995 will make the recovery and genetic restoration of Florida panthers a classic study of an endangered species. Our estimates of the bottleneck size responsible for the loss of genetic variation in the Florida panther complete an unknown aspect of this account. © 2008 The Authors. Journal compilation © 2008 The Zoological Society of London.
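The drift argument above rests on the standard expectation that heterozygosity decays by a factor (1 - 1/(2Ne)) per generation. The sketch below applies that formula to an invented Ne history loosely patterned on the bottleneck scenario described, purely for illustration.

```python
def heterozygosity_retained(ne_by_generation):
    """Expected fraction of heterozygosity retained after drift through a
    sequence of effective population sizes: H_t / H_0 = prod(1 - 1/(2*Ne))."""
    frac = 1.0
    for ne in ne_by_generation:
        frac *= 1.0 - 1.0 / (2.0 * ne)
    return frac

# hypothetical history: two generations near the bottleneck size followed by
# a number of generations at a larger effective size (values purely illustrative)
history = [2, 2] + [13] * 18
print(round(heterozygosity_retained(history), 3))
```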
Coalescence computations for large samples drawn from populations of time-varying sizes
Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek
2017-01-01
We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescent with large sample size. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for analysis of large human mitochondrial DNA dataset. PMID:28170404
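For orientation, the baseline quantities being generalized here are the standard Kingman-coalescent expectations. The short sketch below computes E[T_k], the expected time to the most recent common ancestor and the expected total branch length for a constant-size population, with time assumed to be in units of 2N generations.

```python
from fractions import Fraction

def kingman_expected_times(n):
    """Expected inter-coalescence times E[T_k] for a sample of size n under the
    standard Kingman coalescent (time in units of 2N generations for diploids)."""
    t_k = {k: Fraction(2, k * (k - 1)) for k in range(2, n + 1)}
    t_mrca = sum(t_k.values())                          # equals 2 * (1 - 1/n)
    total_length = sum(k * t for k, t in t_k.items())   # expected total branch length
    return t_k, t_mrca, total_length

t_k, t_mrca, total = kingman_expected_times(10)
print(float(t_mrca), float(total))
```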
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
NASA Astrophysics Data System (ADS)
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited due to the large degree of variance and the consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy that we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
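Sample size estimates of the kind referred to above are typically obtained from a two-arm comparison of mean atrophy rates. The sketch below uses the standard normal-approximation formula with invented effect-size and variance numbers; it is not the authors' calculation.

```python
import math
from scipy import stats

def n_per_arm(sd, delta, alpha=0.05, power=0.8):
    """Two-arm sample size per group to detect a difference in mean atrophy rate
    of delta, with outcome standard deviation sd (normal approximation)."""
    z_a = stats.norm.ppf(1.0 - alpha / 2.0)
    z_b = stats.norm.ppf(power)
    return math.ceil(2.0 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

# hypothetical numbers: 1.5 %/year atrophy with SD 1.0, aiming to detect a 25 % slowing
print(n_per_arm(sd=1.0, delta=0.25 * 1.5))
```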
Neuropsychological impairments in panic disorder: a systematic review.
O'Sullivan, Kate; Newman, Emily F
2014-01-01
There is a growing body of literature investigating the neuropsychological profile of panic disorder (PD), some of which suggests potential cognitive dysfunction. This paper systematically reviews the existing literature on neuropsychological performance in PD. PsycINFO, EMBASE, MEDLINE and PsycARTICLES databases were searched to identify articles reporting on neuropsychological function in PD published in English during the time period 1980 to March 2012. 14 studies were identified. There was limited support for impairment in short term memory among individuals with PD, although this was not found across all studies. Overall, the reviewed studies did not support the presence of impairment in other areas of cognitive functioning, including executive function, long term memory, visuospatial or perceptual abilities and working memory. Studies with samples of fewer than 15 participants per group were excluded from this review. A limited amount of research has been published on this topic and small sample sizes (under 25 per group) have been used by many studies. Therefore, the current review is based on a small number of studies with limited power. There is limited evidence of specific neuropsychological impairments in participants with PD. Impairments in short term memory warrant further investigation to establish their relevance to clinical practice. Larger sample sizes and appropriate statistical adjustment for multiple comparisons in future studies is highly recommended. Copyright © 2014 Elsevier B.V. All rights reserved.
Lanata, C F; Black, R E
1991-01-01
Traditional survey methods, which are generally costly and time-consuming, usually provide information at the regional or national level only. The utilization of lot quality assurance sampling (LQAS) methodology, developed in industry for quality control, makes it possible to use small sample sizes when conducting surveys in small geographical or population-based areas (lots). This article describes the practical use of LQAS for conducting health surveys to monitor health programmes in developing countries. Following a brief description of the method, the article explains how to build a sample frame and conduct the sampling to apply LQAS under field conditions. A detailed description of the procedure for selecting a sampling unit to monitor the health programme and a sample size is given. The sampling schemes utilizing LQAS applicable to health surveys, such as simple- and double-sampling schemes, are discussed. The interpretation of the survey results and the planning of subsequent rounds of LQAS surveys are also discussed. When describing the applicability of LQAS in health surveys in developing countries, the article considers current limitations for its use by health planners in charge of health programmes, and suggests ways to overcome these limitations through future research. It is hoped that with increasing attention being given to industrial sampling plans in general, and LQAS in particular, their utilization to monitor health programmes will provide health planners in developing countries with powerful techniques to help them achieve their health programme targets.
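A minimal sketch of an LQAS decision rule is shown below: it searches for the smallest sample size and acceptance threshold meeting specified provider and consumer risks under a binomial model. The coverage targets and risk levels are invented, and the hypergeometric refinement for small lots is omitted.

```python
from scipy.stats import binom

def lqas_plan(p_high, p_low, alpha=0.10, beta=0.10, n_max=200):
    """Smallest (n, d) such that a lot is 'accepted' when the number of successes
    is >= d, with P(reject | p_high) <= alpha and P(accept | p_low) <= beta."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            reject_good = binom.cdf(d - 1, n, p_high)    # successes fall below threshold
            accept_bad = binom.sf(d - 1, n, p_low)       # successes reach the threshold
            if reject_good <= alpha and accept_bad <= beta:
                return n, d
    return None

# hypothetical immunization-coverage targets: accept areas near 80 %, flag areas near 50 %
print(lqas_plan(p_high=0.80, p_low=0.50))
```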
NASA Astrophysics Data System (ADS)
Krupka, Jerzy; Aleshkevych, Pavlo; Salski, Bartlomiej; Kopyt, Pawel
2018-02-01
The mode of uniform precession, or Kittel mode, in a magnetized ferromagnetic sphere, has recently been proven to be the magnetic plasmon resonance. In this paper we show how to apply the electrodynamic model of the magnetic plasmon resonance for accurate measurements of the ferromagnetic resonance linewidth ΔH. Two measurement methods are presented. The first one employs Q-factor measurements of the magnetic plasmon resonance coupled to the resonance of an empty metallic cavity. Such coupled modes are known as magnon-polariton modes, i.e. hybridized modes between the collective spin excitation and the cavity excitation. The second one employs direct Q-factor measurements of the magnetic plasmon resonance in a filter setup with two orthogonal semi-loops used for coupling. Q-factor measurements are performed employing a vector network analyser. The methods presented in this paper allow one to extend the measurement range of the ferromagnetic resonance linewidth ΔH well beyond the limits of the commonly used measurement standards in terms of the size of the samples and the lowest measurable linewidths. Samples that can be measured with the newly proposed methods may have larger size as compared to the size of samples that were used in the standard methods restricted by the limits of perturbation theory.
Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo
2015-02-01
Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
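In the two-period case, the unweighted cluster-summary estimator discussed above reduces to a paired analysis of cluster-level risk differences. The sketch below illustrates that idea on invented ICU-style data; it is not the simulation code used in the study.

```python
import numpy as np
from scipy import stats

def unweighted_cluster_summary(events_tx, n_tx, events_ctl, n_ctl):
    """Unweighted cluster-summary analysis of a two-period cluster crossover trial
    with a binary outcome: paired t-test on within-cluster risk differences."""
    d = np.asarray(events_tx) / np.asarray(n_tx) - np.asarray(events_ctl) / np.asarray(n_ctl)
    t, p = stats.ttest_1samp(d, 0.0)
    return d.mean(), t, p

# hypothetical data from 6 ICUs (events / admissions in each period)
effect, t, p = unweighted_cluster_summary(
    events_tx=[30, 41, 25, 55, 38, 29], n_tx=[400, 520, 310, 640, 450, 380],
    events_ctl=[36, 47, 31, 60, 45, 33], n_ctl=[410, 500, 295, 655, 430, 365])
print(round(effect, 4), round(p, 3))
```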
Dark field imaging system for size characterization of magnetic micromarkers
NASA Astrophysics Data System (ADS)
Malec, A.; Haiden, C.; Kokkinis, G.; Keplinger, F.; Giouroudi, I.
2017-05-01
In this paper we demonstrate a dark field video imaging system for the detection and size characterization of individual magnetic micromarkers suspended in liquid and the detection of pathogens utilizing magnetically labelled E. coli. The system follows dynamic processes and interactions of moving micro/nano objects close to or below the optical resolution limit, and is especially suitable for small sample volumes (10 μl). The developed detection method can be used to obtain clinical information about liquid contents when an additional biological protocol is provided, i.e., binding of microorganisms (e.g. E. coli) to specific magnetic markers. Some of the major advantages of our method are the increased sizing precision in the micro- and nano-range as well as the setup's simplicity, making it a perfect candidate for miniaturized devices. Measurements can thus be carried out in a quick, inexpensive, and compact manner. A minor limitation is that the concentration range of micromarkers in a liquid sample needs to be adjusted in such a manner that the number of individual particles in the microscope's field of view is sufficient.
NASA Astrophysics Data System (ADS)
Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie
2016-01-01
It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10⁵) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also in finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.
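The modified maximum-likelihood estimator mentioned above folds upper limits into the fit through cumulative-distribution terms. The sketch below is a toy censored likelihood for a linear mass-size relation with invented data, intended only to illustrate how the censoring enters.

```python
import numpy as np
from scipy import optimize, stats

def neg_log_like(params, log_mass, log_size, is_upper_limit, sigma_err=0.1):
    """Toy censored likelihood for a linear mass-size relation: detected sizes
    contribute a Gaussian term, upper limits contribute a CDF term."""
    a, b, scatter = params
    mu = a + b * log_mass
    s = np.hypot(scatter, sigma_err)               # intrinsic scatter plus measurement error
    det = stats.norm.logpdf(log_size[~is_upper_limit], mu[~is_upper_limit], s)
    lim = stats.norm.logcdf(log_size[is_upper_limit], mu[is_upper_limit], s)
    return -(det.sum() + lim.sum())

rng = np.random.default_rng(0)
m = rng.uniform(9.0, 11.5, 300)                               # invented log stellar masses
r = 0.5 * (m - 10.0) + 0.6 + rng.normal(0.0, 0.15, m.size)    # invented log sizes
ul = rng.random(m.size) < 0.1          # flag ~10% of points as upper limits at their values
res = optimize.minimize(neg_log_like, x0=[0.5, 0.4, 0.2], args=(m, r, ul),
                        method="Nelder-Mead")
print(res.x)
```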
Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.
NASA Astrophysics Data System (ADS)
Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat
2017-04-01
Electron backscatter diffraction (EBSD) has extended significantly our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature and it is important to assess that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Thus extending EBSD analysis to larger sample sizes to include representative microstructures is needed. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system that limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. Having a long optimal working distance for EBSD (around 30mm for the Otago cryo-EBSD facility), by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100mm perpendicular to the tilt axis by 150mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm then they will start to sublime and the quality of EBSD data will reduce. Large samples need to be relatively thin (5mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility samples of up to 40mm by 40mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.
Brownell, Sara E.; Kloser, Matthew J.; Fukami, Tadashi; Shavelson, Richard J.
2013-01-01
The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course. PMID:24358380
Hirose, Makoto; Shimomura, Kei; Suzuki, Akihiro; Burdet, Nicolas; Takahashi, Yukio
2016-05-30
The sample size must be less than the diffraction-limited focal spot size of the incident beam in single-shot coherent X-ray diffraction imaging (CXDI) based on a diffract-before-destruction scheme using X-ray free electron lasers (XFELs). This is currently a major limitation preventing its wider application. We here propose multiple defocused CXDI, in which isolated objects are sequentially illuminated with a divergent beam larger than the objects and the coherent diffraction pattern of each object is recorded. This method can simultaneously reconstruct both the objects and the probe from the coherent X-ray diffraction patterns without any a priori knowledge. We performed a computer simulation of the proposed method and then successfully demonstrated it in a proof-of-principle experiment at SPring-8. The proposed method allows us to not only observe broad samples but also characterize focused XFEL beams.
Mächtle, W
1999-01-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
Jeong, Jee Yeon; Park, Jong Su; Kim, Pan Gyi
2016-06-01
Shipbuilding involves intensive welding activities, and welders are exposed to a variety of metal fumes, including manganese, that may be associated with neurological impairments. This study aimed to characterize total and size-fractionated manganese exposure resulting from welding operations in shipbuilding work areas. In this study, we characterized manganese-containing particulates with an emphasis on total mass (n = 86, closed-face 37-mm cassette samplers) and particle size-selective mass concentrations (n = 86, 8-stage cascade impactor samplers), particle size distributions, and a comparison of exposure levels determined using personal cassette and impactor samplers. Our results suggest that 67.4% of all samples were above the current American Conference of Governmental Industrial Hygienists manganese threshold limit value of 100 μg/m³ as inhalable mass. Furthermore, most of the particles containing manganese in the welding process were of the size of respirable particulates, and 90.7% of all samples exceeded the American Conference of Governmental Industrial Hygienists threshold limit value of 20 μg/m³ for respirable manganese. The concentrations measured with the two sampler types (cassette: total mass; impactor: inhalable mass) were significantly correlated (r = 0.964, p < 0.001), but the total concentration obtained using cassette samplers was lower than the inhalable concentration of impactor samplers.
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
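The out-of-core idea can be illustrated with a minimal sketch that streams a disk-resident hyperspectral cube through memory in slabs instead of loading it whole. The file name, dimensions, and band-interleaved layout below are assumptions for illustration, not SIproc's actual format or API.

```python
# Minimal sketch of out-of-core processing of a hyperspectral cube.
# File name, shape, dtype and BIP layout are illustrative assumptions,
# and the file is assumed to already exist on disk.
import numpy as np

rows, cols, bands = 2048, 2048, 1024          # hypothetical cube dimensions
cube = np.memmap("cube.bip", dtype=np.float32, mode="r",
                 shape=(rows, cols, bands))   # data stay on disk

chunk = 64                                    # rows processed per pass
band_mean = np.zeros((rows, cols), dtype=np.float32)
for r in range(0, rows, chunk):
    block = np.asarray(cube[r:r + chunk])     # only this slab enters RAM
    band_mean[r:r + chunk] = block.mean(axis=2)

np.save("band_mean.npy", band_mean)
```

Streaming slabs this way keeps peak memory bounded by the chunk size regardless of the total size of the cube on disk.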
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
The local environment of ice particles in arctic mixed-phase clouds
NASA Astrophysics Data System (ADS)
Schlenczek, Oliver; Fugal, Jacob P.; Schledewitz, Waldemar; Borrmann, Stephan
2015-04-01
During the RACEPAC field campaign in April and May 2014, research flights were made with the Polar 5 and Polar 6 aircraft from the Alfred Wegener Institute in Arctic clouds near Inuvik, Northwest Territories, Canada. One flight with the Polar 6 aircraft, done on May 16, 2014, flew under precipitating, stratiform, mid-level clouds with several penetrations through cloud base. Measurements with HALOHolo, an airborne digital in-line holographic instrument for cloud particles, show ice particles in a field of other cloud particles in a local three-dimensional sample volume (~14 × 19 × 130 mm³, or ~35 cm³). Each holographic sample volume is a snapshot of a 3-dimensional piece of cloud at the cm-scale with typically thousands of cloud droplets per sample volume, so each sample volume yields a statistically significant droplet size distribution. Holograms are recorded at a rate of six times per second, which provides one volume sample approx. every 12 meters along the flight path. The size resolution limit for cloud droplets is better than 1 µm due to advanced sizing algorithms. Shown are preliminary results of (1) the ice/liquid water partitioning at the cloud base and the distribution of water droplets around each ice particle, and (2) spatial and temporal variability of the cloud droplet size distributions at cloud base.
Laboratory theory and methods for sediment analysis
Guy, Harold P.
1969-01-01
The diverse character of fluvial sediments makes the choice of laboratory analysis somewhat arbitrary and the processing of sediment samples difficult. This report presents some theories and methods used by the Water Resources Division for analysis of fluvial sediments to determine the concentration of suspended-sediment samples and the particle-size distribution of both suspended-sediment and bed-material samples. Other analyses related to these determinations may include particle shape, mineral content, and specific gravity, the organic matter and dissolved solids of samples, and the specific weight of soils. The merits and techniques of both the evaporation and filtration methods for concentration analysis are discussed. Methods used for particle-size analysis of suspended-sediment samples may include the sieve pipet, the VA tube-pipet, or the BW tube-VA tube depending on the equipment available, the concentration and approximate size of sediment in the sample, and the settling medium used. The choice of method for most bed-material samples is usually limited to procedures suitable for sand or to some type of visual analysis for large sizes. Several tested forms are presented to help insure a well-ordered system in the laboratory to handle the samples, to help determine the kind of analysis required for each, to conduct the required processes, and to assist in the required computations. Use of the manual should further 'standardize' methods of fluvial sediment analysis among the many laboratories and thereby help to achieve uniformity and precision of the data.
ERIC Educational Resources Information Center
Liu, David; Wellman, Henry M.; Tardif, Twila; Sabbagh, Mark A.
2008-01-01
Theory of mind is claimed to develop universally among humans across cultures with vastly different folk psychologies. However, in the attempt to test and confirm a claim of universality, individual studies have been limited by small sample sizes, sample specificities, and an overwhelming focus on Anglo-European children. The current meta-analysis…
Mark J. Ducey; Jeffrey H. Gove; Harry T. Valentine
2008-01-01
Perpendicular distance sampling (PDS) is a fast probability-proportional-to-size method for inventory of downed wood. However, previous development of PDS had limited the method to estimating only one variable (such as volume per hectare, or surface area per hectare) at a time. Here, we develop a general design-unbiased estimator for PDS. We then show how that...
Leaching behaviour of bottom ash from RDF high-temperature gasification plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gori, M., E-mail: manuela.gori@dicea.unifi.it; Pifferi, L.; Sirini, P.
2011-07-15
This study investigated the physical properties, the chemical composition and the leaching behaviour of two bottom ash (BA) samples from two different refuse derived fuel high-temperature gasification plants, as a function of particle size. The X-ray diffraction patterns showed that the materials contained large amounts of glass. This aspect was also confirmed by the results of availability and ANC leaching tests. Chemical composition indicated that Fe, Mn, Cu and Cr were the most abundant metals, with a slight enrichment in the finest fractions. Suitability of samples for inert waste landfilling and reuse was evaluated through the leaching test EN 12457-2. In one sample the concentration of all metals was below the limit set by law, while limits were exceeded for Cu, Cr and Ni in the other sample, where the finest fraction was shown to give the main contribution to leaching of Cu and Ni. Preliminary results of physical and geotechnical characterisation indicated the suitability of vitrified BA for reuse in the field of civil engineering. The possible application of a size separation pre-treatment in order to improve the chemical characteristics of the materials was also discussed.
Wider-Opening Dewar Flasks for Cryogenic Storage
NASA Technical Reports Server (NTRS)
Ruemmele, Warren P.; Manry, John; Stafford, Kristin; Bue, Grant; Krejci, John; Evernden, Bent
2010-01-01
Dewar flasks have been proposed as containers for relatively long-term (25 days) storage of perishable scientific samples or other perishable objects at a temperature of -175 °C. The refrigeration would be maintained through slow boiling of liquid nitrogen (LN2). For the purposes of the application for which these containers were proposed, (1) the neck openings of commercial off-the-shelf (COTS) Dewar flasks are too small for most NASA samples; (2) the round shapes of the COTS containers give rise to unacceptably low efficiency of packing in rectangular cargo compartments; and (3) the COTS containers include metal structures that are too thermally conductive, such that they cannot, without exceeding size and weight limits, hold enough LN2 for the required long-term storage. In comparison with COTS Dewar flasks, the proposed containers would be rectangular, yet would satisfy the long-term storage requirement without exceeding size and weight limits; would have larger neck openings; and would have greater sample volumes, leading to a packing efficiency of about double the sample volume as a fraction of total volume. The proposed containers would be made partly of aerospace-type composite materials and would include vacuum walls, multilayer insulation, and aerogel insulation.
Kinematic measurement from panned cinematography.
Gervais, P; Bedingfield, E W; Wronko, C; Kollias, I; Marchiori, G; Kuntz, J; Way, N; Kuiper, D
1989-06-01
Traditional 2-D cinematography has used a stationary camera with its optical axis perpendicular to the plane of motion. This method has constrained the size of the object plane or has introduced potential errors from a small subject image size with large object field widths. The purpose of this study was to assess a panning technique that could overcome the inherent limitations of small object field widths, small object image sizes and limited movement samples. The proposed technique used a series of reference targets in the object field that provided the necessary scales and origin translations. A 102 m object field was panned. Comparisons between criterion distances and film measured distances for field widths of 46 m and 22 m resulted in absolute mean differences that were comparable to that of the traditional method.
Gallegos, Críspulo; Valencia, Concepción; Partal, Pedro; Franco, José M; Maglio, Omay; Abrahamsson, Malin; Brito-de la Fuente, Edmundo
2012-08-01
The droplet size of commercial fish oil-containing injectable lipid emulsions, including conformance to United States Pharmacopeia (USP) standards on fat-globule size, was investigated. A total of 18 batches of three multichamber parenteral products containing the emulsion SMOFlipid as a component were analyzed. Samples from multiple lots of the products were evaluated to determine compliance with the 0.05% limit on the volume-weighted percentage of fat in globules larger than 5 μm (PFAT5) specified in USP chapter 729 to ensure the physical stability of i.v. lipid emulsions. The products were also analyzed to determine the effects of various storage times (3, 6, 9, and 12 months) and storage temperatures (25, 30, and 40 °C) on product stability. Larger-size lipid particles were quantified via single-particle optical sensing (SPOS). The emulsion's droplet-size distribution was determined via laser light scattering. SPOS and light-scattering analysis demonstrated mean PFAT5 values well below USP-specified globule-size limits for all the tested products under all study conditions. In addition, emulsion aging at any storage temperature in the range studied did not result in a significant increase of PFAT5 values, and mean droplet-size values did not change significantly during storage of up to 12 months at temperatures of 25-40 °C. PFAT5 values were below the USP upper limits in SMOFlipid samples from multiple lots of three multichamber products after up to 12 months of storage at 25 or 30 °C or 6 months of storage at 40 °C.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
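As a rough illustration of the spatial ingredient shared by these models, the sketch below simulates detections with a half-normal detection function, in which capture probability decays with distance between an animal's activity centre and a trap at a rate governed by sigma. The parameter values and trap grid are invented and do not reproduce the simulation design of the study.

```python
# Minimal sketch of simulating spatial capture-recapture detections with a
# half-normal detection function; p0, sigma, and the trap grid are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_animals, p0, sigma = 50, 0.2, 1.0
centres = rng.uniform(0, 10, size=(n_animals, 2))            # activity centres
tx, ty = np.meshgrid(np.arange(2, 9, 2.0), np.arange(2, 9, 2.0))
traps = np.column_stack([tx.ravel(), ty.ravel()])            # regular 4x4 grid

d2 = ((centres[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
p = p0 * np.exp(-d2 / (2 * sigma ** 2))                      # detection probability
captures = rng.random(p.shape) < p                           # one sampling occasion
print(captures.sum(), "detections across", traps.shape[0], "traps")
```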
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m²/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m²/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size. The specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased. The minimum specific surface area of these materials was between 0.1 and 0.01 m²/g and was also attributed to internal porosity and surface roughness. (3) samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
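The smooth-sphere baseline referred to above follows from geometry alone, SSA = 6/(ρd). A worked check is shown below; the density of 2.5 g/cm³ is an assumed typical value for silicate glass, not a figure from the study.

```python
# Worked check: specific surface area of a smooth sphere, SSA = 6 / (rho * d).
# The density of 2.5 g/cm^3 is an assumed typical value for silicate glass.
def smooth_sphere_ssa(diameter_um, density_g_cm3=2.5):
    d_cm = diameter_um * 1e-4               # micrometres to centimetres
    ssa_cm2_per_g = 6.0 / (density_g_cm3 * d_cm)
    return ssa_cm2_per_g / 1e4              # convert cm^2/g to m^2/g

print(smooth_sphere_ssa(100))               # ~0.024 m^2/g for a 100 um sphere
```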
De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A
2009-06-01
Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 μm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 μg kg⁻¹) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 μg kg⁻¹). Coefficients of determination (r²) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r² = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r² = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 μg kg⁻¹ DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
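A minimal sketch of the PLS-with-external-validation workflow is given below on synthetic data; the spectra, response values, number of latent variables, and split are stand-ins, not the wheat calibration described above.

```python
# Sketch of a PLS calibration with external validation on synthetic spectra;
# the data here are random stand-ins, not FT-NIR measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))                   # 200 spectra, 500 wavelengths
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)   # synthetic response

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
rmsep = mean_squared_error(y_val, pls.predict(X_val).ravel()) ** 0.5
print(f"RMSEP on the external validation set: {rmsep:.2f}")
```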
Tan, Lingzhao; Fan, Chunyu; Zhang, Chunyu; von Gadow, Klaus; Fan, Xiuhua
2017-12-01
This study aims to establish a relationship between the sampling scale and tree species beta diversity in temperate forests and to identify the underlying causes of beta diversity at different sampling scales. The data were obtained from three large observational study areas in the Changbai mountain region in northeastern China. All trees with a dbh ≥1 cm were stem-mapped and measured. The beta diversity was calculated for four different grain sizes, and the associated variances were partitioned into components explained by environmental and spatial variables to determine the contributions of environmental filtering and dispersal limitation to beta diversity. The results showed that both beta diversity and the causes of beta diversity were dependent on the sampling scale. Beta diversity decreased with increasing scales. The best-explained beta diversity variation was up to about 60%, which was found in the secondary conifer and broad-leaved mixed forest (CBF) study area at the 40 × 40 m scale. The variation partitioning result indicated that environmental filtering showed greater effects at bigger grain sizes, while dispersal limitation was found to be more important at smaller grain sizes. What is more, the results showed an increasing explanatory ability of environmental effects with increasing sampling grains but no clear trend for spatial effects. The study emphasized that the underlying causes of beta diversity variation may be quite different within the same region depending on the sampling scale. Therefore, scale effects should be taken into account in future studies on beta diversity, which is critical in identifying the different relative importance of spatial and environmental drivers of species composition variation.
A field examination of two measures of work motivation as predictors of leaders' influence tactics.
Barbuto, John E; Fritz, Susan M; Marx, David
2002-10-01
The authors tested 2 motivation measures, the Motivation Sources Inventory (MSI; J. E. Barbuto & R. W. Scholl, 1998) and the Job Choice Decision-Making Exercise (A. M. Harrell & M. J. Stahl, 1981) as predictors of leaders' influence tactics. The authors sampled 219 leader-member dyads from a variety of organizations and communities throughout the central United States. Results strongly favored the MSI as a predictor of influence tactics. Limitations of the study include low power of relationships, sample size as limited by the research design, and education levels of participants. Future researchers should use larger and more diverse samples and test other relevant antecedents of leaders' behaviors.
NASA Astrophysics Data System (ADS)
Salerno, K. Michael; Robbins, Mark O.
2013-12-01
Molecular dynamics simulations with varying damping are used to examine the effects of inertia and spatial dimension on sheared disordered solids in the athermal quasistatic limit. In all cases the distribution of avalanche sizes follows a power law over at least three orders of magnitude in dissipated energy or stress drop. Scaling exponents are determined using finite-size scaling for systems with 10³-10⁶ particles. Three distinct universality classes are identified corresponding to overdamped and underdamped limits, as well as a crossover damping that separates the two regimes. For each universality class, the exponent describing the avalanche distributions is the same in two and three dimensions. The spatial extent of plastic deformation is proportional to the energy dissipated in an avalanche. Both rise much more rapidly with system size in the underdamped limit where inertia is important. Inertia also lowers the mean energy of configurations sampled by the system and leads to an excess of large events like that seen in earthquake distributions for individual faults. The distribution of stress values during shear narrows to zero with increasing system size and may provide useful information about the size of elemental events in experimental systems. For overdamped and crossover systems the stress variation scales inversely with the square root of the system size. For underdamped systems the variation is determined by the size of the largest events.
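For a concrete handle on quantifying such distributions, the standard maximum-likelihood estimator for the exponent of a continuous power law above a cutoff is sketched below; this generic (Clauset-style) estimator is not the finite-size-scaling analysis used in the paper.

```python
# Hedged sketch: maximum-likelihood exponent of a continuous power law above
# x_min (Clauset-style estimator), demonstrated on synthetic draws.
import numpy as np

def powerlaw_mle(x, x_min):
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

rng = np.random.default_rng(3)
true_alpha, x_min = 1.5, 1.0
samples = x_min * rng.pareto(true_alpha - 1.0, size=100_000) + x_min
print(powerlaw_mle(samples, x_min))   # should be close to 1.5
```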
Permeability and compressibility of resedimented Gulf of Mexico mudrock
NASA Astrophysics Data System (ADS)
Betts, W. S.; Flemings, P. B.; Schneider, J.
2011-12-01
We use a constant-rate-of-strain consolidation test on resedimented Gulf of Mexico mudrock to determine the compression index (Cc) to be 0.618 and the expansion index (Ce) to be 0.083. We used crushed, homogenized Pliocene and Pleistocene mudrock extracted from cored wells in the Eugene Island block 330 oil field. This powdered material has a liquid limit (LL) of 87, a plastic limit (PL) of 24, and a plasticity index (PI) of 63. The particle size distribution from hydrometer analyses is approximately 65% clay-sized particles (<2 μm), with the remainder being less than 70 microns in diameter. Resedimented specimens have been used to characterize the geotechnical and geophysical behavior of soils and mudstones independent of the variability of natural samples and without the effects of sampling disturbance. Previous investigations of resedimented offshore Gulf of Mexico sediments (e.g. Mazzei, 2008) have been limited in scope. This is the first test of the homogenized Eugene Island core material. These results will be compared to in situ measurements to determine the controls on consolidation over large stress ranges.
Metapopulation models for historical inference.
Wakeley, John
2004-04-01
The genealogical process for a sample from a metapopulation, in which local populations are connected by migration and can undergo extinction and subsequent recolonization, is shown to have a relatively simple structure in the limit as the number of populations in the metapopulation approaches infinity. The result, which is an approximation to the ancestral behaviour of samples from a metapopulation with a large number of populations, is the same as that previously described for other metapopulation models, namely that the genealogical process is closely related to Kingman's unstructured coalescent. The present work considers a more general class of models that includes two kinds of extinction and recolonization, and the possibility that gamete production precedes extinction. In addition, following other recent work, this result for a metapopulation divided into many populations is shown to hold both for finite population sizes and in the usual diffusion limit, which assumes that population sizes are large. Examples illustrate when the usual diffusion limit is appropriate and when it is not. Some shortcomings and extensions of the model are considered, and the relevance of such models to understanding human history is discussed.
The distance between Mars and Venus: measuring global sex differences in personality.
Del Giudice, Marco; Booth, Tom; Irwing, Paul
2012-01-01
Sex differences in personality are believed to be comparatively small. However, research in this area has suffered from significant methodological limitations. We advance a set of guidelines for overcoming those limitations: (a) measure personality with a higher resolution than that afforded by the Big Five; (b) estimate sex differences on latent factors; and (c) assess global sex differences with multivariate effect sizes. We then apply these guidelines to a large, representative adult sample, and obtain what is presently the best estimate of global sex differences in personality. Personality measures were obtained from a large US sample (N = 10,261) with the 16PF Questionnaire. Multigroup latent variable modeling was used to estimate sex differences on individual personality dimensions, which were then aggregated to yield a multivariate effect size (Mahalanobis D). We found a global effect size D = 2.71, corresponding to an overlap of only 10% between the male and female distributions. Even excluding the factor showing the largest univariate ES, the global effect size was D = 1.71 (24% overlap). These are extremely large differences by psychological standards. The idea that there are only minor differences between the personality profiles of males and females should be rejected as based on inadequate methodology.
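The reported overlaps can be checked numerically under the assumption that the two latent profiles are normal with equal covariance and separated by Mahalanobis distance D. The joint-overlap variant OVL2 = OVL/(2 - OVL) of the usual overlapping coefficient OVL = 2Φ(-D/2) reproduces the 10% and 24% figures; whether that is exactly the measure used in the paper is an inference, not something stated in the abstract.

```python
# Worked check of the quoted overlaps, assuming two normal distributions with
# equal covariance separated by Mahalanobis distance D.
from scipy.stats import norm

def overlaps(D):
    ovl = 2 * norm.cdf(-D / 2)    # shared area under the two curves
    ovl2 = ovl / (2 - ovl)        # overlap relative to the joint distribution
    return ovl, ovl2

for D in (2.71, 1.71):
    ovl, ovl2 = overlaps(D)
    print(f"D = {D}: OVL = {ovl:.2f}, OVL2 = {ovl2:.2f}")
# D = 2.71 gives OVL2 of about 0.10; D = 1.71 gives about 0.24
```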
Morard, Raphaël; Garet-Delmas, Marie-José; Mahé, Frédéric; Romac, Sarah; Poulain, Julie; Kucera, Michal; de Vargas, Colomban
2018-02-07
Since the advent of DNA metabarcoding surveys, the planktonic realm is considered a treasure trove of diversity, inhabited by a small number of abundant taxa, and a hugely diverse and taxonomically uncharacterized consortium of rare species. Here we assess if the apparent underestimation of plankton diversity applies universally. We target planktonic foraminifera, a group of protists whose known morphological diversity is limited, taxonomically resolved and linked to ribosomal DNA barcodes. We generated a pyrosequencing dataset of ~100,000 partial 18S rRNA foraminiferal sequences from 32 size-fractionated photic-zone plankton samples collected at 8 stations in the Indian and Atlantic Oceans during the Tara Oceans expedition (2009-2012). We identified 69 genetic types belonging to 41 morphotaxa in our metabarcoding dataset. The diversity saturated at local and regional scale as well as in the three size fractions and the two depths sampled, indicating that the diversity of foraminifera is modest and finite. The large majority of the newly discovered lineages occur in the small size fraction, neglected by classical taxonomy. These unknown lineages dominate the bulk [>0.8 µm] size fraction, implying that a considerable part of the planktonic foraminifera community biomass has its origin in unknown lineages.
NASA Astrophysics Data System (ADS)
Heinze, Karsta; Frank, Xavier; Lullien-Pellerin, Valérie; George, Matthieu; Radjai, Farhang; Delenne, Jean-Yves
2017-06-01
Wheat grains can be considered as a natural cemented granular material. They are milled under high forces to produce food products such as flour. The major part of the grain is the so-called starchy endosperm. It contains stiff starch granules, which show a multi-modal size distribution, and a softer protein matrix that surrounds the granules. Experimental milling studies and numerical simulations are going hand in hand to better understand the fragmentation behavior of this biological material and to improve milling performance. We present a numerical study of the effect of granule size distribution on the strength of such a cemented granular material. Samples of bi-modal starch granule size distribution were created and submitted to uniaxial tension, using a peridynamics method. We show that, when compared to the effects of starch-protein interface adhesion and voids, the granule size distribution has a limited effect on the samples' yield stress.
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
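The three-way decision logic (declare association, declare no association, or keep sampling) can be illustrated with a generic Wald SPRT for a Bernoulli proportion, sketched below; the family-based, TDT-style statistic actually used in the paper is not reproduced here, and p0, p1, alpha, and beta are illustrative.

```python
# Generic sketch of Wald's SPRT for a Bernoulli proportion (H0: p0 vs H1: p1);
# thresholds follow Wald's classical approximations.
import math, random

def sprt(stream, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)      # cross above: accept H1
    lower = math.log(beta / (1 - alpha))      # cross below: accept H0
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "associated", n
        if llr <= lower:
            return "not associated", n
    return "keep sampling", n                 # evidence still inconclusive

random.seed(2)
print(sprt(random.random() < 0.7 for _ in range(500)))
```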
Míguez, Diana M; Huertas, Raquel; Carrara, María V; Carnikián, Agustín; Bouvier, María E; Martínez, María J; Keel, Karen; Pioda, Carolina; Darré, Elena; Pérez, Ramiro; Viera, Santiago; Massa, Enrique
2012-04-01
Bioassays of two sites along the Rio Negro in Uruguay indicate ecotoxicity, which could be attributable to trace concentrations of lead in river sediments. Monthly samples at two sites at the Baygorria and Bonete locations were analyzed for both particle size and lead. Lead was determined by atomic spectrometry in river water and sediment, and particle size by sieving and sedimentation. Data showed that Baygorria's sediments have a greater percentage of clay than Bonete's (20.4 and 5.8%, respectively). Lead was measurable in Baygorria's sediments, whereas in Bonete's it was always below the detection limit. In water samples, lead was below the detection limit at both sites. Bioassays using a sub-lethal growth and survival test with the Hyalella curvispina amphipod, screening with the bioluminescent bacterium Photobacterium leiognathi, and an acute toxicity bioassay with the Pimephales promelas fish indicated toxicity at Baygorria, with much less effect at Bonete. Even though no lethal effects could be demonstrated, higher sub-lethal toxicity was found in samples from the Baygorria site, showing a possible concentration of the contaminant in the clay fraction.
Draut, Amy; Rubin, David M.
2013-01-01
Flood-deposited sediment has been used to decipher environmental parameters such as variability in watershed sediment supply, paleoflood hydrology, and channel morphology. It is not well known, however, how accurately the deposits reflect sedimentary processes within the flow, and hence what sampling intensity is needed to decipher records of recent or long-past conditions. We examine these problems using deposits from dam-regulated floods in the Colorado River corridor through Marble Canyon–Grand Canyon, Arizona, U.S.A., in which steady-peaked floods represent a simple end-member case. For these simple floods, most deposits show inverse grading that reflects coarsening suspended sediment (a result of fine-sediment-supply limitation), but there is enough eddy-scale variability that some profiles show normal grading that did not reflect grain-size evolution in the flow as a whole. To infer systemwide grain-size evolution in modern or ancient depositional systems requires sampling enough deposit profiles that the standard error of the mean of grain-size-change measurements becomes small relative to the magnitude of observed changes. For simple, steady-peaked floods, 5–10 profiles or fewer may suffice to characterize grain-size trends robustly, but many more samples may be needed from deposits with greater variability in their grain-size evolution.
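The sampling criterion described above reduces to comparing the standard error of the mean, s/√n, with the magnitude of the grain-size change to be resolved. A worked example with invented numbers:

```python
# Illustrative check of the sampling criterion: how many deposit profiles make
# the standard error of the mean grain-size change small relative to the
# signal? The 15 um mean change and 20 um spread are invented values.
import math

mean_change_um = 15.0        # hypothetical mean upward coarsening per profile
sd_between_profiles = 20.0   # hypothetical profile-to-profile variability
for n in (3, 5, 10, 30):
    se = sd_between_profiles / math.sqrt(n)
    print(f"n = {n:2d}: SE = {se:4.1f} um ({se / mean_change_um:.0%} of the signal)")
```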
Urey, Carlos; Weiss, Victor U; Gondikas, Andreas; von der Kammer, Frank; Hofmann, Thilo; Marchetti-Deschmann, Martina; Allmaier, Günter; Marko-Varga, György; Andersson, Roland
2016-11-20
For drug delivery, characterization of liposomes regarding size, particle number concentrations, occurrence of low-sized liposome artefacts and drug encapsulation are of importance to understand their pharmacodynamic properties. In our study, we aimed to demonstrate the applicability of nano Electrospray Gas-Phase Electrophoretic Mobility Molecular Analyser (nES GEMMA) as a suitable technique for analyzing these parameters. We measured number-based particle concentrations, identified differences in size between nominally identical liposomal samples, and detected the presence of low-diameter material which yielded bimodal particle size distributions. Subsequently, we compared these findings to dynamic light scattering (DLS) data and results from light scattering experiments coupled to Asymmetric Flow-Field Flow Fractionation (AF4), the latter improving the detectability of smaller particles in polydisperse samples due to a size separation step prior detection. However, the bimodal size distribution could not be detected due to method inherent limitations. In contrast, cryo transmission electron microscopy corroborated nES GEMMA results. Hence, gas-phase electrophoresis proved to be a versatile tool for liposome characterization as it could analyze both vesicle size and size distribution. Finally, a correlation of nES GEMMA results with cell viability experiments was carried out to demonstrate the importance of liposome batch-to-batch control as low-sized sample components possibly impact cell viability. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Drew, L.J.; Attanasi, E.D.; Schuenemeyer, J.H.
1988-01-01
If observed oil and gas field size distributions are obtained by random sampling, the fitted distributions should approximate that of the parent population of oil and gas fields. However, empirical evidence strongly suggests that larger fields tend to be discovered earlier in the discovery process than they would be by random sampling. Economic factors also can limit the number of small fields that are developed and reported. This paper examines observed size distributions in state and federal waters of offshore Texas. Results of the analysis demonstrate how the shape of the observable size distributions change with significant hydrocarbon price changes. Comparison of state and federal observed size distributions in the offshore area shows how production cost differences also affect the shape of the observed size distribution. Methods for modifying the discovery rate estimation procedures when economic factors significantly affect the discovery sequence are presented. A primary conclusion of the analysis is that, because hydrocarbon price changes can significantly affect the observed discovery size distribution, one should not be confident about inferring the form and specific parameters of the parent field size distribution from the observed distributions. © 1988 International Association for Mathematical Geology.
Xu, Ning; Chamberlin, Rebecca M.; Thompson, Pam; ...
2017-10-07
This study has demonstrated that bulk plutonium chemical analysis can be performed at small scales (<50 mg of material) through three case studies. Analytical methods were developed for ICP-OES and ICP-MS instruments to measure trace impurities and gallium content in plutonium metals with comparable or improved detection limits, measurement accuracy and precision. In two case studies, the sample size has been reduced by a factor of 10, and in the third case study, by as much as a factor of 5000, so that the plutonium chemical analysis can be performed in a facility rated for lower-hazard and lower-security operations.
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
Tykot, Robert H
2016-01-01
Elemental analysis is a fundamental method of analysis on archaeological materials to address their overall composition or identify the source of their geological components, yet having access to instrumentation, its often destructive nature, and the time and cost of analyses have limited the number and/or size of archaeological artifacts tested. The development of portable X-ray fluorescence (pXRF) instruments over the past decade, however, has allowed nondestructive analyses to be conducted in museums around the world, on virtually any size artifact, producing data for up to several hundred samples per day. Major issues have been raised, however, about the sensitivity, precision, and accuracy of these devices, and the limitation of performing surface analysis on potentially heterogeneous objects. The advantages and limitations of pXRF are discussed here regarding archaeological studies of obsidian, ceramics, metals, bone, and painted materials. © The Author(s) 2015.
Comparative analyses of basal rate of metabolism in mammals: data selection does matter.
Genoud, Michel; Isler, Karin; Martin, Robert D
2018-02-01
Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades (Mammalia, Eutheria, Metatheria), although less-reliable estimates of BMR were generally about 12-20% larger than more-reliable ones. Larger effects were found with more-limited clades, such as sciuromorph rodents. For the relationship between BMR and brain mass the results of comparative analyses were found to depend strongly on the data set used, especially with more-limited, order-level clades. In fact, with small sample sizes (e.g. <100) results often appeared erratic. Subsampling revealed that sample size has a non-linear effect on the probability of a zero slope for a given relationship. Depending on the species included, results could differ dramatically, especially with small sample sizes. Overall, our findings indicate a need for due diligence when selecting BMR estimates and caution regarding results (even if seemingly significant) with small sample sizes. © 2017 Cambridge Philosophical Society.
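A much-simplified stand-in for the subsampling procedure is sketched below: random subsamples of increasing size are drawn from a synthetic allometric data set with a deliberately weak slope, and the fraction of subsamples in which the slope fails to reach significance is recorded. Ordinary least squares replaces PGLS here, and all values are invented.

```python
# Simplified, non-phylogenetic stand-in for the subsampling idea: fit
# log(BMR) ~ log(mass) on random subsamples of a synthetic data set with a
# deliberately weak slope, and count how often the slope is not significant.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
log_mass = rng.uniform(1, 6, size=800)                       # synthetic predictor
log_bmr = 0.1 * log_mass + rng.normal(scale=0.6, size=800)   # weak relationship

for n in (20, 50, 100, 400):
    misses = 0
    for _ in range(200):
        idx = rng.choice(800, size=n, replace=False)
        if linregress(log_mass[idx], log_bmr[idx]).pvalue >= 0.05:
            misses += 1
    print(f"n = {n:3d}: slope non-significant in {misses / 200:.0%} of subsamples")
```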
PRIMUS/NAVCARE Cost-Effectiveness Analysis
1991-04-08
ICD-9-CM diagnosis codes that occurred most frequently in the medical record sample - 328.9 (otitis media, unspecified) and 465.9 (upper... when attention is focused upon a single diagnosis, the MTF CECs are no longer consistently above the PRIMUS CECs. For otitis media, the MTF CECs are... [Table fragment: CHAMPUS-equivalent costs for selected diagnoses - 328.9, otitis media, unspecified - columns: Sample Size, Mean, 95% Confidence Interval Upper Limit, Lower ...]
A. Broido; Hsiukang Yow
1977-01-01
Even before weight loss in the low-temperature pyrolysis of cellulose becomes significant, the average degree of polymerization of the partially pyrolyzed samples drops sharply. The gel permeation chromatograms of nitrated derivatives of the samples can be described in terms of a small number of mixed size populations, each component fitted within reasonable limits by a...
Contaminant bioaccumulation studies often rely on fish muscle filets as the tissue of choice for the measurement of nitrogen stable isotope ratios (δ15N) and mercury (Hg). Lethal sampling techniques may not be suitable for studies on limited populations from smaller sized aquati...
The use of coliform plate count data to assess stream sanitary and ecological condition is limited by the need to store samples at 4 °C and analyze them within a 24-hour period. We are testing LH-PCR as an alternative tool to assess the bacterial load of streams, offering a cost ...
Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M
2017-08-01
Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., <1 µm, a particle size range associated with enhanced absorption of associated lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues associated with applying such data to set occupational exposure limits for lead.
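One common way to summarize cascade-impactor data as an MMAD and geometric standard deviation (GSD) is to fit a lognormal to the cumulative mass distribution in probit versus log-diameter coordinates. The sketch below uses invented stage cut-points and cumulative fractions; it is not the BCI data set, and the fitting details may differ from those used in the study.

```python
# Hedged sketch: MMAD and GSD from cascade-impactor data by fitting a lognormal
# to the cumulative mass distribution (probit vs. log diameter).
# Stage cut-points and cumulative fractions below are invented for illustration.
import numpy as np
from scipy.stats import norm, linregress

cutoff_um = np.array([0.5, 1.0, 2.0, 3.5, 6.0, 10.0, 15.0, 21.0])
frac_below = np.array([0.002, 0.01, 0.04, 0.09, 0.18, 0.31, 0.42, 0.52])

z = norm.ppf(frac_below)                    # probit of cumulative fraction
fit = linregress(np.log(cutoff_um), z)      # z = (ln d - mu) / sigma
sigma = 1.0 / fit.slope
mmad = np.exp(-fit.intercept / fit.slope)   # exp(mu)
print(f"MMAD ~ {mmad:.1f} um, GSD ~ {np.exp(sigma):.2f}")
```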
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
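A minimal transfer-function (frequency-domain) Fresnel propagator of the kind analyzed in the article is sketched below. The grid spacing is set near the commonly quoted critical-sampling condition dx² ≈ λz/N; the wavelength, distance, and aperture are illustrative, and the constant phase factor exp(jkz) is dropped.

```python
# Minimal transfer-function Fresnel propagator (one common FFT-based approach);
# grid, wavelength, distance, and aperture are illustrative choices.
import numpy as np

def fresnel_tf(u1, dx, wavelength, z):
    """Fresnel propagation by the transfer-function (frequency-domain) method."""
    n = u1.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # FFT-ordered spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # exp(jkz) dropped
    return np.fft.ifft2(np.fft.fft2(u1) * H)

n, wavelength, z = 512, 0.5e-6, 1.0              # 512x512 grid, 0.5 um light, 1 m
dx = np.sqrt(wavelength * z / n)                 # ~critical sampling, ~31 um
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u1 = ((np.abs(X) <= 1e-3) & (np.abs(Y) <= 1e-3)).astype(complex)  # 2 mm square aperture
u2 = fresnel_tf(u1, dx, wavelength, z)
print("on-axis intensity after 1 m:", abs(u2[n // 2, n // 2]) ** 2)
```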
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Quantitative characterisation of sedimentary grains
NASA Astrophysics Data System (ADS)
Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.
2016-04-01
Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to cases of high definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland, were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
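As a rough illustration of how such shape measures can be computed from a traced grain boundary, the sketch below evaluates a common circularity index and size-normalized Fourier descriptors; the exact parameter definitions of the Mathematica package are not reproduced here.

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A/P**2: equals 1 for a circle and decreases for irregular grains."""
    return 4.0 * np.pi * area / perimeter**2

def fourier_descriptors(x, y, n_harmonics=10):
    """Size-normalized Fourier descriptors of a closed grain outline.

    x, y: ordered boundary coordinates. The centroid is removed and the
    harmonic amplitudes are divided by the first harmonic, so the result
    depends on shape only; higher harmonics reflect angularity/roughness.
    """
    z = x + 1j * y
    coeffs = np.fft.fft(z - z.mean())
    amps = np.abs(coeffs[1:n_harmonics + 1])
    return amps / amps[0]
```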
Baldissera, Sandro; Ferrante, Gianluigi; Quarchioni, Elisa; Minardi, Valentina; Possenti, Valentina; Carrozzi, Giuliano; Masocco, Maria; Salmaso, Stefania
2014-04-01
Field substitution of nonrespondents can be used to maintain the planned sample size and structure in surveys but may introduce additional bias. Sample weighting is suggested as the preferable alternative; however, limited empirical evidence exists comparing the two methods. We wanted to assess the impact of substitution on surveillance results using data from Progressi delle Aziende Sanitarie per la Salute in Italia-Progress by Local Health Units towards a Healthier Italy (PASSI). PASSI is conducted by Local Health Units (LHUs) through telephone interviews of stratified random samples of residents. Nonrespondents are replaced with substitutes randomly preselected in the same LHU stratum. We compared the weighted estimates obtained in the original PASSI sample (used as a reference) and in the substitutes' sample. The differences were evaluated using a Wald test. In 2011, 50,697 units were selected: 37,252 were from the original sample and 13,445 were substitutes; 37,162 persons were interviewed. The initially planned size and demographic composition were restored. No significant differences in the estimates between the original and the substitutes' sample were found. In our experience, field substitution is an acceptable method for dealing with nonresponse, maintaining the characteristics of the original sample without affecting the results. This evidence can support appropriate decisions about planning and implementing a surveillance system. Copyright © 2014 Elsevier Inc. All rights reserved.
Monge, Susana; Ronda, Elena; Pons-Vigués, Mariona; Vives Cases, Carmen; Malmusi, Davide; Gil-González, Diana
2015-01-01
Our objective was to describe the methodological limitations and recommendations identified by authors of original articles on immigration and health in Spain. A literature review was conducted of original articles published in Spanish or English between 1998 and 2012 combining keywords on immigration and health. A total of 311 articles were included; of these, 176 (56.6%) mentioned limitations, and 15 (4.8%) made recommendations. The most frequently mentioned limitations included the following: reduced sample sizes; internal validity and sample representativeness issues, with under- or overrepresentation of specific groups; problems of validity of the collected information and missing data mostly related to measurement tools; and absence of key variables for adjustment or stratification. Based on these results, a series of recommendations are proposed to minimise common limitations and advance the quality of scientific production on immigration and health in our setting. Copyright © 2015 SESPAS. Published by Elsevier Espana. All rights reserved.
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ≈50 mg SOC/cm² or 5 Mg SOC/ha, which is ≈10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ≈2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
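The MDD referred to above follows a standard two-sample form; a sketch of that calculation (assuming the usual Zar-style formulation, not necessarily the exact one used in the report) is:

```python
import numpy as np
from scipy import stats

def minimum_detectable_difference(mse, n, df_error, alpha=0.05, power=0.80):
    """Smallest difference between two treatment means detectable by ANOVA.

    mse: error mean square, n: replicates per treatment, df_error: error
    degrees of freedom. MDD = (t_alpha + t_beta) * sqrt(2*MSE/n).
    """
    t_alpha = stats.t.ppf(1 - alpha / 2, df_error)   # two-sided test
    t_beta = stats.t.ppf(power, df_error)            # one-sided power term
    return (t_alpha + t_beta) * np.sqrt(2.0 * mse / n)
```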
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
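A minimal sketch of the joint two-parameter maximum-likelihood fit for a tapered Pareto distribution, assuming the usual Kagan-style survival function S(x) = (t/x)^β exp((t−x)/θ) above a threshold t; this is illustrative, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, t):
    """Negative log-likelihood of a tapered Pareto distribution.

    beta is the power-law exponent, theta the corner size and t the
    measurement threshold; the density follows from the survival function
    S(x) = (t/x)**beta * exp((t - x)/theta), x >= t.
    """
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    dens = (beta / x + 1.0 / theta) * (t / x)**beta * np.exp((t - x) / theta)
    return -np.sum(np.log(dens))

def fit_tapered_pareto(sizes, threshold):
    sizes = np.asarray(sizes, dtype=float)
    start = np.array([1.0, sizes.max()])        # crude starting values
    res = minimize(neg_log_lik, start, args=(sizes, threshold),
                   method="Nelder-Mead")
    return res.x                                # (beta_hat, theta_hat)
```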
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellari, Michele
2013-11-20
The distribution of galaxies on the mass-size plane as a function of redshift or environment is a powerful test for galaxy formation models. Here we use integral-field stellar kinematics to interpret the variation of the mass-size distribution in two galaxy samples spanning extreme environmental densities. The samples are both identically and nearly mass-selected (stellar mass M* ≳ 6 × 10⁹ M☉) and volume-limited. The first consists of nearby field galaxies from the ATLAS3D parent sample. The second consists of galaxies in the Coma Cluster (Abell 1656), one of the densest environments for which good, resolved spectroscopy can be obtained. The mass-size distribution in the dense environment differs from the field one in two ways: (1) spiral galaxies are replaced by bulge-dominated disk-like fast-rotator early-type galaxies (ETGs), which follow the same mass-size relation and have the same mass distribution as in the field sample; (2) the slow-rotator ETGs are segregated in mass from the fast rotators, with their size increasing proportionally to their mass. A transition between the two processes appears around the stellar mass Mcrit ≈ 2 × 10¹¹ M☉. We interpret this as evidence for bulge growth (outside-in evolution) and bulge-related environmental quenching dominating at low masses, with little influence from merging. In contrast, significant dry mergers (inside-out evolution) and halo-related quenching drive the mass and size growth at the high-mass end. The existence of these two processes naturally explains the diverse size evolution of galaxies of different masses and the separability of mass and environmental quenching.
NASA Astrophysics Data System (ADS)
Lahiri, B. B.; Ranoo, Surojit; Muthukumaran, T.; Philip, John
2018-04-01
The effects of initial susceptibility and size polydispersity on magnetic hyperthermia efficiency in two water based ferrofluids containing phosphate and TMAOH coated superparamagnetic Fe3O4 nanoparticles were studied. Experiments were performed at a fixed frequency of 126 kHz on four different concentrations of both samples and under different external field amplitudes. It was observed that for field amplitudes beyond 45.0 kAm-1, the maximum temperature rise was in the vicinity of 42°C (hyperthermia limit) which indicated the suitability of the water based ferrofluids for hyperthermia applications. The maximum temperature rise and specific absorption rate were found to vary linearly with square of the applied field amplitudes, in accordance with theoretical predictions. It was further observed that for a fixed sample concentration, specific absorption rate was higher for the phosphate coated samples which was attributed to the higher initial static susceptibility and lower size polydispersity of phosphate coated Fe3O4.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders
2017-10-01
Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies and can guide users with respect to the sample sizes required for different sampling designs in long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
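The kind of simulation described above can be sketched as follows; the lognormal error model, coefficient of variation and starting concentration are placeholders, not values from the study:

```python
import numpy as np
from scipy import stats

def trend_power(n_per_year, years=10, annual_change=0.05, cv=0.5,
                alpha=0.05, n_sim=2000, seed=0):
    """Monte Carlo power to detect a 5% annual decline by log-linear regression.

    n_per_year: analysed samples per year; cv: assumed between-sample
    coefficient of variation on the original scale.
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cv**2))        # lognormal sd on the log scale
    hits = 0
    for _ in range(n_sim):
        t = np.repeat(np.arange(years), n_per_year)
        y = np.log(100.0) + t * np.log(1.0 - annual_change) \
            + rng.normal(0.0, sigma, size=t.size)
        slope, _, _, p_value, _ = stats.linregress(t, y)
        hits += (p_value < alpha) and (slope < 0)
    return hits / n_sim

# increase n_per_year until the returned power reaches 0.80 or 0.90
```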
NASA Technical Reports Server (NTRS)
Tueller, P. T.
1977-01-01
Large-scale 70 mm aerial photography is a valuable supplementary tool for rangeland studies. A wide assortment of applications was developed, ranging from vegetation mapping to assessing environmental impact on rangelands. Color and color infrared stereo pairs are useful for effectively sampling sites limited by ground accessibility. They allow an increased sample size at similar or lower cost than ground sampling techniques and provide a permanent record.
Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.
Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A
2012-05-15
The fruit fly (Drosophila melanogaster) is an extensively used and powerful, genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit-flies for chemical analysis. The technique results in an ability to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is the means for two-point monitoring of hemolymph composition for individual flies.
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.
2017-06-01
Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber optics elements. The technique is based on acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. Then, the captured data is processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of the refractive index distribution. In our recent works we addressed the issue of spatially variant accuracy of the HT reconstructions, which results from the insufficient model of diffraction that is applied in the widely used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we are now focusing on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in a decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the out-of-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: the configuration with (1) object rotation, (2) scanning of the illumination direction, and (3) the hybrid HT solution combining both previous approaches.
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether the power of the effect size was based on adequate sample size in randomized controlled trials (RCTs) for the treatment of patients with type 2 diabetes mellitus (T2DM) using Chinese medicine. The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms like "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. The search was limited to trials with an intervention course of at least 3 months in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 in each trial. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36% of trials, respectively, with a sample size > 150. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample sizes used for statistical analysis were distressingly low and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.
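A sketch of the kind of post-hoc power calculation implied above, for a two-arm comparison of means under a normal approximation; the HbA1c difference and standard deviation used in the example are illustrative assumptions, not values from the review:

```python
import numpy as np
from scipy import stats

def two_sample_power(n_per_arm, delta, sd, alpha=0.05):
    """Power of a two-arm trial to detect a mean difference delta.

    sd: common standard deviation; normal approximation to the t-test.
    """
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z = delta / (sd * np.sqrt(2.0 / n_per_arm)) - z_alpha
    return stats.norm.cdf(z)

# e.g. 75 patients per arm, a 0.5% HbA1c difference, SD 1.5%
print(round(two_sample_power(75, 0.5, 1.5), 2))
```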
NASA Astrophysics Data System (ADS)
Guzmán, G.; Gómez, J. A.; Giráldez, J. V.
2010-05-01
Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, when the sample is small. A possible alternative to these methods is optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty in replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm, measured with a Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2, in five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module filled with running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement was made as a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating their own optical model, fitting the optical parameters that depend mainly on the color and shape of the analyzed particles. As a second alternative, a single optical model valid for a broad range of soils, developed by the Department of Soil, Water, and Environmental Science of the University of Arizona (personal communication, already submitted), was tested. The results were compared with the particle size distribution measured in the same soils and aggregate classes using the hydrometer method. Preliminary results indicate a better calibration of the technique using the optical model of the Department of Soil, Water, and Environmental Science of the University of Arizona, which yielded good correlations (r² > 0.85). This result suggests that, with an appropriate calibration of the optical model, laser diffractometry might provide a reliable soil particle characterization.
Spacewatch Survey for Asteroids and Comets
2005-11-01
radar images. Relationship of Spacewatch to the WISE spacecraft mission: E. L. Wright of the UCLA Astronomy Dept. is the PI of the Wide-field Infrared ... Survey Explorer (WISE) MIDEX spacecraft mission. WISE will map the whole sky at thermal infrared wavelengths with 500 times more sensitivity than the ... elongations. WISE's detections in the thermal infrared will also provide a size-limited sample of asteroids instead of the brightness-limited surveys
Laboratory analyses of micron-sized solid grains: Experimental techniques and recent results
NASA Technical Reports Server (NTRS)
Colangeli, L.; Bussoletti, E.; Blanco, A.; Borghesi, A.; Fonti, S.; Orofino, V.; Schwehm, G.
1989-01-01
Morphological and spectrophotometric investigations have been extensively applied in past years to various kinds of micron- and/or submicron-sized grains formed from materials which are candidates to be present in space. The samples are produced in the laboratory and then characterized in their physico-chemical properties. Some of the most recent results obtained on various kinds of carbonaceous materials are reported. Main attention is devoted to spectroscopic results in the VUV and IR wavelength ranges, where many of the analyzed samples show typical fingerprints which can also be identified in astrophysical and cometary materials. The laboratory methodologies used so far are also critically discussed in order to point out capabilities and present limitations, in view of possible application to returned comet samples. Suggestions are given to develop new techniques which should overcome some of the problems faced in the manipulation and analysis of micron-sized solid samples.
A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts
NASA Astrophysics Data System (ADS)
Huang, S.; Mohanty, B.; Xia, K.
2017-12-01
Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited to between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with the resulting fragment size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.
“Magnitude-based Inference”: A Statistical Review
Welsh, Alan H.; Knight, Emma J.
2015-01-01
Purpose: We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods: We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions: We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
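For reference, the standard frequentist per-group sample size for comparing two means, against which the review judges the roughly 30% figure, can be sketched as follows; the effect size and standard deviation in the example are arbitrary:

```python
import math
from scipy import stats

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for detecting a mean difference delta."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2.0 * ((z_a + z_b) * sd / delta) ** 2)

print(n_per_group(delta=0.5, sd=1.0))   # about 63 per group
```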
Bērziņš, Agris; Actiņš, Andris
2014-06-01
The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to more accurately interpret the data obtained from such analysis. It was determined that comparable kinetic parameters can be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode showed variation with conversion degree because the rate-limiting step energy differs at higher temperature. Moreover, carrying out experiments in this mode required consideration of additional experimental complications. Our study of the effects of the different sample and experimental factors revealed information about changes in the dehydration rate-limiting step energy and variable contributions from different rate-limiting steps, and clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters are offered. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method, and it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized, although, with consequence, such consideration is often an afterthought that occurs during the data analysis process.
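A minimal sketch of the binomial-Poisson N-mixture likelihood that such point-count models are built on (shared abundance mean and detection probability across sites, truncated latent sum); this is the general model class, not the simulation code of the study:

```python
import numpy as np
from scipy import optimize, stats

def nmixture_negloglik(params, counts, K=200):
    """Negative log-likelihood of a basic binomial-Poisson N-mixture model.

    counts: sites x visits array of repeated counts. params holds
    log(lambda) and logit(p); K truncates the latent abundance sum.
    """
    lam = np.exp(params[0])                      # mean abundance per site
    p = 1.0 / (1.0 + np.exp(-params[1]))         # detection probability
    N = np.arange(K + 1)
    prior = stats.poisson.pmf(N, lam)            # P(N) for each site
    loglik = 0.0
    for site_counts in counts:                   # sites independent
        like_N = prior.copy()
        for y in site_counts:                    # visits independent given N
            like_N *= stats.binom.pmf(y, N, p)
        loglik += np.log(like_N.sum())
    return -loglik

# fit = optimize.minimize(nmixture_negloglik, x0=[np.log(5.0), 0.0],
#                         args=(counts,), method="Nelder-Mead")
```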
Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.
2015-01-01
The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel
2017-01-01
Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...
NASA Astrophysics Data System (ADS)
Fernández-Ruiz, Ramón; Friedrich K., E. Josue; Redrejo, M. J.
2018-02-01
The main goal of this work was to investigate, in a systematic way, the influence of the controlled modulation of the particle size distribution of a representative solid sample on the most relevant analytical parameters of the Direct Solid Analysis (DSA) by Total-reflection X-Ray Fluorescence (TXRF) quantitative method. In particular, accuracy, uncertainty, linearity and detection limits were correlated with the main parameters of the size distributions for the following elements: Al, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb. In all cases strong correlations were found. The main conclusion of this work can be summarized as follows: modulating the particle shape toward lower average sizes, together with minimizing the width of the particle size distributions, produces a strong increase in accuracy and a reduction of uncertainties and detection limits for the DSA-TXRF methodology. These achievements allow the future use of the DSA-TXRF analytical methodology for the development of ISO norms and standardized protocols for the direct analysis of solids by means of TXRF.
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
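A heavily simplified sketch of the core idea of scoring a test sample against a PCA model of normative data in one subspace; the variance-explained cutoff used here is a stand-in for the paper's sample-size-aware "estimability" criterion:

```python
import numpy as np

def pca_mahalanobis_score(train, test, var_explained=0.95):
    """Deviation-from-normality score of one test sample in a feature subspace.

    train: normative samples x features matrix; test: one feature vector.
    A PCA model is fitted on the normative set, enough components are kept
    to explain var_explained of the variance, and the Mahalanobis-like
    distance of the test sample in that subspace is returned.
    """
    mu = train.mean(axis=0)
    Xc = train - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S**2 / (train.shape[0] - 1)            # component variances
    cum = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(cum, var_explained)) + 1
    proj = (test - mu) @ Vt[:k].T                # project onto retained components
    return float(np.sqrt(np.sum(proj**2 / var[:k])))
```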
Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul
2018-04-15
Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to an ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely in different environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosols and DOC isolate samples, (ii) species of the 400-600 Da range were abundant (74-100%) in two surface waters, and (iii) species of the 150-350 Da range were abundant in wastewater effluents. SEC-DOC of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in two surface waters, and for >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time-integrated results obtained using the conventional ultrafiltration method. Results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method for identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modified dough preparation for Alveograph analysis with limited flour sample size
USDA-ARS?s Scientific Manuscript database
Dough rheological characteristics, such as resistance-to-extension and extensibility, obtained by alveograph testing are important traits for determination of wheat and flour quality. A challenging issue that faces wheat breeding programs and some wheat-research projects is the relatively large flou...
Estimating Mutual Information for High-to-Low Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaud, Isaac James; Williams, Brian J.; Weaver, Brian Phillip
Presentation shows that KSG 2 is superior to KSG 1 because it scales locally automatically; KSG estimators are limited to a maximum MI due to sample size; LNC extends the capability of KSG without onerous assumptions; iLNC allows LNC to estimate information gain.
Effect of immunomagnetic bead size on recovery of foodborne pathogenic bacteria
USDA-ARS?s Scientific Manuscript database
Long culture enrichment is currently a speed-limiting step in both traditional and rapid detection techniques for foodborne pathogens. Immunomagnetic separation (IMS) as a culture-free enrichment sample preparation technique has gained increasing popularity in the development of rapid detection met...
Waks, Zeev; Weissbrod, Omer; Carmeli, Boaz; Norel, Raquel; Utro, Filippo; Goldschmidt, Yaara
2016-12-23
Compiling a comprehensive list of cancer driver genes is imperative for oncology diagnostics and drug development. While driver genes are typically discovered by analysis of tumor genomes, infrequently mutated driver genes often evade detection due to limited sample sizes. Here, we address sample size limitations by integrating tumor genomics data with a wide spectrum of gene-specific properties to search for rare drivers, functionally classify them, and detect features characteristic of driver genes. We show that our approach, CAnceR geNe similarity-based Annotator and Finder (CARNAF), enables detection of potentially novel drivers that eluded over a dozen pan-cancer/multi-tumor type studies. In particular, feature analysis reveals a highly concentrated pool of known and putative tumor suppressors among the <1% of genes that encode very large, chromatin-regulating proteins. Thus, our study highlights the need for deeper characterization of very large, epigenetic regulators in the context of cancer causality.
Cox, Alison D; Dube, Charmayne; Temple, Beverley
2015-03-01
Many individuals with intellectual disability engage in challenging behaviour. This can significantly limit quality of life and also negatively impact caregivers (e.g., direct care staff, family caregivers and teachers). Fortunately, efficacious staff training may alleviate some negative side effects of client challenging behaviour. Currently, a systematic review of studies evaluating whether staff training influences client challenging behaviour has not been conducted. The purpose of this article was to identify emerging patterns, knowledge gaps and make recommendations for future research on this topic. The literature search resulted in a total of 19 studies that met our inclusion criteria. Articles were separated into four staff training categories. Studies varied across sample size, support staff involved in training, study design, training duration and data collection strategy. A small sample size (n = 19) and few replication studies, alongside several other procedural limitations prohibited the identification of a best practice training approach. © The Author(s) 2014.
Dunbar, R I M; MacCarron, Padraig; Robertson, Cole
2018-03-01
Group-living offers both benefits (protection against predators, access to resources) and costs (increased ecological competition, the impact of group size on fertility). Here, we use cluster analysis to detect natural patternings in a comprehensive sample of baboon groups, and identify a geometric sequence with peaks at approximately 20, 40, 80 and 160. We suggest (i) that these form a set of demographic oscillators that set habitat-specific limits to group size and (ii) that the oscillator arises from a trade-off between female fertility and predation risk. © 2018 The Authors.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
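For orientation, the SAM-style relative-difference statistic that such filter methods modify can be sketched as follows; the fudge factor here is simply the median standard error, not the percentile search of the original SAM, and the weighting schemes proposed in the paper are not reproduced:

```python
import numpy as np

def sam_statistic(x, y, s0=None):
    """SAM-style statistic d_i = (mean_x - mean_y) / (s_i + s0) per gene.

    x, y: genes x replicates expression matrices for the two conditions.
    s0 damps genes with tiny variance so they do not dominate the ranking.
    """
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled = ((x.var(axis=1, ddof=1) * (nx - 1) + y.var(axis=1, ddof=1) * (ny - 1))
              / (nx + ny - 2))
    s = np.sqrt(pooled * (1.0 / nx + 1.0 / ny))   # gene-wise standard error
    if s0 is None:
        s0 = np.median(s)                         # crude stand-in fudge factor
    return diff / (s + s0)
```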
A planar near-field scanning technique for bistatic radar cross section measurements
NASA Technical Reports Server (NTRS)
Tuhela-Reuning, S.; Walton, E. K.
1990-01-01
A progress report on the development of a bistatic radar cross section (RCS) measurement range is presented. A technique using one parabolic reflector and a planar scanning probe antenna is analyzed. The field pattern in the test zone is computed using a spatial array of signal sources. It achieved an illumination pattern with 1 dB amplitude and 15 degree phase ripple over the target zone. The required scan plane size is found to be proportional to the size of the desired test target. Scan plane probe sample spacing can be increased beyond the Nyquist lambda/2 limit permitting constant probe sample spacing over a range of frequencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
A fiber optic sensor for noncontact measurement of shaft speed, torque, and power
NASA Technical Reports Server (NTRS)
Madzsar, George C.
1990-01-01
A fiber optic sensor which enables noncontact measurement of the speed, torque and power of a rotating shaft was fabricated and tested. The sensor provides a direct measurement of shaft rotational speed and shaft angular twist, from which torque and power can be determined. Angles of twist between 0.005 and 10 degrees were measured. Sensor resolution is limited by the sampling rate of the analog to digital converter, while accuracy is dependent on the spot size of the focused beam on the shaft. Increasing the sampling rate improves measurement resolution, and decreasing the focused spot size increases accuracy. Digital processing allows for enhancement of an electronically or optically degraded signal.
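The conversion from the measured quantities to torque and power follows elementary shaft mechanics; a sketch, with the shear modulus and shaft geometry supplied by the user as assumptions rather than returned by the sensor:

```python
import math

def shaft_torque_and_power(theta_deg, rpm, shear_modulus, diameter, gauge_length):
    """Torque and power from measured twist angle and rotational speed.

    theta_deg: twist angle over the gauge length (degrees); rpm: shaft speed.
    Uses T = G*J*theta/L for a solid circular shaft and P = T*omega.
    """
    theta = math.radians(theta_deg)
    J = math.pi * diameter**4 / 32.0          # polar moment of inertia (m^4)
    torque = shear_modulus * J * theta / gauge_length
    omega = 2.0 * math.pi * rpm / 60.0        # angular speed (rad/s)
    return torque, torque * omega
```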
An Experimental Study of Upward Burning Over Long Solid Fuels: Facility Development and Comparison
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Yuan, Zeng-Guang
2011-01-01
As NASA's mission evolves, new spacecraft and habitat environments necessitate expanded study of materials flammability. Most of the upward burning tests to date, including the NASA standard material screening method NASA-STD-6001, have been conducted in small chambers where the flame often terminates before a steady state flame is established. In real environments, the same limitations may not be present. The use of long fuel samples would allow the flames to proceed in an unhindered manner. In order to explore sample size and chamber size effects, two large chambers were developed at NASA GRC under the Flame Prevention, Detection and Suppression (FPDS) project. The first was an existing vacuum facility, VF-13, located at NASA John Glenn Research Center. This 6350 liter chamber could accommodate fuel sample lengths up to 2 m. However, operational costs and restricted accessibility limited the test program, so a second laboratory scale facility was developed in parallel. By stacking two additional chambers on top of an existing combustion chamber facility, this 81 liter stacked-chamber facility could accommodate a 1.5 m sample length. The larger volume, more ideal environment of VF-13 was used to obtain baseline data for comparison with the stacked-chamber facility. In this way, the stacked-chamber facility was intended for long term testing, with VF-13 as the proving ground. Four different solid fuels (adding machine paper, poster paper, PMMA plates, and Nomex fabric) were tested with fuel sample lengths up to 2 m. For thin samples (papers) with widths up to 5 cm, the flame reached a steady state length, which demonstrates that flame length may be stabilized even when the edge effects are reduced. For the thick PMMA plates, flames reached lengths up to 70 cm but were highly energetic and restricted by oxygen depletion. Tests with the Nomex fabric confirmed that the cyclic flame phenomena observed in small facility tests continued over the longer samples. New features were also observed at the higher oxygen/pressure conditions available in the large chamber. Comparison of flame behavior between the two facilities under identical conditions revealed disparities, both qualitative and quantitative. This suggests that, in certain ranges of controlling parameters, chamber size and shape could be among the parameters that affect material flammability. If this proves to be true, it may limit the applicability of existing flammability data.
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
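A stripped-down version of such a simulation (fixed-effect pooling of equal-sized trials with binary outcomes and no heterogeneity) might look like the following; it is illustrative only and far simpler than the scenarios explored in the paper:

```python
import numpy as np

def prob_overestimate(n_trials, n_per_arm, control_risk=0.1, true_rrr=0.0,
                      rrr_cutoff=0.30, n_sim=5000, seed=0):
    """Probability that a fixed-effect meta-analysis overestimates the RRR.

    Simulates n_trials equal-sized two-arm trials, pools log risk ratios
    with inverse-variance weights (0.5 continuity correction) and reports
    how often the pooled relative risk reduction exceeds rrr_cutoff.
    """
    rng = np.random.default_rng(seed)
    exp_risk = control_risk * (1.0 - true_rrr)
    n = n_per_arm
    over = 0
    for _ in range(n_sim):
        ec = rng.binomial(n, control_risk, n_trials) + 0.5   # control events
        ee = rng.binomial(n, exp_risk, n_trials) + 0.5       # experimental events
        log_rr = np.log((ee / (n + 0.5)) / (ec / (n + 0.5)))
        var = 1.0 / ee - 1.0 / (n + 0.5) + 1.0 / ec - 1.0 / (n + 0.5)
        pooled = np.sum(log_rr / var) / np.sum(1.0 / var)
        over += (1.0 - np.exp(pooled)) > rrr_cutoff
    return over / n_sim

# chance of seeing RRR > 30% when the true RRR is 0, with 5 trials of 50 per arm
print(prob_overestimate(n_trials=5, n_per_arm=50))
```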
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43, minimum, 3, maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
NASA Astrophysics Data System (ADS)
Ozen, Murat; Guler, Murat
2014-02-01
Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned to obtain the cross sectional images using a desktop flatbed scanner. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections. A procedure is also suggested to determine a suitable particle shape parameter to be used in the analysis of aggregate size distribution within each cross section. Results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may be different even for cross sections of an identical concrete sample. In addition, the maximum Feret diameter is the most suitable shape parameter to estimate the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners to evaluate concrete in terms of the degree of segregation and the bounds of the mixture's gradation achieved during manufacturing.
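For completeness, the maximum Feret diameter of a single particle can be computed directly from its traced boundary; a brute-force sketch, standing in for whatever image-analysis toolbox is actually used:

```python
import numpy as np

def max_feret_diameter(boundary_xy):
    """Maximum Feret diameter (largest caliper distance) of a particle.

    boundary_xy: (n, 2) array of boundary pixel coordinates. Brute-force
    pairwise search; adequate for single cross-section outlines.
    """
    pts = np.asarray(boundary_xy, dtype=float)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    return float(np.sqrt(d2.max()))
```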
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates of the transmission rate in flocks and the characteristics of an egg immunological test, we simulated outbreaks under various sampling schemes and under the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected and by the number of contaminated table eggs brought onto the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
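The need for a declaration threshold follows directly from the binomial behaviour of an imperfect egg test: with per-egg specificity below 100%, the probability that an uninfected flock yields at least k positive eggs out of n grows quickly with n. A small illustrative calculation (the specificity and thresholds are assumptions, not the paper's estimates):

```python
# Flock-level false-positive probability for a cumulative sample of n eggs and
# a declaration threshold of k positive eggs: P(Binomial(n, 1 - Sp_egg) >= k).
from scipy.stats import binom

sp_egg = 0.96                      # assumed egg-level test specificity
for n in (30, 60, 90):             # cumulative sample size
    for k in (1, 3, 5):            # eggs required to declare the flock infected
        p_false_pos = binom.sf(k - 1, n, 1 - sp_egg)   # P(X >= k)
        print(f"n={n:3d}, k={k}: flock false-positive prob = {p_false_pos:.3f}")
```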
Instrumental neutron activation analysis for studying size-fractionated aerosols
NASA Astrophysics Data System (ADS)
Salma, Imre; Zemplén-Papp, Éva
1999-10-01
Instrumental neutron activation analysis (INAA) was utilized for studying aerosol samples collected into a coarse and a fine size fraction on Nuclepore polycarbonate membrane filters. As a result of the panoramic INAA, 49 elements were determined in about 200-400 μg of particulate matter by two irradiations and four γ-spectrometric measurements. The analytical calculations were performed by the absolute (k0) standardization method. The calibration procedures, application protocol and data evaluation process are described and discussed. They now make it possible to analyse a considerable number of samples while assuring the quality of the results. To demonstrate the system's analytical capabilities, the concentration ranges, median or mean atmospheric concentrations and detection limits are presented for an extensive series of aerosol samples collected within the framework of an urban air pollution study in Budapest. For most elements, the precision of the analysis was found to be better than the uncertainty arising from the sampling techniques and sample variability.
Minimal-assumption inference from population-genomic data
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Hallatschek, Oskar
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing
2018-05-09
We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
Effect of Microstructural Interfaces on the Mechanical Response of Crystalline Metallic Materials
NASA Astrophysics Data System (ADS)
Aitken, Zachary H.
Advances in nano-scale mechanical testing have brought about progress in the understanding of physical phenomena in materials and a measure of control in the fabrication of novel materials. In contrast to bulk materials that display size-invariant mechanical properties, sub-micron metallic samples show a critical dependence on sample size. The strength of nano-scale single-crystalline metals is well described by a power-law function, σ ∝ D^(-n), where D is a critical sample size and n is an experimentally fitted positive exponent. This relationship is attributed to source-driven plasticity and demonstrates a strengthening as the decreasing sample size begins to limit the size and number of dislocation sources. A full understanding of this size dependence is complicated by the presence of microstructural features such as interfaces that can compete with the dominant dislocation-based deformation mechanisms. In this thesis, the effects of microstructural features such as grain boundaries and anisotropic crystallinity on nano-scale metals are investigated through uniaxial compression testing. We find that nano-sized Cu covered by a hard coating displays a Bauschinger effect, and the emergence of this behavior can be explained through a simple dislocation-based analytic model. Al nano-pillars containing a single vertically oriented coincident site lattice grain boundary are found to show deformation similar to that of single-crystalline nano-pillars, with slip traces passing through the grain boundary. With increasing tilt angle of the grain boundary from the pillar axis, we observe a transition from dislocation-dominated deformation to grain boundary sliding. Crystallites are observed to shear along the grain boundary, and molecular dynamics simulations reveal a mechanism of atomic migration that accommodates boundary sliding. We conclude with an analysis of the effects of inherent crystal anisotropy and alloying on the mechanical behavior of the Mg alloy AZ31. Through comparison to pure Mg, we show that the size effect dominates the strength of samples below 10 μm and that differences in the size effect between hexagonal slip systems are due to the inherent crystal anisotropy, suggesting that the fundamental mechanism of the size effect in these slip systems is the same.
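The quoted power law is usually extracted by linear regression in log-log space; a minimal sketch with made-up pillar data:

```python
# Fit the size-effect power law sigma = A * D**(-n) by linear regression in
# log-log space. The diameters and strengths below are invented for illustration.
import numpy as np

D = np.array([0.2, 0.4, 0.8, 1.6, 3.2])        # sample size, microns (hypothetical)
sigma = np.array([900, 610, 415, 280, 190])    # strength, MPa (hypothetical)

slope, intercept = np.polyfit(np.log(D), np.log(sigma), 1)
n_exp = -slope                 # power-law exponent
A = np.exp(intercept)          # prefactor
print(f"sigma ~ {A:.0f} * D^(-{n_exp:.2f})")
```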
Brain size growth in wild and captive chimpanzees (Pan troglodytes).
Cofran, Zachary
2018-05-24
Despite many studies of chimpanzee brain size growth, intraspecific variation is under-explored. Brain size data from chimpanzees of the Taï Forest and the Yerkes Primate Research Center enable a unique glimpse into brain growth variation, as age at death is known for individuals, allowing cross-sectional growth curves to be estimated. Because Taï chimpanzees are from the wild but Yerkes apes are captive, potential environmental effects on neural development can also be explored. Previous research has revealed differences in growth and health between wild and captive primates, but such habitat effects have yet to be investigated for brain growth. Here, I use an iterative curve-fitting procedure to estimate brain growth and regression parameters for each population, statistically comparing growth models using bootstrapped confidence intervals. Yerkes and Taï brain sizes overlap at all ages, although the sole Taï newborn is at the low end of captive neonatal variation. Growth rate and duration are statistically indistinguishable between the two populations. Resampling the Yerkes sample to match the Taï sample size and age group composition shows that ontogenetic variation in the two groups is remarkably similar despite the latter's limited size. Best-fit growth curves for each sample indicate cessation of brain size growth at around 2 years, earlier than has previously been reported. The overall similarity between wild and captive chimpanzees points to the canalization of brain growth in this species. © 2018 Wiley Periodicals, Inc.
Nanoparticle size detection limits by single particle ICP-MS for 40 elements.
Lee, Sungyun; Bi, Xiangyu; Reed, Robert B; Ranville, James F; Herckes, Pierre; Westerhoff, Paul
2014-09-02
The quantification and characterization of natural, engineered, and incidental nano- to micro-size particles are beneficial to assessing a nanomaterial's performance in manufacturing, their fate and transport in the environment, and their potential risk to human health. Single particle inductively coupled plasma mass spectrometry (spICP-MS) can sensitively quantify the amount and size distribution of metallic nanoparticles suspended in aqueous matrices. To accurately obtain the nanoparticle size distribution, it is critical to have knowledge of the size detection limit (denoted as Dmin) using spICP-MS for a wide range of elements (other than a few available assessed ones) that have been or will be synthesized into engineered nanoparticles. Herein is described a method to estimate the size detection limit using spICP-MS and then apply it to nanoparticles composed of 40 different elements. The calculated Dmin values correspond well for a few of the elements with their detectable sizes that are available in the literature. Assuming each nanoparticle sample is composed of one element, Dmin values vary substantially among the 40 elements: Ta, U, Ir, Rh, Th, Ce, and Hf showed the lowest Dmin values, ≤10 nm; Bi, W, In, Pb, Pt, Ag, Au, Tl, Pd, Y, Ru, Cd, and Sb had Dmin in the range of 11-20 nm; Dmin values of Co, Sr, Sn, Zr, Ba, Te, Mo, Ni, V, Cu, Cr, Mg, Zn, Fe, Al, Li, and Ti were located at 21-80 nm; and Se, Ca, and Si showed high Dmin values, greater than 200 nm. A range of parameters that influence the Dmin, such as instrument sensitivity, nanoparticle density, and background noise, is demonstrated. It is observed that, when the background noise is low, the instrument sensitivity and nanoparticle density dominate the Dmin significantly. Approaches for reducing the Dmin, e.g., collision cell technology (CCT) and analyte isotope selection, are also discussed. To validate the Dmin estimation approach, size distributions for three engineered nanoparticle samples were obtained using spICP-MS. The use of this methodology confirms that the observed minimum detectable sizes are consistent with the calculated Dmin values. Overall, this work identifies the elements and nanoparticles to which current spICP-MS approaches can be applied, in order to enable quantification of very small nanoparticles at low concentrations in aqueous media.
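The core of the Dmin estimate is converting a minimum detectable signal into a particle mass and then into an equivalent spherical diameter. A simplified sketch with an assumed sensitivity and background, using gold as the example element (the study's own calibration values are not reproduced here):

```python
# Simplified spICP-MS size detection limit for a spherical, single-element
# particle: m_min = (mean background + 3 SD) / sensitivity, then
# D_min = (6 * m_min / (pi * rho))**(1/3). All numbers are illustrative.
import numpy as np

mean_bg, sd_bg = 2.0, 1.5     # background counts per dwell time (assumed)
sensitivity = 200.0           # counts per femtogram of analyte (assumed)
rho = 19.3e-6                 # density in fg/nm^3 (gold, 19.3 g/cm^3)

m_min = (mean_bg + 3 * sd_bg) / sensitivity        # fg per particle event
d_min = (6 * m_min / (np.pi * rho)) ** (1 / 3)     # nm
print(f"estimated D_min ~ {d_min:.1f} nm")         # ~15 nm with these assumptions
```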
Olsen, Kim Rose; Sørensen, Torben Højmark; Gyrd-Hansen, Dorte
2010-04-19
Due to shortage of general practitioners, it may be necessary to improve productivity. We assess the association between productivity, list size and patient- and practice characteristics. A regression approach is used to perform productivity analysis based on national register data and survey data for 1,758 practices. Practices are divided into four groups according to list size and productivity. Statistical tests are used to assess differences in patient- and practice characteristics. There is a significant, positive correlation between list size and productivity (p < 0.01). Nevertheless, 19% of the practices have a list size below and a productivity above mean sample values. These practices have relatively demanding patients (older, low socioeconomic status, high use of pharmaceuticals) and they are frequently located in areas with limited access to specialized care and have a low use of assisting personnel. 13% of the practices have a list size above and a productivity below mean sample values. These practices have relatively less demanding patients, are located in areas with good access to specialized care, and have a high use of assisting personnel. Lists and practice characteristics have substantial influence on both productivity and list size. Adjusting list size to external factors seems to be an effective tool to increase productivity in general practice.
The U.S. Geological Survey coal quality (COALQUAL) database version 3.0
Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.
2015-12-21
Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
Khan, Bilal; Lee, Hsuan-Wei; Fellows, Ian; Dombrowski, Kirk
2018-01-01
Size estimation is particularly important for populations whose members experience disproportionate health issues or pose elevated health risks to the ambient social structures in which they are embedded. Efforts to derive size estimates are often frustrated when the population is hidden or hard-to-reach in ways that preclude conventional survey strategies, as is the case when social stigma is associated with group membership or when group members are involved in illegal activities. This paper extends prior research on the problem of network population size estimation, building on established survey/sampling methodologies commonly used with hard-to-reach groups. Three novel one-step, network-based population size estimators are presented, for use in the context of uniform random sampling, respondent-driven sampling, and when networks exhibit significant clustering effects. We give provably sufficient conditions for the consistency of these estimators in large configuration networks. Simulation experiments across a wide range of synthetic network topologies validate the performance of the estimators, which also perform well on a real-world location-based social networking data set with significant clustering. Finally, the proposed schemes are extended to allow them to be used in settings where participant anonymity is required. Systematic experiments show favorable tradeoffs between anonymity guarantees and estimator performance. Taken together, we demonstrate that reasonable population size estimates are derived from anonymous respondent driven samples of 250-750 individuals, within ambient populations of 5,000-40,000. The method thus represents a novel and cost-effective means for health planners and those agencies concerned with health and disease surveillance to estimate the size of hidden populations. We discuss limitations and future work in the concluding section.
ERIC Educational Resources Information Center
Baenen, Nancy
2011-01-01
The longitudinal study of the 2005-06 preschool in Wake County Public School System (WCPSS) found short-term gains during the preschool year, but limited impact by kindergarten and no average impact by the end of 3rd grade on achievement, retention rates, special education placements, or attendance. Small sample sizes limit conclusions that can be…
An elutriation apparatus for assessing settleability of combined sewer overflows (CSOs).
Marsalek, J; Krishnappan, B G; Exall, K; Rochfort, Q; Stephens, R P
2006-01-01
An elutriation apparatus was proposed for testing the settleability of combined sewer overflows (CSOs) and applied to 12 CSO samples. In this apparatus, solids settling is measured under dynamic conditions created by flow through a series of settling chambers of varying diameters and upward flow velocities. Such a procedure reproduces turbulent settling in CSO tanks better than conventional settling columns do, and facilitates testing coagulant additions under dynamic conditions. Among its limitations are the relatively large size of the apparatus and samples (60 L) and the inadequate handling of floatables. Settleability results obtained with the elutriation apparatus and a conventional settling column indicate large inter-event variation in CSO settleability. Under such circumstances, settling tanks need to be designed for "average" conditions and, within some limits, the differences in test results produced by various settleability testing apparatuses and procedures may be acceptable. Further development of the elutriation apparatus is under way, focusing on reducing flow velocities in the tubing connecting settling chambers and reducing the number of settling chambers employed. The first measure would reduce the risk of floc breakage in the connecting tubing and the second would reduce the required sample size.
Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy
2011-02-02
In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Current Controlled Trials ISRCTN61649292.
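For the combination-versus-standard arm of such a trial, the required size can be approximated with the standard log-rank (Schoenfeld) formula; the sketch below uses an assumed overall death probability of about one-third and is a generic illustration, not necessarily the trial's own calculation:

```python
# Schoenfeld approximation for a 1:1 randomized survival trial:
# required deaths d = 4 * (z_{1-a/2} + z_{1-b})^2 / (ln HR)^2,
# patients = d / P(death). Death probability is an assumption for illustration.
from math import log
from scipy.stats import norm

hr = 0.70                     # assumed hazard ratio (30% reduction in hazard of death)
alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

deaths = 4 * z**2 / log(hr)**2
p_death = 1 / 3               # assumed overall probability of death
patients = deaths / p_death
print(f"required deaths ~ {deaths:.0f}, patients ~ {patients:.0f}")
# ~247 deaths and ~740 patients, in line with the 750 quoted for 80% power.
```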
Paquet, Victor; Joseph, Caroline; D'Souza, Clive
2012-01-01
Anthropometric studies typically require a large number of individuals that are selected in a manner so that demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.
Utilizing soil polypedons to improve model performance for digital soil mapping
USDA-ARS?s Scientific Manuscript database
Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...
Solid phase extraction of 2,4-D from human urine.
Thompson, T S; Treble, R G
1996-10-01
A method for determining urinary concentrations of 2,4-D in samples collected from environmentally (non-occupationally) exposed individuals was developed. The 2,4-D was extracted from fortified human urine samples using octadecylsilane solid phase extraction cartridges. The average percent recoveries for urine samples spiked at 2 and 20 ng/mL were 100% and 93%, respectively. The method detection limit was estimated to be 0.75 ng of 2,4-D per mL of urine based on a 10 mL sample size. The potential use of 2,4-dichlorophenylacetic acid as a surrogate standard was also investigated.
NASA Astrophysics Data System (ADS)
Andrews, Stephen K.; Kelvin, Lee S.; Driver, Simon P.; Robotham, Aaron S. G.
2014-01-01
The 2MASS, UKIDSS-LAS, and VISTA VIKING surveys have all now observed the GAMA 9hr region in the Ks band. Here we compare the detection rates, photometry, basic size measurements, and single-component GALFIT structural measurements for a sample of 37 591 galaxies. We explore the sensitivity limits within which the data agree for a variety of issues, including detection, star-galaxy separation, photometric measurements, size and ellipticity measurements, and Sérsic measurements. We find that 2MASS fails to detect at least 20% of the galaxy population within all magnitude bins; however, for those galaxies that are detected, we find photometry is robust (±0.2 mag) to 14.7 AB mag and star-galaxy separation to 14.8 AB mag. For UKIDSS-LAS we find incompleteness starts to enter at a flux limit of 18.9 AB mag, star-galaxy separation is robust to 16.3 AB mag, and structural measurements are robust to 17.7 AB mag. VISTA VIKING data are complete to approximately 20.0 AB mag and structural measurements appear robust to 18.8 AB mag.
An Analysis of Several Dimensions of Patient Safety in Ambulatory-Care Facilities
2008-04-09
States were surveyed for a total sample size (N) of 213 and an overall response rate of 65%. Specialty areas, ambulatory-surgical staff, administrative...questions regarding safety. This research was limited in that it sampled only Air Force primary care staff and should certainly be replicated...
An Investigation of Community Attitudes Toward Blast Noise: Complaint Survey Protocol
2010-10-11
increase complaints (Hume et al., 2003a). If an individual is already stressed by other non-noise factors, the source noise may be more annoying than...protocol (lab staffing, sampling and locating records, callback schedules) focused on completing the data collection for any given noise event within...relationship (e.g., increased feelings of importance of the installation tend to be associated with decreased annoyance). Due to the limited sample size only
Rapid Method of Determining Factors Limiting Bacterial Growth in Soil
Aldén, L.; Demoling, F.; Bååth, E.
2001-01-01
A technique to determine which nutrients limit bacterial growth in soil was developed. The method was based on measuring the thymidine incorporation rate of bacteria after the addition of C, N, and P in different combinations to soil samples. First, the thymidine incorporation method was tested in two different soils: an agricultural soil and a forest humus soil. Carbon (as glucose) was found to be the limiting substance for bacterial growth in both of these soils. The effect of adding different amounts of nutrients was studied, and tests were performed to determine whether the additions affected the soil pH and subsequent bacterial activity. The incubation time required to detect bacterial growth after adding substrate to the soil was also evaluated. Second, the method was used in experiments in which three different size fractions of straw (1 to 2, 0.25 to 1, and <0.25 mm) were mixed into the agricultural soil in order to induce N limitation for bacterial growth. When the straw fraction was small enough (<0.25 mm), N became the limiting nutrient for bacterial growth after about 3 weeks. After the addition of the larger straw fractions (1 to 2 and 0.25 to 1 mm), the soil bacteria were C limited throughout the incubation period (10 weeks), although an increase in the thymidine incorporation rate after the addition of C and N together compared with adding them separately was seen in the sample containing the size fraction from 0.25 to 1 mm. Third, soils from high-pH, limestone-rich areas were examined. P limitation was observed in one of these soils, while tendencies toward P limitation were seen in some of the other soils. PMID:11282640
Single-image diffusion coefficient measurements of proteins in free solution.
Zareh, Shannon Kian; DeSantis, Michael C; Kessler, Jonathan M; Li, Je-Luen; Wang, Y M
2012-04-04
Diffusion coefficient measurements are important for many biological and material investigations, such as studies of particle dynamics and kinetics, and size determinations. Among current measurement methods, single particle tracking (SPT) offers the unique ability to simultaneously obtain location and diffusion information about a molecule while using only femtomoles of sample. However, the temporal resolution of SPT is limited to seconds for single-color-labeled samples. By directly imaging three-dimensional diffusing fluorescent proteins and studying the widths of their intensity profiles, we were able to determine the proteins' diffusion coefficients using single protein images of submillisecond exposure times. This simple method improves the temporal resolution of diffusion coefficient measurements to submilliseconds, and can be readily applied to a range of particle sizes in SPT investigations and applications in which diffusion coefficient measurements are needed, such as reaction kinetics and particle size determinations. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Levecke, Bruno; Speybroeck, Niko; Dobson, Robert J.; Vercruysse, Jozef; Charlier, Johannes
2011-01-01
Background: The fecal egg count reduction test (FECRT) is recommended to monitor drug efficacy against soil-transmitted helminths (STHs) in public health. However, the impact of factors inherent to study design (sample size and detection limit of the fecal egg count (FEC) method) and host-parasite interactions (mean baseline FEC and aggregation of FEC across host population) on the reliability of FECRT is poorly understood. Methodology/Principal Findings: A simulation study was performed in which FECRT was assessed under varying conditions of the aforementioned factors. Classification trees were built to explore critical values for these factors required to obtain conclusive FECRT results. The outcome of this analysis was subsequently validated on five efficacy trials across Africa, Asia, and Latin America. Unsatisfactory (<85.0%) sensitivity and specificity results to detect reduced efficacy were found if sample sizes were small (<10) or if sample sizes were moderate (10–49) combined with highly aggregated FEC (k<0.25). FECRT remained inconclusive under any evaluated condition for drug efficacies ranging from 87.5% to 92.5% for a reduced-efficacy-threshold of 90% and from 92.5% to 97.5% for a threshold of 95%. The most discriminatory study design required 200 subjects independent of STH status (including subjects who are not excreting eggs). For this sample size, the detection limit of the FEC method and the level of aggregation of the FEC did not affect the interpretation of the FECRT. Only for a threshold of 90%, mean baseline FEC <150 eggs per gram of stool led to a reduced discriminatory power. Conclusions/Significance: This study confirms that the interpretation of FECRT is affected by a complex interplay of factors inherent to both study design and host-parasite interactions. The results also highlight that revision of the current World Health Organization guidelines to monitor drug efficacy is indicated. We, therefore, propose novel guidelines to support future monitoring programs. PMID:22180801
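The FECRT statistic itself is simple: the percentage reduction in the group mean egg count after treatment, often reported with a bootstrap confidence interval. A minimal sketch on simulated, aggregated egg counts (the negative-binomial parameters and the 24 eggs-per-gram detection limit are illustrative assumptions, not the paper's simulation settings):

```python
# Group-based FECRT = 100 * (1 - mean post-treatment FEC / mean baseline FEC),
# with a percentile bootstrap interval over paired samples.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.negative_binomial(n=0.5, p=0.5 / (0.5 + 300), size=50)   # aggregated baseline FEC
post = rng.binomial(pre, 0.08)                                     # ~92% egg reduction
pre, post = 24 * (pre // 24), 24 * (post // 24)                    # assumed detection limit of 24 EPG

def fecrt(pre, post):
    return 100.0 * (1.0 - post.mean() / pre.mean())

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(pre), len(pre))
    boot.append(fecrt(pre[idx], post[idx]))
print(f"FECRT = {fecrt(pre, post):.1f}%, "
      f"95% CI = ({np.percentile(boot, 2.5):.1f}, {np.percentile(boot, 97.5):.1f})")
```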
Strategies for high-throughput focused-beam ptychography
Jacobsen, Chris; Deng, Junjing; Nashed, Youssef
2017-08-08
X-ray ptychography is being utilized for a wide range of imaging experiments with a resolution beyond the limit of the X-ray optics used. Introducing a parameter for the ptychographic resolution gain G_p (the ratio of the beam size over the achieved pixel size in the reconstructed image), strategies for data sampling and for increasing imaging throughput when the specimen is at the focus of an X-ray beam are considered. As a result, the tradeoffs between large and small illumination spots are examined.
Astley, H C; Abbott, E M; Azizi, E; Marsh, R L; Roberts, T J
2013-11-01
Maximal performance is an essential metric for understanding many aspects of an organism's biology, but it can be difficult to determine because a measured maximum may reflect only a peak level of effort, not a physiological limit. We used a unique opportunity provided by a frog jumping contest to evaluate the validity of existing laboratory estimates of maximum jumping performance in bullfrogs (Rana catesbeiana). We recorded video of 3124 bullfrog jumps over the course of the 4-day contest at the Calaveras County Jumping Frog Jubilee, and determined jump distance from these images and a calibration of the jump arena. Frogs were divided into two groups: 'rental' frogs collected by fair organizers and jumped by the general public, and frogs collected and jumped by experienced, 'professional' teams. A total of 58% of recorded jumps surpassed the maximum jump distance in the literature (1.295 m), and the longest jump was 2.2 m. Compared with rental frogs, professionally jumped frogs jumped farther, and the distribution of jump distances for this group was skewed towards long jumps. Calculated muscular work, historical records and the skewed distribution of jump distances all suggest that the longest jumps represent the true performance limit for this species. Using resampling, we estimated the probability of observing a given jump distance for various sample sizes, showing that large sample sizes are required to detect rare maximal jumps. These results show the importance of sample size, animal motivation and physiological conditions for accurate maximal performance estimates.
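The resampling argument can be reproduced in a few lines: draw repeated samples of size n from the observed jump distances and record how often at least one draw exceeds a near-maximal distance. The sketch below uses simulated placeholder distances rather than the Jubilee data:

```python
# Probability of observing at least one long jump as a function of sample size,
# estimated by resampling with replacement from the recorded distribution.
import numpy as np

rng = np.random.default_rng(0)
jumps = rng.gamma(shape=9.0, scale=0.13, size=3124)   # placeholder distances (m)
threshold = 1.9                                       # a rare, near-maximal jump (m)

for n in (10, 50, 200, 1000, 3000):
    draws = rng.choice(jumps, size=(2000, n), replace=True)
    p = (draws.max(axis=1) >= threshold).mean()
    print(f"sample size {n:4d}: P(observe a jump >= {threshold} m) = {p:.3f}")
```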
Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka
2017-09-29
Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silica of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized condition, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in the whole blood were 76-108% with relative standard deviation of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Experience of elder abuse among older Korean immigrants.
Chang, Miya
2016-01-01
Studies on the scope and nature of Asian American elder abuse conducted with older immigrants are extremely limited. The overall purpose of this study was to examine the extent and type of elder abuse among older Korean immigrants, and to investigate critical predictors of elder abuse in this population. The sample consisted of 200 older Korean immigrants aged 60 to 90 years who resided in Los Angeles County in 2008. One of the key findings indicated that 58.3% of respondents experienced one or more types of elder abuse. Logistic regression indicated that the victims' health status and educational level were statistically significant predictors of the likelihood of experiencing abuse. The present study, although limited in sample size, measures, sampling methods, and population representation, has contributed to this important area of knowledge. It is recommended that future studies conduct research on elder abuse with more representative national samples that can measure the extent of abuse and neglect more accurately.
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
Influence of androgen receptor repeat polymorphisms on personality traits in men
Westberg, Lars; Henningsson, Susanne; Landén, Mikael; Annerbrink, Kristina; Melke, Jonas; Nilsson, Staffan; Rosmond, Roland; Holm, Göran; Anckarsäter, Henrik; Eriksson, Elias
2009-01-01
Background: Testosterone has been attributed importance for various aspects of behaviour. The aim of our study was to investigate the potential influence of 2 functional polymorphisms in the amino terminal of the androgen receptor on personality traits in men. Methods: We assessed and genotyped 141 men born in 1944 recruited from the general population. We used 2 different instruments: the Karolinska Scales of Personality and the Temperament and Character Inventory. For replication, we similarly assessed 63 men recruited from a forensic psychiatry study group. Results: In the population-recruited sample, the lengths of the androgen receptor repeats were associated with neuroticism, extraversion and self-transcendence. The association with extraversion was replicated in the independent sample. Limitations: Our 2 samples differed in size; sample 1 was of moderate size and sample 2 was small. In addition, the homogeneity of sample 1 probably enhanced our ability to detect significant associations between genotype and phenotype. Conclusion: Our results suggest that the repeat polymorphisms in the androgen receptor gene may influence personality traits in men. PMID:19448851
[Comparative quality measurements part 3: funnel plots].
Kottner, Jan; Lahmann, Nils
2014-02-01
Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk adjusted. Random error must also be taken adequately into account. Rankings that do not take precision into account lead to flawed interpretations and encourage "gaming". Applying confidence intervals is one way to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measures are plotted against their sample size. Warning and control limits that are 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, so the control limits form a funnel. Data points within the control limits are considered to show common cause variation; data points outside them indicate special cause variation, shifting the focus away from spurious rankings. Funnel plots offer data-based information on how to evaluate institutional performance within quality management contexts.
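A minimal funnel-plot calculation for a proportion-type indicator, with invented unit-level data, looks like this (the centre line is the pooled proportion; the 2 and 3 standard deviation limits narrow as group size grows):

```python
# Funnel-plot centre line and warning/control limits for an institutional
# proportion-type quality indicator. Unit data are invented for illustration.
import numpy as np

events = np.array([4, 11, 23, 9, 60, 35])      # e.g. pressure ulcers per unit (hypothetical)
n = np.array([50, 120, 260, 90, 700, 400])     # patients assessed per unit (hypothetical)

p_bar = events.sum() / n.sum()                  # centre line (pooled proportion)
se = np.sqrt(p_bar * (1 - p_bar) / n)           # binomial standard error per unit
for z, name in ((2, "warning"), (3, "control")):
    lower, upper = p_bar - z * se, p_bar + z * se
    print(name, np.round(lower, 3), np.round(upper, 3))

outside = np.abs(events / n - p_bar) > 3 * se   # special-cause signals
print("units outside 3 SD limits:", np.where(outside)[0])
```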
NASA Astrophysics Data System (ADS)
Raefat, Saad; Garoum, Mohammed; Laaroussi, Najma; Thiam, Macodou; Amarray, Khaoula
2017-07-01
In this work, an experimental investigation of the apparent thermal diffusivity and adiabatic limit temperature of expanded granular perlite mixes was carried out using the flash technique. Perlite granulates were sieved to produce three characteristic grain sizes. The consolidated samples were manufactured by mixing controlled proportions of plaster and water. The effect of particle size on the diffusivity was examined. The inverse estimation of the diffusivity, the adiabatic limit temperature at the rear face, and the heat loss coefficients was performed using several numerical global minimization procedures. The function to be minimized is the quadratic distance between the experimental temperature rise at the rear face and the analytical model derived from one-dimensional heat conduction. It is shown that, for all granulometries tested, the estimated parameters lead to good agreement between the mathematical model and the experimental data.
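The inverse-estimation step can be illustrated under the simplest possible assumptions: an adiabatic, one-dimensional Parker model fitted to the rear-face thermogram by least squares. Unlike the study, the sketch below ignores heat losses, and the "measurement" is synthetic:

```python
# Fit the adiabatic Parker rear-face solution
#   T(t) = T_lim * [1 + 2 * sum_{k>=1} (-1)^k exp(-k^2 pi^2 alpha t / L^2)]
# to a synthetic thermogram by least squares (heat losses ignored here).
import numpy as np
from scipy.optimize import curve_fit

L = 0.01                                   # sample thickness, m (assumed)

def parker(t, alpha, T_lim, n_terms=100):
    k = np.arange(1, n_terms + 1)[:, None]
    series = ((-1) ** k * np.exp(-k**2 * np.pi**2 * alpha * t / L**2)).sum(axis=0)
    return T_lim * (1 + 2 * series)

t = np.linspace(1.0, 200.0, 400)           # s
true_alpha, true_Tlim = 4e-7, 1.8          # m^2/s, K (illustrative)
T_meas = parker(t, true_alpha, true_Tlim) + np.random.default_rng(2).normal(0, 0.02, t.size)

(alpha_hat, Tlim_hat), _ = curve_fit(parker, t, T_meas, p0=(1e-7, 1.0),
                                     bounds=([1e-9, 0.0], [1e-5, 10.0]))
print(f"alpha ~ {alpha_hat:.2e} m^2/s, adiabatic limit temperature rise ~ {Tlim_hat:.2f} K")
```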
Anti-Depressants, Suicide, and Drug Regulation
ERIC Educational Resources Information Center
Ludwig, Jens; Marcotte, Dave E.
2005-01-01
Policymakers are increasingly concerned that a relatively new class of anti-depressant drugs, selective serotonin re-uptake inhibitors (SSRI), may increase the risk of suicide for at least some patients, particularly children. Prior randomized trials are not informative on this question because of small sample sizes and other limitations. Using…
Determining water sensitive card spread factors for real world tank mixes
USDA-ARS?s Scientific Manuscript database
The use of water sensitive cards provides a quick and easy method to sample the coverage and deposition from spray applications. Typically, this measure is limited to percent coverage as measures of droplet size, and thus deposition rate, are highly influenced by the stain diameter resulting from t...
ERIC Educational Resources Information Center
Begeny, John C.; Krouse, Hailey E.; Brown, Kristina G.; Mann, Courtney M.
2011-01-01
Teacher judgments about students' academic abilities are important for instructional decision making and potential special education entitlement decisions. However, the small number of studies evaluating teachers' judgments are limited methodologically (e.g., sample size, procedural sophistication) and have yet to answer important questions…
Going bananas in the radiation laboratory
NASA Astrophysics Data System (ADS)
Hoeling, Barbara; Reed, Douglas; Siegel, P. B.
1999-05-01
A simple setup for measuring the amount of potassium in foods is described. A 3-in. NaI detector is used to measure samples that are 3000 cm³ in size. With moderate shielding, the potassium content can be measured down to a detection limit of a few parts per 10 000.
Behavioral Phenotype in Adults with Prader-Willi Syndrome
ERIC Educational Resources Information Center
Sinnema, Margje; Einfeld, Stewart L.; Schrander-Stumpel, Constance T. R. M.; Maaskant, Marian A.; Boer, Harm; Curfs, Leopold M. G.
2011-01-01
Prader-Willi syndrome (PWS) is characterized by temper tantrums, impulsivity, mood fluctuations, difficulty with change in routine, skinpicking, stubbornness and aggression. Many studies on behavior in PWS are limited by sample size, age range, a lack of genetically confirmed diagnosis of PWS and inconsistent assessment of behavior. The aim of…
Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM
ERIC Educational Resources Information Center
Li, Xin; Beretvas, S. Natasha
2013-01-01
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Model calibration and validation for OFMSW and sewage sludge co-digestion reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it
2011-12-15
Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > Disintegration kinetic constant does not depend on the waste particle size. > Disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.
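The size-independence of the surface-based kinetic constant can be seen with a toy shrinking-sphere calculation: when the disintegration rate is proportional to surface area, the particle diameter shrinks at a constant rate set only by the kinetic constant and the particle density, so the constant is independent of the particle size distribution even though the total disintegration time still scales with the initial size. The parameter values below are illustrative, not calibrated values from the model:

```python
# Toy surface-based disintegration kinetics (dM/dt = -K_sbk * A, with A the
# particle surface area). For spheres this gives linear shrinkage of the
# diameter, dd/dt = -2 * K_sbk / rho, independent of the initial size.
import numpy as np

K_sbk = 0.4        # kg m^-2 d^-1 (assumed)
rho = 1000.0       # particle density, kg/m^3 (assumed)

def mass_fraction_remaining(d0_mm, t_days):
    """Remaining mass fraction of a sphere of initial diameter d0 after time t."""
    d0 = d0_mm * 1e-3
    d = np.maximum(d0 - 2 * K_sbk / rho * t_days, 0.0)   # linear diameter shrinkage
    return (d / d0) ** 3

t = np.linspace(0, 10, 6)
for d0_mm in (1.0, 5.0, 20.0):
    print(f"d0 = {d0_mm:4.1f} mm:", np.round(mass_fraction_remaining(d0_mm, t), 3))
```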
Ahrenstorff, Tyler D.; Diana, James S.; Fetzer, William W.; Jones, Thomas S.; Lawson, Zach J.; McInerny, Michael C.; Santucci, Victor J.; Vander Zanden, M. Jake
2018-01-01
Body size governs predator-prey interactions, which in turn structure populations, communities, and food webs. Understanding predator-prey size relationships is valuable from a theoretical perspective, in basic research, and for management applications. However, predator-prey size data are limited and costly to acquire. We quantified predator-prey total length and mass relationships for several freshwater piscivorous taxa: crappie (Pomoxis spp.), largemouth bass (Micropterus salmoides), muskellunge (Esox masquinongy), northern pike (Esox lucius), rock bass (Ambloplites rupestris), smallmouth bass (Micropterus dolomieu), and walleye (Sander vitreus). The range of prey total lengths increased with predator total length. The median and maximum ingested prey total length varied with predator taxon and length, but generally ranged from 10–20% and 32–46% of predator total length, respectively. Predators tended to consume larger fusiform prey than laterally compressed prey. With the exception of large muskellunge, predators most commonly consumed prey between 16 and 73 mm. A sensitivity analysis indicated estimates can be very accurate at sample sizes greater than 1,000 diet items and fairly accurate at sample sizes greater than 100. However, sample sizes less than 50 should be evaluated with caution. Furthermore, median log10 predator-prey body mass ratios ranged from 1.9–2.5, nearly 50% lower than values previously reported for freshwater fishes. Managers, researchers, and modelers could use our findings as a tool for numerous predator-prey evaluations from stocking size optimization to individual-based bioenergetics analyses identifying prey size structure. To this end, we have developed a web-based user interface to maximize the utility of our models that can be found at www.LakeEcologyLab.org/pred_prey. PMID:29543856
Development of a Multiple-Stage Differential Mobility Analyzer (MDMA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Da-Ren; Cheng, Mengdawn
2007-01-01
A new DMA column has been designed with the capability of simultaneously extracting monodisperse particles of different sizes in multiple stages. We call this design a multistage DMA, or MDMA. A prototype MDMA has been constructed and experimentally evaluated in this study. The new column enables the fast measurement of particles in a wide size range, while preserving the powerful particle classification function of a DMA. The prototype MDMA has three sampling stages, capable of classifying monodisperse particles of three different sizes simultaneously. The scanning voltage operation of a DMA can be applied to this new column. Each stage of the MDMA column covers a fraction of the entire particle size range to be measured. The covered size fractions of two adjacent stages of the MDMA are designed to overlap somewhat. This arrangement leads to a reduction of the scanning voltage range and thus the cycling time of the measurement. The modular sampling stage design of the MDMA allows the flexible configuration of desired particle classification lengths and a variable number of stages in the MDMA. The design of our MDMA also permits operation at high sheath flow, enabling high-resolution particle size measurement and/or reduction of the lower sizing limit. Using the tandem DMA technique, the performance of the MDMA, i.e., sizing accuracy, resolution, and transmission efficiency, was evaluated at different ratios of aerosol and sheath flowrates. Two aerosol sampling schemes were investigated. One was to extract aerosol flows at an evenly partitioned flowrate at each stage, and the other was to extract aerosol at a rate the same as the polydisperse aerosol flowrate at each stage. We detail the prototype design of the MDMA and the evaluation results on the transfer functions of the MDMA at different particle sizes and operational conditions.
Waliszewski, Matthias W; Redlich, Ulf; Breul, Victor; Tautenhahn, Jörg
2017-04-30
The aim of this review is to present the available clinical and surrogate endpoints that may be used in future studies performed in patients with peripheral artery occlusive disease (PAOD). Importantly, we describe statistical limitations of the most commonly used endpoints and offer some guidance with respect to study design for a given sample size. The proposed endpoints may be used in studies using surgical or interventional revascularization and/or drug treatments. Considering recently published study endpoints and designs, the usefulness of these endpoints for reimbursement is evaluated. Based on these potential study endpoints and patient sample size estimates under different non-inferiority or test-for-difference hypotheses, a rating relative to their corresponding reimbursement values is attempted. As regards the benefit for patients and for payers, walking distance (WD) and the ankle brachial index (ABI) are the most feasible endpoints in relatively small study samples, given that other non-vascular impact factors can be controlled. Angiographic endpoints such as minimal lumen diameter (MLD) do not seem useful from a reimbursement standpoint despite their intuitiveness. Other surrogate endpoints, such as transcutaneous oxygen tension measurements, have yet to be established as useful endpoints in reasonably sized studies with patients with critical limb ischemia (CLI). From a reimbursement standpoint, WD and ABI are effective endpoints for a moderate study sample size, given that non-vascular confounding factors can be controlled.
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit of detection (LOD) of 5 × 10³ pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
Mi, Michael Y.; Betensky, Rebecca A.
2013-01-01
Background: Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Purpose: Because the basic SPCD design already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Methods: Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. Results: From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design, using sample size re-estimation, up to 25% power was recovered from underestimated sample size scenarios. Limitations: Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used, and may not generalize to all possible scenarios. Furthermore, drop-out of patients is not considered in this study. Conclusions: It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments. PMID:23283576
Forssén, Patrik; Samuelsson, Jörgen; Fornstedt, Torgny
2014-06-20
In this study we investigated how the maximum productivity for commonly used, realistic separation system with a competitive Langmuir adsorption isotherm is affected by changes in column length, packing particle size, mobile phase viscosity, maximum allowed column pressure, column efficiency, sample concentration/solubility, selectivity, monolayer saturation capacity and retention factor of the first eluting compound. The study was performed by generating 1000 random separation systems whose optimal injection volume was determined, i.e., the injection volume that gives the largest achievable productivity. The relative changes in largest achievable productivity when one of the parameters above changes was then studied for each system and the productivity changes for all systems were presented as distributions. We found that it is almost always beneficial to use shorter columns with high pressure drops over the column and that the selectivity should be greater than 2. However, the sample concentration and column efficiency have very limited effect on the maximum productivity. The effect of packing particle size depends on the flow rate limiting factor. If the pumps maximum flow rate is the limiting factor use smaller packing, but if the pressure of the system is the limiting factor use larger packing up to about 40μm. Copyright © 2014 Elsevier B.V. All rights reserved.
Self-objectification and disordered eating: A meta-analysis.
Schaefer, Lauren M; Thompson, J Kevin
2018-06-01
Objectification theory posits that self-objectification increases risk for disordered eating. The current study sought to examine the relationship between self-objectification and disordered eating using meta-analytic techniques. Data from 53 cross-sectional studies (73 effect sizes) revealed a significant moderate positive overall effect (r = .39), which was moderated by gender, ethnicity, sexual orientation, and measurement of self-objectification. Specifically, larger effect sizes were associated with female samples and the Objectified Body Consciousness Scale. Effect sizes were smaller among heterosexual men and African American samples. Age, body mass index, country of origin, measurement of disordered eating, sample type and publication type were not significant moderators. Overall, results from the first meta-analysis to examine the relationship between self-objectification and disordered eating provide support for one of the major tenets of objectification theory and suggest that self-objectification may be a meaningful target in eating disorder interventions, though further work is needed to establish temporal and causal relationships. Findings highlight current gaps in the literature (e.g., limited representation of males, and ethnic and sexual minorities) with implications for guiding future research. © 2018 Wiley Periodicals, Inc.
The Scherrer equation and the dynamical theory of X-ray diffraction.
Muniz, Francisco Tiago Leitão; Miranda, Marcus Aurélio Ribeiro; Morilla Dos Santos, Cássio; Sasaki, José Marcos
2016-05-01
The Scherrer equation is a widely used tool to determine the crystallite size of polycrystalline samples. However, it is not clear if one can apply it to large crystallite sizes because its derivation is based on the kinematical theory of X-ray diffraction. For large and perfect crystals, it is more appropriate to use the dynamical theory of X-ray diffraction. Because of the appearance of polycrystalline materials with a high degree of crystalline perfection and large sizes, it is the authors' belief that it is important to establish the crystallite size limit for which the Scherrer equation can be applied. In this work, the diffraction peak profiles are calculated using the dynamical theory of X-ray diffraction for several Bragg reflections and crystallite sizes for Si, LaB6 and CeO2. The full width at half-maximum is then extracted and the crystallite size is computed using the Scherrer equation. It is shown that for crystals with linear absorption coefficients below 2117.3 cm(-1) the Scherrer equation is valid for crystallites with sizes up to 600 nm. It is also shown that as the size increases only the peaks at higher 2θ angles give good results, and if one uses peaks with 2θ > 60° the limit for use of the Scherrer equation would go up to 1 µm.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
Masson, M; Angot, H; Le Bescond, C; Launay, M; Dabrin, A; Miège, C; Le Coz, J; Coquery, M
2018-05-10
Monitoring hydrophobic contaminants in surface freshwaters requires measuring contaminant concentrations in the particulate fraction (sediment or suspended particulate matter, SPM) of the water column. Particle traps (PTs) have been recently developed to sample SPM as cost-efficient, easy to operate and time-integrative tools. But the representativeness of SPM collected with PTs is not fully understood, notably in terms of grain size distribution and particulate organic carbon (POC) content, which could both skew particulate contaminant concentrations. The aim of this study was to evaluate the representativeness of SPM characteristics (i.e. grain size distribution and POC content) and associated contaminants (i.e. polychlorinated biphenyls, PCBs; mercury, Hg) in samples collected in a large river using PTs for differing hydrological conditions. Samples collected using PTs (n = 74) were compared with samples collected during the same time period by continuous flow centrifugation (CFC). The grain size distribution of PT samples shifted with increasing water discharge: the proportion of very fine silts (2-6 μm) decreased while that of coarse silts (27-74 μm) increased. Regardless of water discharge, POC contents were different likely due to integration by PT of high POC-content phytoplankton blooms or low POC-content flood events. Differences in PCBs and Hg concentrations were usually within the range of analytical uncertainties and could not be related to grain size or POC content shifts. Occasional Hg-enriched inputs may have led to higher Hg concentrations in a few PT samples (n = 4) which highlights the time-integrative capacity of the PTs. The differences of annual Hg and PCB fluxes calculated either from PT samples or CFC samples were generally below 20%. Despite some inherent limitations (e.g. grain size distribution bias), our findings suggest that PT sampling is a valuable technique to assess reliable spatial and temporal trends of particulate contaminants such as PCBs and Hg within a river monitoring network. Copyright © 2018 Elsevier B.V. All rights reserved.
The structured ancestral selection graph and the many-demes limit.
Slade, Paul F; Wakeley, John
2005-02-01
We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.
Time multiplexing super-resolution nanoscopy based on the Brownian motion of gold nanoparticles
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Wagner, Omer; Zalevsky, Zeev
2017-02-01
Super-resolution localization microscopy can overcome the diffraction limit and achieve a tens of order improvement in resolution. It requires labeling the sample with fluorescent probes followed with their repeated cycles of activation and photobleaching. This work presents an alternative approach that is free from direct labeling and does not require the activation and photobleaching cycles. Fluorescently labeled gold nanoparticles in a solution are distributed on top of the sample. The nanoparticles move in a random Brownian motion, and interact with the sample. By obscuring different areas in the sample, the nanoparticles encode the sub-wavelength features. A sequence of images of the sample is captured and decoded by digital post processing to create the super-resolution image. The achievable resolution is limited by the additive noise and the size of the nanoparticles. Regular nanoparticles with diameter smaller than 100nm are barely seen in a conventional bright field microscope, thus fluorescently labeled gold nanoparticles were used, with proper
Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.
Kubitzki, Marcus B; de Groot, Bert L
2007-06-15
Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
Applying information theory to small groups assessment: emotions and well-being at work.
García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano
2010-05-01
This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, passenger crew (flight attendants). There is an increasing interest in studying the influence of emotions and its role as psychosocial factors in the work environment as they are able to act as facilitators or shock absorbers. The contrast of the theoretical models by using traditional parametric techniques requires a large sample size to the efficient estimation of the coefficients that quantify the relations between variables. Since the available sample that we have is small, the most common size in European enterprises, we used the maximum entropy principle to explore the emotions that are involved in the psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantee an optimal estimation, the results of which are coherent with theoretical models and numerous empirical researches about emotions and well-being.
Ciguatoxic Potential of Brown-Marbled Grouper in Relation to Fish Size and Geographical Origin
Chan, Thomas Y. K.
2015-01-01
To determine the ciguatoxic potential of brown-marbled grouper (Epinephelus fuscoguttatus) in relation to fish size and geographical origin, this review systematically analyzed: 1) reports of large ciguatera outbreaks and outbreaks with description of the fish size; 2) Pacific ciguatoxin (P-CTX) profiles and levels and mouse bioassay results in fish samples from ciguatera incidents; 3) P-CTX profiles and levels and risk of toxicity in relation to fish size and origin; 4) regulatory measures restricting fish trade and fish size preference of the consumers. P-CTX levels in flesh and size dependency of toxicity indicate that the risk of ciguatera after eating E. fuscoguttatus varies with its geographical origin. For a large-sized grouper, it is necessary to establish legal size limits and control measures to protect public health and prevent overfishing. More risk assessment studies are required for E. fuscoguttatus to determine the size threshold above which the risk of ciguatera significantly increases. PMID:26324735
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
High-Field Liquid-State Dynamic Nuclear Polarization in Microliter Samples.
Yoon, Dongyoung; Dimitriadis, Alexandros I; Soundararajan, Murari; Caspers, Christian; Genoud, Jeremy; Alberti, Stefano; de Rijk, Emile; Ansermet, Jean-Philippe
2018-05-01
Nuclear hyperpolarization in the liquid state by dynamic nuclear polarization (DNP) has been of great interest because of its potential use in NMR spectroscopy of small samples of biological and chemical compounds in aqueous media. Liquid state DNP generally requires microwave resonators in order to generate an alternating magnetic field strong enough to saturate electron spins in the solution. As a consequence, the sample size is limited to dimensions of the order of the wavelength, and this restricts the sample volume to less than 100 nL for DNP at 9 T (∼260 GHz). We show here a new approach that overcomes this sample size limitation. Large saturation of electron spins was obtained with a high-power (∼150 W) gyrotron without microwave resonators. Since high power microwaves can cause serious dielectric heating in polar solutions, we designed a planar probe which effectively alleviates dielectric heating. A thin liquid sample of 100 μm of thickness is placed on a block of high thermal conductivity aluminum nitride, with a gold coating that serves both as a ground plane and as a heat sink. A meander or a coil were used for NMR. We performed 1 H DNP at 9.2 T (∼260 GHz) and at room temperature with 10 μL of water, a volume that is more than 100× larger than reported so far. The 1 H NMR signal is enhanced by a factor of about -10 with 70 W of microwave power. We also demonstrated the liquid state of 31 P DNP in fluorobenzene containing triphenylphosphine and obtained an enhancement of ∼200.
Till, J.L.; Jackson, M.J.; Rosenbaum, J.G.; Solheid, P.
2011-01-01
The Tiva Canyon Tuff contains dispersed nanoscale Fe-Ti-oxide grains with a narrow magnetic grain size distribution, making it an ideal material in which to identify and study grain-size-sensitive magnetic behavior in rocks. A detailed magnetic characterization was performed on samples from the basal 5 m of the tuff. The magnetic materials in this basal section consist primarily of (low-impurity) magnetite in the form of elongated submicron grains exsolved from volcanic glass. Magnetic properties studied include bulk magnetic susceptibility, frequency-dependent and temperature-dependent magnetic susceptibility, anhysteretic remanence acquisition, and hysteresis properties. The combined data constitute a distinct magnetic signature at each stratigraphic level in the section corresponding to different grain size distributions. The inferred magnetic domain state changes progressively upward from superparamagnetic grains near the base to particles with pseudo-single-domain or metastable single-domain characteristics near the top of the sampled section. Direct observations of magnetic grain size confirm that distinct transitions in room temperature magnetic susceptibility and remanence probably denote the limits of stable single-domain behavior in the section. These results provide a unique example of grain-size-dependent magnetic properties in noninteracting particle assemblages over three decades of grain size, including close approximations of ideal Stoner-Wohlfarth assemblages, and may be considered a useful reference for future rock magnetic studies involving grain-size-sensitive properties.
Fleming, A; Schenkel, F S; Koeck, A; Malchiodi, F; Ali, R A; Corredig, M; Mallard, B; Sargolzaei, M; Miglior, F
2017-05-01
The objective of this study was to estimate the heritability of milk fat globule (MFG) size and mid-infrared (MIR) predicted MFG size in Holstein cattle. The genetic correlations between measured and predicted MFG size with milk fat and protein percentage were also investigated. Average MFG size was measured in 1,583 milk samples taken from 254 Holstein cows from 29 herds across Canada. Size was expressed as volume moment mean (D[4,3]) and surface moment mean (D[3,2]). Analyzed milk samples also had average MFG size predicted from their MIR spectral records. Fat and protein percentages were obtained for all test-day milk samples in the cow's lactation. Univariate and bivariate repeatability animal models were used to estimate heritability and genetic correlations. Moderate heritabilities of 0.364 and 0.466 were found for D[4,3] and D[3,2], respectively, and a strong genetic correlation was found between the 2 traits (0.98). The heritabilities for the MIR-predicted MFG size were lower than those estimated for the measured MFG size at 0.300 for predicted D[4,3] and 0.239 for predicted D[3,2]. The genetic correlation between measured and predicted D[4,3] was 0.685; the correlation was slightly higher between measured and predicted D[3,2] at 0.764, likely due to the better prediction accuracy of D[3,2]. Milk fat percentage had moderate genetic correlations with both D[4,3] and D[3,2] (0.538 and 0.681, respectively). The genetic correlation between predicted MFG size and fat percentage was much stronger (greater than 0.97 for both predicted D[4,3] and D[3,2]). The stronger correlation suggests a limitation for the use of the predicted values of MFG size as indicator traits for true average MFG size in milk in selection programs. Larger samples sizes are required to provide better evidence of the estimated genetic parameters. A genetic component appears to exist for the average MFG size in bovine milk, and the variation could be exploited in selection programs. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Enhancing the Damping Behavior of Dilute Zn-0.3Al Alloy by Equal Channel Angular Pressing
NASA Astrophysics Data System (ADS)
Demirtas, M.; Atli, K. C.; Yanar, H.; Purcek, G.
2017-06-01
The effect of grain size on the damping capacity of a dilute Zn-0.3Al alloy was investigated. It was found that there was a critical strain value (≈1 × 10-4) below and above which damping of Zn-0.3Al showed dynamic and static/dynamic hysteresis behavior, respectively. In the dynamic hysteresis region, damping resulted from viscous sliding of phase/grain boundaries, and decreasing grain size increased the damping capacity. While the quenched sample with 100 to 250 µm grain size showed very limited damping capacity with a loss factor tanδ of less than 0.007, decreasing grain size down to 2 µm by equal channel angular pressing (ECAP) increased tanδ to 0.100 in this region. Dynamic recrystallization due to microplasticity at the sample surface was proposed as the damping mechanism for the first time in the region where the alloy showed the combined aspects of dynamic and static hysteresis damping. In this region, tanδ increased with increasing strain amplitude, and ECAPed sample showed a tanδ value of 0.256 at a strain amplitude of 2 × 10-3, the highest recorded so far in the damping capacity-related studies on ZA alloys.
Use of synchrotron tomography to image naturalistic anatomy in insects
NASA Astrophysics Data System (ADS)
Socha, John J.; De Carlo, Francesco
2008-08-01
Understanding the morphology of anatomical structures is a cornerstone of biology. For small animals, classical methods such as histology have provided a wealth of data, but such techniques can be problematic due to destruction of the sample. More importantly, fixation and physical slicing can cause deformation of anatomy, a critical limitation when precise three-dimensional data are required. Modern techniques such as confocal microscopy, MRI, and tabletop x-ray microCT provide effective non-invasive methods, but each of these tools each has limitations including sample size constraints, resolution limits, and difficulty visualizing soft tissue. Our research group at the Advanced Photon Source (Argonne National Laboratory) studies physiological processes in insects, focusing on the dynamics of breathing and feeding. To determine the size, shape, and relative location of internal anatomy in insects, we use synchrotron microtomography at the beamline 2-BM to image structures including tracheal tubes, muscles, and gut. Because obtaining naturalistic, undeformed anatomical information is a key component of our studies, we have developed methods to image fresh and non-fixed whole animals and tissues. Although motion artifacts remain a problem, we have successfully imaged multiple species including beetles, ants, fruit flies, and butterflies. Here we discuss advances in biological imaging and highlight key findings in insect morphology.
Fall prevention in high-risk patients.
Shuey, Kathleen M; Balch, Christine
2014-12-01
In the oncology population, disease process and treatment factors place patients at risk for falls. Fall bundles provide a framework for developing comprehensive fall programs in oncology. Small sample size of interventional studies and focus on ambulatory and geriatric populations limit the applicability of results. Additional research is needed. Copyright © 2014 Elsevier Inc. All rights reserved.
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...
Modeling Reading Growth in Grades 3 to 5 with an Alternate Assessment
ERIC Educational Resources Information Center
Farley, Dan; Anderson, Daniel; Irvin, P. Shawn; Tindal, Gerald
2017-01-01
Modeling growth for students with significant cognitive disabilities (SWSCD) is difficult due to a variety of factors, including, but not limited to, missing data, test scaling, group heterogeneity, and small sample sizes. These challenges may account for the paucity of previous research exploring the academic growth of SWSCD. Our study represents…
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
Genetic sampling of Palmer's chipmunks in the Spring Mountains, Nevada
Kevin S. McKelvey; Jennifer E. Ramirez; Kristine L. Pilgrim; Samuel A. Cushman; Michael K. Schwartz
2013-01-01
Palmer's chipmunk (Neotamias palmeri) is a medium-sized chipmunk whose range is limited to the higher-elevation areas of the Spring Mountain Range, Nevada. A second chipmunk species, the Panamint chipmunk (Neotamias panamintinus), is more broadly distributed and lives in lower-elevation, primarily pinyon-juniper (Pinus monophylla-Juniperus osteosperma) habitat...
Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores
ERIC Educational Resources Information Center
Chan, Wendy
2018-01-01
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
An IRT Analysis of Preservice Teacher Self-Efficacy in Technology Integration
ERIC Educational Resources Information Center
Browne, Jeremy
2011-01-01
The need for rigorously developed measures of preservice teacher traits regarding technology integration training has been acknowledged (Kay 2006), but such instruments are still extremely rare. The Technology Integration Confidence Scale (TICS) represents one such measure, but past analyses of its functioning have been limited by sample size and…
Modeling Valuations from Experience: A Comment on Ashby and Rakow (2014)
ERIC Educational Resources Information Center
Wulff, Dirk U.; Pachur, Thorsten
2016-01-01
What are the cognitive mechanisms underlying subjective valuations formed on the basis of sequential experiences of an option's possible outcomes? Ashby and Rakow (2014) have proposed a sliding window model (SWIM), according to which people's valuations represent the average of a limited sample of recent experiences (the size of which is estimated…
The application of nirvana to silvicultural studies
Chi-Leung So; Thomas Elder; Leslie Groom; John S. Kush; Jennifer Myszewski; Todd Shupe
2006-01-01
Previous results from this laboratory have shown that near infrared (NIR) spectroscopy, coupled with multivariate analysis, can be a powerful tool for the prediction of wood quality. While wood quality measurements are of utility, their determination can be both time and labor intensive, thus limiting their use where large sample sizes are concerned. This paper will...
USDA-ARS?s Scientific Manuscript database
Very few genetic variants have been associated with depression and neuroticism, likely because of limitations on sample size in previous studies. Subjective well-being, a phenotype that is genetically correlated with both of these traits, has not yet been studied with genome-wide data. We conducted ...
Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine
2017-01-13
Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
Evaluation of aeolian emissions from gold mine tailings on the Witwatersrand
NASA Astrophysics Data System (ADS)
Ojelede, M. E.; Annegarn, H. J.; Kneen, M. A.
2012-01-01
The Witwatersrand is known for the high frequency of aeolian dust storm episodes arising from gold mine tailings storage facilities (TSFs). Source and ambient atmosphere are poorly characterized from the point of view of particle size distribution and human health risk assessment. For years, routine monitoring was limited to sampling of dust fallout ⩾30 μm. Sampling and analyses of source and receptor material was conducted. Thirty-two bulk soils were collected from TSF along the east-west mining corridor, and size distribution analysis was performed in the range 0.05-900 μm using a Malvern® MS-14 Particle Size Analyser. Ambient aerosols in the range 0.25-32 μm were monitored at two separate locations using a Grimm® aerosol monitor, in the vicinity of three large currently active and a dormant TSF. Statistical analyses indicate that TSFs are rich in fine erodible materials, particularly active TSFs. Concentration of ⩽PM5 and ⩽PM10 components in source material was: recent slimes (14-24 vol.%; 22-38 vol.%), older slimes (6-17 vol.%; 11-26 vol.%) and sand (1-8 vol.%; 2-12 vol.%). Concentrations of airborne aerosols were below the South African Department of Environmental Affairs 24-h limit value of 120 μg m -3. With wind speeds exceeding 7 ms -1, ambient concentration reached 2160 μg m -3. This maximum is several times higher than the limit value. Erosion of tailings storage facilities is a strong driver influencing ambient particulate matter loading with adverse health implications for nearby residents.
Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples
NASA Technical Reports Server (NTRS)
Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.
2004-01-01
Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity, low velocity variable oxygen concentration flows. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch mass reasons. The feasible sample size is too small for smolder to self propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples vertically placed in a wind tunnel. Three of the sample lateral-sides are maintained at elevated temperature and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity and increasing its oxygen concentration, and/or increasing the radiant flux enhances the transition to flaming, and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.
The Influences of Soil Characteristics on Nest-Site Selection in Painted Turtles (Chrysemys picta)
NASA Astrophysics Data System (ADS)
Page, R.
2017-12-01
A variety of animals dig nests and lay their eggs in soil, leaving them to incubate and hatch without assistance from the parents. Nesting habitat is important for these organisms many of which exhibit temperature dependent sex determination (TSD) whereby the incubation temperature determines the sex of each hatchling. However, suitable nesting habitat may be limited due to anthropogenic activities and global temperature increases. Soil thermal properties are critical to these organisms and are positively correlated with water retention and soil carbon; carbon-rich soils result in higher incubation temperatures. We investigated nest-site selection in painted turtles (Chrysemys picta) inhabiting an anthropogenic pond in south central Pennsylvania. We surveyed for turtle nests and documented location, depth, width, temperature, canopy coverage, clutch size, and hatch success for a total of 31 turtle nests. To address the influence of soil carbon and particle size on nest selection, we analyzed samples collected from: 1) actual nests that were depredated, 2) false nests, incomplete nests aborted during digging prior to nest completion, and 3) randomized locations. Soil samples were separated into coarse, medium, and fine grain size fractions through a stack of sieves. Samples were combusted in a total carbon analyzer to measure weight percent organic carbon. We found that anthropogenic activity at this site has created homogenous, sandy, compacted soils at the uppermost layer that may limit females' access to appropriate nesting habitat. Turtle nesting activity was limited to a linear region north of the pond and was constrained by an impassable rail line. Relative to other studies, turtle nests were notably shallow (5.8±0.9 cm) and placed close to the pond. Compared to false nests and random locations, turtle-selected sites averaged greater coarse grains (35% compared to 20.24 and 20.57%) and less fine grains (47% compared to 59 and 59, respectively). Despite remarkably high soil carbon along the rail line (47.08%) turtles nested here with slightly higher hatch success. We suggest that the turtles are limited to sandy, compact soils with low heat capacities and may compensate for this by also nesting adjacent to the rail line where high soil carbon could increase incubation temperatures.
Horowitz, A.J.; Smith, J.J.; Elrick, K.A.
2001-01-01
A prototype 14-L Teflon? churn splitter was evaluated for whole-water sample-splitting capabilities over a range of sediment concentratons and grain sizes as well as for potential chemical contamination from both organic and inorganic constituents. These evaluations represent a 'best-case' scenario because they were performed in the controlled environment of a laboratory, and used monomineralic silica sand slurries of known concentration made up in deionized water. Further, all splitting was performed by a single operator, and all the requisite concentration analyses were performed by a single laboratory. The prototype Teflon? churn splitter did not appear to supply significant concentrations of either organic or inorganic contaminants at current U.S. Geological Survey (USGS) National Water Quality Laboratory detection and reporting limits when test samples were prepared using current USGS protocols. As with the polyethylene equivalent of the prototype Teflon? churn, the maximum usable whole-water suspended sediment concentration for the prototype churn appears to lie between 1,000 and 10,000 milligrams per liter (mg/L). Further, the maximum grain-size limit appears to lie between 125- and 250-microns (m). Tests to determine the efficacy of the valve baffle indicate that it must be retained to facilitate representative whole-water subsampling.
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km(2)) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km(2)) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km(2) for the California status and trends program.
Generation of sub-femtoliter droplet by T-junction splitting on microfluidic chips
NASA Astrophysics Data System (ADS)
Yang, Yu-Jun; Feng, Xuan; Xu, Na; Pang, Dai-Wen; Zhang, Zhi-Ling
2013-03-01
In the paper, sub-femtoliter droplets were easily produced by droplet splitting at a simple T-junction with orifice, which did not need expensive equipments, complex photolithography skill, or high energy input. The volume of the daughter droplet was not limited by channel size but controlled by channel geometry and fluidic characteristic. Moreover, single bead sampling and bead quantification in different orders of magnitude of droplet volumes were investigated. The droplets split at our T-junction chip had small volume and monodispersed size and could be produced efficiently, orderly, and controllably.
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
Incorporating Biological Knowledge into Evaluation of Casual Regulatory Hypothesis
NASA Technical Reports Server (NTRS)
Chrisman, Lonnie; Langley, Pat; Bay, Stephen; Pohorille, Andrew; DeVincenzi, D. (Technical Monitor)
2002-01-01
Biological data can be scarce and costly to obtain. The small number of samples available typically limits statistical power and makes reliable inference of causal relations extremely difficult. However, we argue that statistical power can be increased substantially by incorporating prior knowledge and data from diverse sources. We present a Bayesian framework that combines information from different sources and we show empirically that this lets one make correct causal inferences with small sample sizes that otherwise would be impossible.
Reschiglian, P; Roda, B; Zattoni, A; Tanase, M; Marassi, V; Serani, S
2014-02-01
The rapid development of protein-based pharmaceuticals highlights the need for robust analytical methods to ensure their quality and stability. Among proteins used in pharmaceutical applications, an important and ever increasing role is represented by monoclonal antibodies and large proteins, which are often modified to enhance their activity or stability when used as drugs. The bioactivity and the stability of those proteins are closely related to the maintenance of their complex structure, which however are influenced by many external factors that can cause degradation and/or aggregation. The presence of aggregates in these drugs could reduce their bioactivity and bioavailability, and induce immunogenicity. The choice of the proper analytical method for the analysis of aggregates is fundamental to understand their (size) dimensional range, their amount, and if they are present in the sample as generated by an aggregation or as an artifact due to the method itself. Size exclusion chromatography is one of the most important techniques for the quality control of pharmaceutical proteins; however, its application is limited to relatively low molar mass aggregates. Among the techniques for the size characterization of proteins, field-flow fractionation (FFF) represents a competitive choice because of its soft mechanism due to the absence of a stationary phase and application in a broader size range, from nanometer- to micrometer-sized analytes. In this paper, the microcolumn variant of FFF, the hollow-fiber flow FFF, was online coupled with multi-angle light scattering, and a method for the characterization of aggregates with high reproducibility and low limit of detection was demonstrated employing an avidin derivate as sample model.
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardia MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after confirming left ventricular hypertrophy using a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data to estimate sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated using the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated measures analysis model, using available data at Week 24 and 48 with a backup model of an ANCOVA analyzing change from baseline to the final nonmissing observation, are pre-specified to evaluate the treatment effect. Gamma-family of spending function is employed to control family-wise Type I error rate as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percent of missing week 48 data might decrease the parameter estimation accuracy, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision-making. The application of combining a group sequential design with a sample-size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring integrity of the study.
2011-01-01
Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30% and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial. Trial registration Current Controlled Trials ISRCTN61649292 PMID:21288326
Pralatnet, Sasithorn; Poapolathep, Saranya; Giorgi, Mario; Imsilp, Kanjana; Kumagai, Susumu; Poapolathep, Amnart
2016-07-01
One hundred wheat product samples (50 instant noodle samples and 50 bread samples) were collected from supermarkets in Bangkok, Thailand. Deoxynivalenol (DON) and aflatoxin B1 (AFB1) contamination in these products was analyzed using a validated liquid chromatography-tandem mass spectrometry method. The limit of quantification values of DON and AFB1 in the instant noodles and bread were 2 and 1 ng g(-1), respectively. The survey found that DON was quantifiable in 40% of collected samples, in 2% of noodles (0.089 μg g(-1)), and in 78% of breads (0.004 to 0.331 μg g(-1)). AFB1 was below the limit of quantification of the method in all of the tested samples. The results suggest that the risk of DON exposure via noodles and breads is very low in urban areas of Thailand. No risk can be attributable to AFB1 exposure in the same food matrices, but further studies with a larger sample size are needed to confirm these data.
Thermal conductivity measurements of particulate materials under Martian conditions
NASA Technical Reports Server (NTRS)
Presley, M. A.; Christensen, P. R.
1993-01-01
The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurement. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas particles are then more likely to collide with the solid particles than they are with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material.The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size a linear heat source apparatus, similar to that of Cremers, was assembled to provide a means of measuring the thermal conductivity of particulate samples. In order to concentrate on the dependence of the thermal conductivity on particle size, initial runs will use spherical glass beads that are precision sieved into relatively small size ranges and thoroughly washed.
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
2001-01-24
Typical metal sample that was processed by TEMPUS (Tiegelfreies Elektromagnetisches Prozessieren Unter Schwerelosigkeit), an electromagnetic levitation facility developed by German researchers and flown on the IML-2 and MSL-1 and 1R Spacelab missions. Electromagnetic levitation is used commonly in ground-based experiments to melt and then cool metallic melts below their freezing points without solidification occurring. Sample size is limited in ground-based experiments. Research with TEMPUS aboard Spacelab allowed scientists to study the viscosity, surface tension, and other properties of several metals and alloys while undercooled (i.e., cooled below their normal solidification points). The sample is about 1 cm (2/5 inch) in diameter.
Effects of grinding processes on enzymatic degradation of wheat straw.
Silva, Gabriela Ghizzi D; Couturier, Marie; Berrin, Jean-Guy; Buléon, Alain; Rouau, Xavier
2012-01-01
The effectiveness of fine to ultra-fine grinding of wheat straw at pilot scale was studied. The produced powders were characterised by their particle-size distribution (laser diffraction), crystallinity (WAXS) and enzymatic degradability (Trichoderma reesei enzymatic cocktail). A wide range of wheat-straw powders was produced: from coarse (median particle size ∼800 μm) to fine particles (∼50 μm) using sieve-based grindings, then ultra-fine particles of ∼20 μm by jet milling and ∼10 μm by ball milling. Wheat straw degradability was enhanced by the decrease of particle size down to a limit of ∼100 μm, reaching hydrolysis yields of up to 36% for total carbohydrate and 40% for glucose. Ball-milled samples overcame this limit, reaching up to 46% total carbohydrate and 72% glucose yields, as a consequence of the reduction of cellulose crystallinity (from 22% to 13%). Ball milling appeared to be an effective pretreatment, with a similar glucose yield and a superior carbohydrate yield compared to steam explosion pretreatment. Copyright © 2011 Elsevier Ltd. All rights reserved.
Light-scattering flow cytometry for identification and characterization of blood microparticles
NASA Astrophysics Data System (ADS)
Konokhova, Anastasiya I.; Yurkin, Maxim A.; Moskalensky, Alexander E.; Chernyshev, Andrei V.; Tsvetovskaya, Galina A.; Chikova, Elena D.; Maltsev, Valeri P.
2012-05-01
We describe a novel approach to study blood microparticles using the scanning flow cytometer, which measures light-scattering patterns (LSPs) of individual particles. Starting from platelet-rich plasma, we separated spherical microparticles from non-spherical plasma constituents, such as platelets and cell debris, based on the similarity of their LSP to that of a sphere. This provides a label-free method for identification (detection) of microparticles, including those larger than 1 μm. Next, we rigorously characterized each measured particle, determining its size and refractive index, including errors of these estimates. Finally, we employed a deconvolution algorithm to determine the size and refractive index distributions of the whole population of microparticles, accounting for the largely different reliability of individual measurements. The developed methods were tested on a blood sample of a healthy donor, resulting in good agreement with literature data. The only limitation of this approach is the size detection limit, which is currently about 0.5 μm due to the laser wavelength of 0.66 μm used.
Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E
2006-01-01
More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors-a key component to understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust, with a sample size of six fleas or greater not significantly impacting the percentage of the in-state area where the flea was predicted to be found, or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.
The Role of Remote Sensing in Assessing Forest Biomass in Appalachian South Carolina
NASA Technical Reports Server (NTRS)
Shain, W.; Nix, L.
1982-01-01
Information is presented on the use of color infrared aerial photographs and ground sampling methods to quantify standing forest biomass in Appalachian South Carolina. Local tree biomass equations are given and subsequent evaluation of stand density and size classes using remote sensing methods is presented. Methods of terrain analysis, environmental hazard rating, and subsequent determination of accessibility of forest biomass are discussed. Computer-based statistical analyses are used to expand individual cover-type specific ground sample data to area-wide cover type inventory figures based on aerial photographic interpretation and area measurement. Forest biomass data are presented for the study area in terms of discriminant size classes, merchantability limits, accessibility (as related to terrain and yield/harvest constraints), and potential environmental impact of harvest.
Pore size distribution and supercritical hydrogen adsorption in activated carbon fibers
NASA Astrophysics Data System (ADS)
Purewal, J. J.; Kabbour, H.; Vajo, J. J.; Ahn, C. C.; Fultz, B.
2009-05-01
Pore size distributions (PSD) and supercritical H2 isotherms have been measured for two activated carbon fiber (ACF) samples. The surface area and the PSD both depend on the degree of activation to which the ACF has been exposed. The low-surface-area ACF has a narrow PSD centered at 0.5 nm, while the high-surface-area ACF has a broad distribution of pore widths between 0.5 and 2 nm. The H2 adsorption enthalpy in the zero-coverage limit depends on the relative abundance of the smallest pores relative to the larger pores. Measurements of the H2 isosteric adsorption enthalpy indicate the presence of energy heterogeneity in both ACF samples. Additional measurements on a microporous, coconut-derived activated carbon are presented for reference.
Comparison. US P-61 and Delft sediment samplers
Beverage, Joseph P.; Williams, David T.
1990-01-01
The Delft Bottle (DB) is a flow-through device designed by the Delft Hydraulic Laboratory (DHL), The Netherlands, to sample sand-sized sediment suspended in streams. The US P-61 sampler was designed by the Federal Interagency Sedimentation Project (FISP) at the St. Anthony Falls Hydraulic Laboratory, Minneapolis, Minnesota, to collect suspended sediment from deep, swift rivers. The results of two point-sampling tests in the United States, the Mississippi River near Vicksburg, Mississippi, in 1983 and the Colorado River near Blythe, California, in 1984, are provided in this report. These studies compare sand-transport rates, rather than total sediment-transport rates, because fine material washes through the DB sampler. In the United States, the commonly used limits for sand-sized material are 0.062 mm to 2.00 mm (Vanoni 1975).
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision. Its applications to data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
The kilometer-sized Main Belt asteroid population revealed by Spitzer
NASA Astrophysics Data System (ADS)
Ryan, E. L.; Mizuno, D. R.; Shenoy, S. S.; Woodward, C. E.; Carey, S. J.; Noriega-Crespo, A.; Kraemer, K. E.; Price, S. D.
2015-06-01
Aims: Multi-epoch Spitzer Space Telescope 24 μm data is utilized from the MIPSGAL and Taurus Legacy surveys to detect asteroids based on their relative motion. Methods: Infrared detections are matched to known asteroids and average diameters and albedos are derived using the near Earth asteroid thermal model (NEATM) for 1865 asteroids ranging in size from 0.2 to 169 km. A small subsample of these objects was also detected by IRAS or MSX and the single wavelength albedo and diameter fits derived from these data are within the uncertainties of the IRAS and/or MSX derived albedos and diameters and available occultation diameters, which demonstrates the robustness of our technique. Results: The mean geometric albedo of the small Main Belt asteroids in this sample is pV = 0.134 with a sample standard deviation of 0.106. The albedo distribution of this sample is far more diverse than the IRAS or MSX samples. The cumulative size-frequency distribution of asteroids in the Main Belt at small diameters is directly derived and a 3σ deviation from the fitted size-frequency distribution slope is found near 8 km. Completeness limits of the optical and infrared surveys are discussed. Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A42
Diffraction limited focusing and routing of gap plasmons by a metal-dielectric-metal lens
Dennis, Brian S.; Czaplewski, David A.; Haftel, Michael I.; ...
2015-08-12
Passive optical elements can play key roles in photonic applications such as plasmonic integrated circuits. Here we experimentally demonstrate passive gap-plasmon focusing and routing in two dimensions. This is accomplished using a high numerical-aperture metal-dielectric-metal lens incorporated into a planar-waveguide device. Fabrication via metal sputtering, oxide deposition, electron- and focused-ion-beam lithography, and argon ion-milling is reported on in detail. Diffraction-limited focusing is optically characterized by sampling out-coupled light with a microscope. The measured focal distance and full-width-half-maximum spot size agree well with the calculated lens performance. The surface plasmon polariton propagation length is measured by sampling light from multiple out-coupler slits.
Riemer, Michael F.; Collins, Brian D.; Badger, Thomas C.; Toth, Csilla; Yu, Yat Chun
2015-01-01
This report provides a description of the methods used to obtain and test the intact soil stratigraphy behind the headscarp of the March 22 landslide. Detailed geotechnical index testing results are presented for 24 soil samples representing the stratigraphy at 19 different depths along a 650 ft (198 m) soil profile. The results include (1) the soil's in situ water content and unit weight (where applicable); (2) specific gravity of soil solids; and (3) each sample's grain-size distribution, critical limits for fine-grain water content states (that is, the Atterberg limits), and official Unified Soil Classification System (USCS) designation. In addition, preliminary stratigraphy and geotechnical relations within and between soil units are presented.
NASA Technical Reports Server (NTRS)
Meisch, A. J.
1972-01-01
Data for the system n-pentane/n-heptane on porous Chromosorb-102 adsorbent were obtained at 150, 175, and 200 C for mixtures containing zero to 100% n-pentane by weight. Prior results showing limitations on superposition of pure component data to predict multicomponent chromatograms were verified. The thermodynamic parameter MR0 was found to be a linear function of sample composition. A nonporous adsorbent failed to separate the system because of large input sample dispersions. A proposed automated data processing scheme involving magnetic tape recording of the detector signals and processing by a minicomputer was rejected because of resolution limitations of the available a/d converters. Preliminary data on porosity and pore size distributions of the adsorbents were obtained.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
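A small synthetic sketch of the MCP sample-size effect described above; random points stand in for GPS fixes, and none of the values are the study's data.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Sketch: estimate a minimum convex polygon (MCP) home range from random subsets
# of locations to show how the area estimate grows and stabilizes with sample size.

rng = np.random.default_rng(1)
locations = rng.normal(0.0, 5.0, size=(400, 2))      # hypothetical fixes, km

def mcp_area(points):
    """Area of the minimum convex polygon; for 2-D input, ConvexHull.volume is the area."""
    return ConvexHull(points).volume

for n in (15, 60, 120, 400):
    areas = [mcp_area(locations[rng.choice(400, n, replace=False)])
             for _ in range(200)]
    print(f"n={n:3d} locations: MCP area {np.mean(areas):6.1f} km^2 "
          f"(CV {100*np.std(areas)/np.mean(areas):.1f}%)")
```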
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
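The authors' open-source model is written in R; the following is a simplified Python sketch of the same sample-size idea, with illustrative (assumed) values for the population spread, the diagenetic offset and the alteration rate.

```python
import numpy as np

# Sketch: Monte Carlo estimate of how far the sample-mean d18O of a group of
# foraminifera can drift from the true mean when a fraction of specimens is
# diagenetically altered. Parameter values are assumptions, not the paper's model.

rng = np.random.default_rng(2)

def mean_offset(n_tests, pop_sd=0.5, altered_frac=0.1, alteration_shift=3.0,
                trials=50000):
    """Mean absolute error of the sample-mean d18O relative to the true mean."""
    true_mean = 0.0
    vals = rng.normal(true_mean, pop_sd, size=(trials, n_tests))
    altered = rng.random((trials, n_tests)) < altered_frac
    vals = np.where(altered, vals + alteration_shift, vals)
    return np.mean(np.abs(vals.mean(axis=1) - true_mean))

for n in (1, 5, 10, 30, 100):
    print(f"{n:3d} tests per sample: mean offset {mean_offset(n):.3f} permil")
```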
Gurian, Elizabeth A
2018-05-01
Research on mass murder is limited due to differences in definitions (particularly with respect to victim count), as well as categorizations based on motive. These limitations restrict our understanding of the offending, adjudication, and outcome patterns of these offenders and can obscure potential underlying similarities to comparable types of offenders (e.g., lone actors or terrorists). To address some of these limitations, this research study, which includes an international sample of 434 cases (455 total offenders), uses descriptive and empirical analyses of solo male, solo female, and partnered mass murderers (teams of two or more) to explore offending, adjudication, and outcome patterns among these different types of offenders. While the results from this research study support much previous mass murder research, the findings also emphasize the importance of large international sample sizes, objective categorizations, and the use of empirically based analyses to further advance our understanding of these offenders.
NASA Astrophysics Data System (ADS)
Terada, T.; Sato, M.; Mochizuki, N.; Yamamoto, Y.; Tsunakawa, H.
2013-12-01
Magnetic properties of ferromagnetic minerals generally depend on their chemical composition, crystal structure, size, and shape. In a typical paleomagnetic study, we use a bulk sample, which is an assemblage of magnetic minerals showing broad distributions of various magnetic properties. Microscopic and Curie-point observations of the bulk sample enable us to identify the constituent magnetic minerals, while other measurements, for example stepwise thermal and/or alternating field demagnetizations (ThD, AFD), make it possible to estimate the size, shape and domain state of the constituent magnetic grains. However, estimation based on stepwise demagnetizations has the limitation that magnetic grains with the same coercivity Hc (or blocking temperature Tb) are identified as a single population even though they could have different sizes and shapes. Dunlop and West (1969) carried out mapping of grain size and coercivity (Hc) using pTRM. However, their mapping method is considered to be applicable only to natural rocks containing SD grains, since the grain sizes are estimated on the basis of single domain theory (Neel, 1949). In addition, it is impossible to check for thermal alteration due to laboratory heating in their experiment. In the present study we propose a new experimental method which makes it possible to estimate the distribution of size and shape of magnetic minerals in a bulk sample. The present method is composed of simple procedures: (1) imparting ARM to a bulk sample, (2) ThD at a certain temperature, (3) stepwise AFD on the remaining ARM, (4) repeating steps (1)-(3) with ThD at increasing temperatures up to the Curie temperature of the sample. After completion of the whole procedure, ARM spectra are calculated and mapped on the Hc-Tb plane (hereafter called the Hc-Tb diagram). We analyze the Hc-Tb diagrams as follows: (1) For uniaxial SD populations, a theoretical curve for a given grain size (or shape anisotropy) is drawn on the Hc-Tb diagram. The curves are calculated using single domain theory, since the coercivity and blocking temperature of uniaxial SD grains can be expressed as a function of size and shape. (2) The boundary between SD and MD grains is calculated and drawn on the Hc-Tb diagram according to the theory of Butler and Banerjee (1975). (3) The theoretical predictions from (1) and (2) are compared with the obtained ARM spectra to estimate the quantitative distribution of size, shape and domain state of magnetic grains in the sample. This mapping method has been applied to three samples: Hawaiian basaltic lava extruded in 1995, Ueno basaltic lava formed during the Matuyama chron, and Oshima basaltic lava extruded in 1986. We will discuss the physical states of magnetic grains (size, shape, domain state, etc.) and their possible origins.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
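An illustrative sketch of the contrast the abstract draws between infinitesimal and non-infinitesimal priors on marker effect sizes; this is not the BOLT-LMM implementation, and the heritability and mixture parameters are assumed values.

```python
import numpy as np

# Sketch: draw SNP effect sizes from an infinitesimal Gaussian prior versus a
# Gaussian-mixture prior in which only a small fraction of markers is causal.

rng = np.random.default_rng(3)
n_snps = 100_000
h2 = 0.25                                   # assumed heritability captured by SNPs

# Infinitesimal model: every marker gets a small normal effect.
beta_inf = rng.normal(0.0, np.sqrt(h2 / n_snps), n_snps)

# Non-infinitesimal mixture: a small fraction of markers carries most of the signal.
p_causal = 0.01
causal = rng.random(n_snps) < p_causal
beta_mix = np.where(causal,
                    rng.normal(0.0, np.sqrt(h2 / (p_causal * n_snps)), n_snps),
                    0.0)

for name, beta in (("infinitesimal", beta_inf), ("mixture", beta_mix)):
    print(f"{name:13s}: total variance={beta.var()*n_snps:.3f}, "
          f"fraction of markers with |beta|>1e-3: {(np.abs(beta) > 1e-3).mean():.3f}")
```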
Application of a Probalistic Sizing Methodology for Ceramic Structures
NASA Astrophysics Data System (ADS)
Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit
2012-07-01
Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads to sizing them by increasing the safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with mass, the major design driver in space architecture. This paper presents a methodology to size ceramic structures based on their failure probability. From failure tests on samples, the Weibull law that characterizes the strength distribution of the material is obtained. The A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
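A hedged sketch of the general workflow described above, not the authors' actual tool: fit a Weibull law to sample failure stresses and read off low-probability quantiles analogous to the A- and B-values. The synthetic strength data and the omission of the confidence correction for the limited sample size are simplifications.

```python
import numpy as np
from scipy import stats

# Sketch: fit a two-parameter Weibull law to failure stresses and evaluate
# quantiles at the low failure probabilities quoted in the abstract.

rng = np.random.default_rng(4)
failure_stress = stats.weibull_min.rvs(c=10.0, scale=300.0, size=30,
                                       random_state=rng)   # MPa, synthetic data

shape, loc, scale = stats.weibull_min.fit(failure_stress, floc=0.0)
print(f"fitted Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.1f} MPa")

for label, prob in (("A-value-like (P_f = 0.0195%)", 1.95e-4),
                    ("B-value-like (P_f = 0.195%)", 1.95e-3)):
    stress = stats.weibull_min.ppf(prob, shape, loc=0.0, scale=scale)
    print(f"{label}: allowable stress ~ {stress:.1f} MPa")
```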
Blood platelet counts, morphology and morphometry in lions, Panthera leo.
Du Plessis, L
2009-09-01
Due to logistical problems in obtaining sufficient blood samples from apparently healthy animals in the wild in order to establish normal haematological reference values, only limited information regarding the blood platelet count and morphology of free-living lions (Panthera leo) is available. This study provides information on platelet counts and describes their morphology with particular reference to size in two normal, healthy and free-ranging lion populations. Blood samples were collected from a total of 16 lions. Platelet counts, determined manually, ranged between 218 and 358 x 10(9)/l. Light microscopy showed mostly activated platelets of various sizes with prominent granules. At the ultrastructural level the platelets revealed typical mammalian platelet morphology. However, morphometric analysis revealed a significant difference (P < 0.001) in platelet size between the two groups of animals. Basic haematological information obtained in this study may be helpful in future comparative studies between animals of the same species as well as in other felids.
Diffusion NMR methods applied to xenon gas for materials study
NASA Technical Reports Server (NTRS)
Mair, R. W.; Rosen, M. S.; Wang, R.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report initial NMR studies of (i) xenon gas diffusion in model heterogeneous porous media and (ii) continuous flow laser-polarized xenon gas. Both areas utilize the pulsed gradient spin-echo (PGSE) techniques in the gas phase, with the aim of obtaining more sophisticated information than just translational self-diffusion coefficients--a brief overview of this area is provided in the Introduction. The heterogeneous or multiple-length scale model porous media consisted of random packs of mixed glass beads of two different sizes. We focus on observing the approach of the time-dependent gas diffusion coefficient, D(t) (an indicator of mean squared displacement), to the long-time asymptote, with the aim of understanding the long-length scale structural information that may be derived from a heterogeneous porous system. We find that D(t) of imbibed xenon gas at short diffusion times is similar for the mixed bead pack and a pack of the smaller sized beads alone, hence reflecting the pore surface area to volume ratio of the smaller bead sample. The approach of D(t) to the long-time limit follows that of a pack of the larger sized beads alone, although the limiting D(t) for the mixed bead pack is lower, reflecting the lower porosity of the sample compared to that of a pack of mono-sized glass beads. The Pade approximation is used to interpolate D(t) data between the short- and long-time limits. Initial studies of continuous flow laser-polarized xenon gas demonstrate velocity-sensitive imaging of much higher flows than can generally be obtained with liquids (20-200 mm s⁻¹). Gas velocity imaging is, however, found to be limited to a resolution of about 1 mm s⁻¹ owing to the high diffusivity of gases compared with liquids. We also present the first gas-phase NMR scattering, or diffusive-diffraction, data, namely flow-enhanced structural features in the echo attenuation data from laser-polarized xenon flowing through a 2 mm glass bead pack. Copyright © 2002 John Wiley & Sons, Ltd.
Dental size variation in the Atapuerca-SH Middle Pleistocene hominids.
Bermúdez de Castro, J M; Sarmiento, S; Cunha, E; Rosas, A; Bastir, M
2001-09-01
The Middle Pleistocene Atapuerca-Sima de los Huesos (SH) site in Spain has yielded the largest sample of fossil hominids so far found from a single site and belonging to the same biological population. The SH dental sample includes a total of 452 permanent and deciduous teeth, representing a minimum of 27 individuals. We present a study of the dental size variation in these hominids, based on the analysis of the mandibular permanent dentition: lateral incisors, n=29; canines, n=27; third premolars, n=30; fourth premolars, n=34; first molars, n=38; second molars, n=38. We have obtained the buccolingual diameter and the crown area (measured on occlusal photographs) of these teeth, and used the bootstrap method to assess the amount of variation in the SH sample compared with the variation of a modern human sample from the Museu Antropologico of the Universidade of Coimbra (Portugal). The SH hominids have, in general terms, a dental size variation higher than that of the modern human sample. The analysis is especially conclusive for the canines. Furthermore, we have estimated the degree of sexual dimorphism of the SH sample by obtaining male and female dental subsamples by means of sexing the large sample of SH mandibular specimens. We obtained the index of sexual dimorphism (ISD=male mean/female mean) and the values were compared with those obtained from the sexed modern human sample from Coimbra, and with data found in the literature concerning several recent human populations. In all tooth classes the ISD of the SH hominids was higher than that of modern humans, but the differences were generally modest, except for the canines, thus suggesting that canine size sexual dimorphism in Homo heidelbergensis was probably greater than that of modern humans. Since the approach of sexing fossil specimens has some obvious limitations, these results should be assessed with caution. Additional data from SH and other European Middle Pleistocene sites would be necessary to test this hypothesis. Copyright 2001 Academic Press.
The long range voice coil atomic force microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, H.; Randall, C.; Bridges, D.
2012-02-15
Most current atomic force microscopes (AFMs) use piezoelectric ceramics for scan actuation. Piezoelectric ceramics provide precision motion with fast response to applied voltage potential. A drawback to piezoelectric ceramics is their inherently limited ranges. For many samples this is a nonissue, as imaging the nanoscale details is the goal. However, a key advantage of AFM over other microscopy techniques is its ability to image biological samples in aqueous buffer. Many biological specimens have topography for which the range of piezoactuated stages is limiting, a notable example of which is bone. In this article, we present the use of voice coils in scan actuation for an actuation range in the Z-axis an order of magnitude larger than any AFM commercially available today. The increased scan size will allow for imaging an important new variety of samples, including bone fractures.
MOHAMED, Sabrein H.; EL-ANSARY, Aida L.; EL-AZIZ, Eman M. Abd
2017-01-01
Crystalline free silica is considered a lung carcinogen, and occupational exposure to its dust is a health hazard to workers employed in industries that involve mineral ores and dust. In Egypt, thousands of people work under conditions of silica dust exposure exceeding the occupational exposure limit; as a result, monitoring of this occupational exposure to crystalline silica dust is required by government legislation. The assessment of the latter is a multi-phase process, depending on workplace measurements, quantitative analyses of samples, and comparison of results with the permissible limits. This study aims to investigate occupational exposure to crystalline silica dust at 22 factories in Egypt with different industrial activities such as stone cutting, glass making, ceramics, and sand blasting. Dust samples were collected from work sites at the breathing zone using a personal sampling pump and a size-selective cyclone and analyzed using FTIR. The sampling period was 60–120 min. The results show that the exposure in each of the industrial sectors is much higher than the current national and international limits, which leads to a great risk of lung cancer and mortality for workers. PMID:29199263
Quantifying learning in biotracer studies.
Brown, Christopher J; Brett, Michael T; Adame, Maria Fernanda; Stewart-Koster, Ben; Bunn, Stuart E
2018-04-12
Mixing models have become requisite tools for analyzing biotracer data, most commonly stable isotope ratios, to infer dietary contributions of multiple sources to a consumer. However, Bayesian mixing models will always return a result that defaults to their priors if the data poorly resolve the source contributions, and thus, their interpretation requires caution. We describe an application of information theory to quantify how much has been learned about a consumer's diet from new biotracer data. We apply the approach to two example data sets. We find that variation in the isotope ratios of sources limits the precision of estimates for the consumer's diet, even with a large number of consumer samples. Thus, the approach which we describe is a type of power analysis that uses a priori simulations to find an optimal sample size. Biotracer data are fundamentally limited in their ability to discriminate consumer diets. We suggest that other types of data, such as gut content analysis, must be used as prior information in model fitting, to improve model learning about the consumer's diet. Information theory may also be used to identify optimal sampling protocols in situations where sampling of consumers is limited due to expense or ethical concerns.
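One plausible reading of the information-theoretic measure described above is a Kullback-Leibler divergence from the prior to the posterior over a dietary proportion; the sketch below assumes that reading, with made-up Beta distributions standing in for real mixing-model output.

```python
import numpy as np
from scipy import stats
from scipy.special import rel_entr

# Sketch under an assumption: quantify "how much has been learned" about the
# proportion of one diet source as KL(posterior || prior) on a numerical grid.
# The prior and posterior below are illustrative, not the paper's model output.

prior = stats.beta(1.0, 1.0)            # uninformative prior on the proportion
posterior = stats.beta(12.0, 20.0)      # hypothetical posterior after fitting biotracers

grid = np.linspace(1e-4, 1 - 1e-4, 2000)
p_post = posterior.pdf(grid)
p_prior = prior.pdf(grid)
p_post /= np.trapz(p_post, grid)        # renormalize on the truncated grid
p_prior /= np.trapz(p_prior, grid)

kl_nats = np.trapz(rel_entr(p_post, p_prior), grid)
print(f"information gained: {kl_nats:.2f} nats ({kl_nats/np.log(2):.2f} bits)")
```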
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and GRR, and due to parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
Handling limited datasets with neural networks in medical applications: A small-data approach.
Shaikhina, Torgyn; Khovanova, Natalia A
2017-01-01
Single-centre studies in the medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for the application of artificial neural networks (NNs) to regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the methods of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied to the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved an accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with an 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for the application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
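A minimal sketch of the "multiple runs" idea on synthetic data; this is not the authors' framework, surrogate-data analysis, or their bone dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Sketch: train the same small regression network from several random
# initialisations and average the predictions to damp run-to-run fluctuations.

rng = np.random.default_rng(5)
X = rng.normal(size=(56, 4))                          # ~56 samples, as in the abstract
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(0, 0.3, 56)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

preds = []
for seed in range(10):                                # multiple independent runs
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed)
    net.fit(X_tr, y_tr)
    preds.append(net.predict(X_te))

avg_pred = np.mean(preds, axis=0)
rmse = np.sqrt(np.mean((avg_pred - y_te) ** 2))
print(f"RMSE of the averaged multi-run prediction: {rmse:.3f}")
```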
Adaptability of laser diffraction measurement technique in soil physics methodology
NASA Astrophysics Data System (ADS)
Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András
2016-04-01
There are efforts worldwide to harmonize soil particle size distribution (PSD) data obtained by laser diffraction measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), PSDs from the sedimentation methods themselves (which follow different standards) are dissimilar and can hardly be harmonized with each other. A need therefore arose to build a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of the statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM data, the clay fraction was strongly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Following previous surveys, and in order to optimize the correspondence between the two datasets, the clay/silt boundary for the LDM was changed. With the modified size limits, the clay and silt fractions from the LDM agreed more closely with those from the pipette method. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm (and thus changing the lower size limit of the silt fraction) makes the pipette method and LDM more readily comparable. With the modified limit, higher correlations were also found between clay content and water vapor adsorption and specific surface area. Texture classes were also less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking into account other routinely analyzed soil parameters (e.g. pH(H2O), organic carbon and calcium carbonate content).
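A small sketch of how a modified clay/silt boundary changes the fractions read from a cumulative laser-diffraction curve; the cumulative PSD values below are made up, and linear interpolation is used for simplicity.

```python
import numpy as np

# Sketch: compute clay and silt fractions from a cumulative PSD curve using the
# conventional 0.002 mm clay/silt boundary and the modified 0.0066 mm boundary.

# Hypothetical LDM output: particle diameter (mm) vs. cumulative volume fraction (%).
diameter_mm = np.array([0.0005, 0.001, 0.002, 0.0066, 0.02, 0.05, 0.25, 2.0])
cum_percent = np.array([3.0, 7.0, 12.0, 26.0, 48.0, 70.0, 95.0, 100.0])

def fractions(clay_limit_mm, silt_limit_mm=0.05):
    clay = np.interp(clay_limit_mm, diameter_mm, cum_percent)
    silt = np.interp(silt_limit_mm, diameter_mm, cum_percent) - clay
    return clay, silt

for limit in (0.002, 0.0066):
    clay, silt = fractions(limit)
    print(f"clay/silt boundary {limit*1000:.1f} um: clay {clay:.0f}%, silt {silt:.0f}%")
```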
A Review and Annotated Bibliography of Armor Gunnery Training Device Effectiveness Literature
1993-11-01
Studies are annotated in terms of (a) training effectiveness (skill acquisition, skill retention, performance prediction, and transfer of training) and (b) research limitations (e.g., sample size), across four classes of gunnery training devices (standalone, tank-appended, subcaliber, and laser).
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Writing for Learning in Science: A Secondary Analysis of Six Studies
ERIC Educational Resources Information Center
Gunel, Murat; Hand, Brian; Prain, Vaughan
2007-01-01
This study is a secondary analysis of six previous studies that formed part of an ongoing research program focused on examining the benefits of using writing-to-learn strategies within science classrooms. The study is an attempt to make broader generalizations than those based on individual studies, given limitations related to sample sizes,…
Impacts of elevated atmospheric CO2 on nutrient content and yield of important food crops
USDA-ARS?s Scientific Manuscript database
One of the many ways that climate change may affect human health is by altering the nutrient content of food crops. However, previous attempts to study the effects of increased atmospheric CO2 on crop nutrition have been limited by small sample sizes and/or artificial growing conditions. Here we p...
Classifier performance prediction for computer-aided diagnosis using a limited dataset.
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-04-01
In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. The Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under this type of conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
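A sketch of the out-of-bag (ordinary) and 0.632 bootstrap AUC estimators compared in the study, using Fisher's linear discriminant on synthetic Gaussian data; the 0.632+ overfitting correction is omitted for brevity.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

# Sketch: AUC_0.632 = 0.368 * resubstitution AUC + 0.632 * out-of-bag AUC.

rng = np.random.default_rng(6)
n_per_class, dim = 40, 5
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, dim)),
               rng.normal(0.5, 1.0, (n_per_class, dim))])
y = np.repeat([0, 1], n_per_class)

clf_full = LinearDiscriminantAnalysis().fit(X, y)
resub_auc = roc_auc_score(y, clf_full.decision_function(X))

oob_aucs = []
for _ in range(200):                                   # bootstrap replicates
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue                                       # need both classes present
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    oob_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))

auc_oob = np.mean(oob_aucs)
auc_632 = 0.368 * resub_auc + 0.632 * auc_oob
print(f"resubstitution AUC {resub_auc:.3f}, out-of-bag AUC {auc_oob:.3f}, "
      f"0.632 bootstrap AUC {auc_632:.3f}")
```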
Teleosts as Model Organisms To Understand Host-Microbe Interactions.
Lescak, Emily A; Milligan-Myhre, Kathryn C
2017-08-01
Host-microbe interactions are influenced by complex host genetics and environment. Studies across animal taxa have aided our understanding of how intestinal microbiota influence vertebrate development, disease, and physiology. However, traditional mammalian studies can be limited by the use of isogenic strains, husbandry constraints that result in small sample sizes and limited statistical power, reliance on indirect characterization of gut microbial communities from fecal samples, and concerns of whether observations in artificial conditions are actually reflective of what occurs in the wild. Fish models are able to overcome many of these limitations. The extensive variation in the physiology, ecology, and natural history of fish enriches studies of the evolution and ecology of host-microbe interactions. They share physiological and immunological features common among vertebrates, including humans, and harbor complex gut microbiota, which allows identification of the mechanisms driving microbial community assembly. Their accelerated life cycles and large clutch sizes and the ease of sampling both internal and external microbial communities make them particularly well suited for robust statistical studies of microbial diversity. Gnotobiotic techniques, genetic manipulation of the microbiota and host, and transparent juveniles enable novel insights into mechanisms underlying development of the digestive tract and disease states. Many diseases involve a complex combination of genes which are difficult to manipulate in homogeneous model organisms. By taking advantage of the natural genetic variation found in wild fish populations, as well as of the availability of powerful genetic tools, future studies should be able to identify conserved genes and pathways that contribute to human genetic diseases characterized by dysbiosis. Copyright © 2017 Lescak and Milligan-Myhre.
Cratering in glasses impacted by debris or micrometeorites
NASA Technical Reports Server (NTRS)
Wiedlocher, David E.; Kinser, Donald L.
1993-01-01
Mechanical strength measurements on five glasses and one glass-ceramic exposed on LDEF revealed no damage exceeding experimental limits of error. The measurement technique subjected less than 5 percent of the sample surface area to stresses above 90 percent of the failure strength. Seven micrometeorite or space debris impacts occurred at locations that were not in the portion of the sample subjected to greater than 90 percent of the applied stress. As a result, the impact events were not detected in the mechanical strength measurements. The physical form and structure of the impact sites were carefully examined to determine the influence of those events upon the stress concentration associated with the impact and the resulting mechanical strength. The size of the impact site, insofar as it determines flaw size for fracture purposes, was examined. Surface topography of the impacts reveals that six of the seven sites display impact melting. The classical melt crater structure is surrounded by a zone of fractured glass. Residual stresses arising from shock compression and from cooling of the fused zone cannot be included in fracture mechanics analyses based on simple flaw size measurements. Strategies for refining estimates of mechanical strength degradation by impact events are presented.
The use of mini-samples in palaeomagnetism
NASA Astrophysics Data System (ADS)
Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez
2009-10-01
Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and size of core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k are on average slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-cores per site appears to outweigh the lower k values, also yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard size samples. This is interpreted also to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may also be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.
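A minimal sketch of the Fisher statistics (the precision parameter k and the 95% confidence cone α95) used to compare the two sample sizes; the unit vectors below are synthetic, not the measured lava-flow directions.

```python
import numpy as np

# Sketch: Fisher (1953) site statistics from a set of unit remanence vectors.
# k = (N - 1) / (N - R);  cos(alpha95) = 1 - ((N - R)/R) * (20**(1/(N-1)) - 1).

def fisher_stats(unit_vectors):
    n = len(unit_vectors)
    resultant = np.linalg.norm(unit_vectors.sum(axis=0))
    k = (n - 1) / (n - resultant)
    cos_a95 = 1.0 - ((n - resultant) / resultant) * (20.0 ** (1.0 / (n - 1)) - 1.0)
    return k, np.degrees(np.arccos(cos_a95))

# Hypothetical remanence directions scattered around the vertical.
rng = np.random.default_rng(7)
vecs = rng.normal([0.0, 0.0, 1.0], 0.05, size=(14, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

k, a95 = fisher_stats(vecs)
print(f"N={len(vecs)}: k={k:.0f}, alpha95={a95:.1f} deg")
```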
de Silva Souza, Cristiano; Block, Jane Mara
2018-02-01
The effect of the partial replacement of cocoa butter (CB) by cocoa butter equivalent (CBE) on the release of volatile compounds in dark chocolate was studied. The fatty acid profile, triacylglyceride composition, solid fat content (SFC) and melting point were determined in CB and CBE. Chocolates with CB (F1) and with different contents of CBE (5 and 10%, F2 and F3, respectively) were prepared. Plastic viscosity and Casson flow limit, particle size distribution and the release of volatile compounds using solid-phase microextraction with gas chromatography (SPME-GC) were determined in the chocolate samples. The melting point was similar for the studied samples, but the SFC indicated different melting behavior. CBE showed a higher saturated fatty acid content than CB. The samples showed similar SOS triglyceride content (21 and 23.7% for CB and CBE, respectively). Higher levels of POS and lower levels of POP were observed for CB compared to CBE (44.8 vs. 19.7% and 19 vs. 41.1%, respectively). The flow limit and plastic viscosity were similar for the studied chocolate samples, as was the particle size distribution. Among the 27 volatile compounds identified in the samples studied, 12 were detected in significantly higher concentrations in sample F1 (phenylacetaldehyde, methylpyrazine, 2,6-dimethylpyrazine, 2-ethyl-5-methylpyrazine, 2-ethyl-3,5-dimethylpyrazine, tetramethylpyrazine, trimethylpyrazine, 3-ethyl-2,5-dimethylpyrazine, phenethyl alcohol, 2-acetylpyrrole, acetophenone and isovaleric acid). The largest changes were observed in the pyrazines group, which decreased by more than half in the formulations where part of the CB was replaced by CBE.
Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Matthew S; Wilson, Mollye C
2007-07-01
Vacuum filter socks were evaluated for recovery efficiency of powdered Bacillus atrophaeus spores from two non-porous surfaces, stainless steel and painted wallboard and two porous surfaces, carpet and bare concrete. Two surface coupons were positioned side-by-side and seeded with aerosolized Bacillus atrophaeus spores. One of the surfaces, a stainless steel reference coupon, was sized to fit into a sample vial for direct spore removal, while the other surface, a sample surface coupon, was sized for a vacuum collection application. Deposited spore material was directly removed from the reference coupon surface and cultured for enumeration of colony forming units (CFU), while deposited spore material was collected from the sample coupon using the vacuum filter sock method, extracted by sonication and cultured for enumeration. Recovery efficiency, which is a measure of overall transfer effectiveness from the surface to culture, was calculated as the number of CFU enumerated from the filter sock sample per unit area relative to the number of CFU enumerated from the co-located reference coupon per unit area. The observed mean filter sock recovery efficiency from stainless steel was 0.29 (SD = 0.14, n = 36), from painted wallboard was 0.25 (SD = 0.15, n = 36), from carpet was 0.28 (SD = 0.13, n = 40) and from bare concrete was 0.19 (SD = 0.14, n = 44). Vacuum filter sock recovery quantitative limits of detection were estimated at 105 CFU m⁻² from stainless steel and carpet, 120 CFU m⁻² from painted wallboard and 160 CFU m⁻² from bare concrete. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling for biological agents such as Bacillus anthracis.
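A tiny sketch of the recovery-efficiency calculation described above; the CFU counts and coupon areas are made-up example values.

```python
# Sketch: recovery efficiency = (CFU per unit area from the filter sock sample)
#                             / (CFU per unit area from the co-located reference coupon).

def recovery_efficiency(cfu_sample, area_sample_m2, cfu_reference, area_reference_m2):
    return (cfu_sample / area_sample_m2) / (cfu_reference / area_reference_m2)

# Example: 290 CFU from a 0.09 m^2 sample coupon vs. 1000 CFU from a 0.09 m^2
# stainless steel reference coupon (hypothetical numbers).
print(f"recovery efficiency = {recovery_efficiency(290, 0.09, 1000, 0.09):.2f}")
```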
Assessment of increased sampling pump flow rates in a disposable, inhalable aerosol sampler
Stewart, Justin; Sleeth, Darrah K.; Handy, Rod G.; Pahler, Leon F.; Anthony, T. Renee; Volckens, John
2017-01-01
A newly designed, low-cost, disposable inhalable aerosol sampler was developed to assess workers personal exposure to inhalable particles. This sampler was originally designed to operate at 10 L/min to increase sample mass and, therefore, improve analytical detection limits for filter-based methods. Computational fluid dynamics modeling revealed that sampler performance (relative to aerosol inhalability criteria) would not differ substantially at sampler flows of 2 and 10 L/min. With this in mind, the newly designed inhalable aerosol sampler was tested in a wind tunnel, simultaneously, at flows of 2 and 10 L/min flow. A mannequin was equipped with 6 sampler/pump assemblies (three pumps operated at 2 L/min and three pumps at 10 L/min) inside a wind tunnel, operated at 0.2 m/s, which has been shown to be a typical indoor workplace wind speed. In separate tests, four different particle sizes were injected to determine if the sampler’s performance with the new 10 L/min flow rate significantly differed to that at 2 L/min. A comparison between inhalable mass concentrations using a Wilcoxon signed rank test found no significant difference in the concentration of particles sampled at 10 and 2 L/min for all particle sizes tested. Our results suggest that this new aerosol sampler is a versatile tool that can improve exposure assessment capabilities for the practicing industrial hygienist by improving the limit of detection and allowing for shorter sampling times. PMID:27676440
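A sketch of the paired Wilcoxon signed-rank comparison described above, with made-up concentration values rather than the wind-tunnel data.

```python
import numpy as np
from scipy import stats

# Sketch: paired comparison of inhalable mass concentrations sampled at 2 and
# 10 L/min for the same test runs (hypothetical values, mg/m^3).

conc_2lpm = np.array([1.10, 0.95, 1.32, 1.21, 0.88, 1.05, 1.18, 0.99])
conc_10lpm = np.array([1.05, 1.02, 1.28, 1.25, 0.91, 1.00, 1.22, 0.97])

stat, p_value = stats.wilcoxon(conc_2lpm, conc_10lpm)
verdict = "no significant difference" if p_value > 0.05 else "significant difference"
print(f"Wilcoxon W={stat:.1f}, p={p_value:.3f} ({verdict})")
```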
2013-01-01
Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. A ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. ROR<1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs <1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be exercised in the interpretation of meta-analyses involving small trials. PMID:23302257
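As a rough illustration of the ratio-of-odds-ratios idea described above (and not the authors' code), the sketch below computes the odds ratio for small and large trials within a meta-analysis, forms ROR = OR_small / OR_large on the log scale, and pools several RORs with inverse-variance weights. All event counts are invented.

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table (events/non-events, treatment vs control)."""
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return lor, var

def log_ror(small_table, large_table):
    """Log ratio of odds ratios (small vs large trials) and its variance."""
    lor_s, var_s = log_or(*small_table)
    lor_l, var_l = log_or(*large_table)
    return lor_s - lor_l, var_s + var_l

# Hypothetical 2x2 tables per meta-analysis: (deaths_tx, survivors_tx, deaths_ctl, survivors_ctl)
meta_analyses = [
    ((12, 38, 20, 30), (150, 350, 160, 340)),
    ((8, 42, 15, 35), (90, 410, 95, 405)),
]

log_rors, weights = [], []
for small, large in meta_analyses:
    lr, v = log_ror(small, large)
    log_rors.append(lr)
    weights.append(1.0 / v)                      # inverse-variance weight

pooled = sum(w * lr for w, lr in zip(weights, log_rors)) / sum(weights)
print("pooled ROR:", round(math.exp(pooled), 2))  # < 1 means larger effects in small trials
```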
Association Between Smoking and Size of Anal Warts in HIV-infected Women
Luu, HN; Amirian, ES; Beasley, RP; Piller, L; Chan, W; Scheurer, ME
2015-01-01
While the association between smoking and HPV infection, cervical cancer, and anal cancer has been well studied, evidence on the association between cigarette smoking and anal warts is limited. The purpose of this study was to investigate whether cigarette smoking status influences the size of anal warts over time in a sample of 976 HIV-infected women from the Women's Interagency HIV Study (WIHS). A linear mixed model was used to determine the effect of smoking on anal wart size. Although current smokers had larger anal warts at baseline and a slower growth rate of anal wart size across visits than non-smokers, there was no association between anal wart size and current smoking status over time. Further studies on the role of smoking and the interaction between smoking and other risk factors, however, should be explored. PMID:23155099
Hoffmann, William D.; Kertesz, Vilmos; Srijanto, Bernadeta R.; ...
2017-02-20
The use of atomic force microscopy controlled nano-thermal analysis probes for reproducible spatially resolved thermally-assisted sampling of micrometer-sized areas (ca. 11 μm × 17 μm wide × 2.4 μm deep) from relatively low number average molecular weight (Mn < 3000) polydisperse thin films of poly(2-vinylpyridine) (P2VP) is presented. Following sampling, the nano-thermal analysis probes were moved up from the surface and the probe temperature ramped to liberate the sampled materials into the gas phase for atmospheric pressure chemical ionization and mass spectrometric analysis. Furthermore, the procedure and mechanism for material pickup, the sampling reproducibility and sampling size are discussed, and the oligomer distribution information available from slow temperature ramps versus ballistic temperature jumps is presented. For the Mn = 970 P2VP, the Mn and polydispersity index determined from the mass spectrometric data were in line with both the label values from the sample supplier and the value calculated from the simple infusion of a solution of polymer into the commercial atmospheric pressure chemical ionization source on this mass spectrometer. With P2VP samples of higher Mn (Mn = 2070 and 2970), intact oligomers were still observed (as high as m/z 2793, corresponding to the 26-mer), but a significant abundance of thermolysis products was also observed. In addition, the capability for confident identification of the individual oligomers by slowly ramping the probe temperature and collecting data-dependent tandem mass spectra was also demonstrated. We also discuss the material type limits to the current sampling and analysis approach as well as possible improvements in nano-thermal analysis probe design to enable smaller area sampling and to enable controlled temperature ramps beyond the present upper limit of about 415°C.
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
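The power and sample-size figures discussed above come from standard two-group calculations; a minimal sketch of that kind of calculation is given below using statsmodels. The two-sided t-test framing, α = 0.05, and the standardized effect size d = 1.0 are illustrative assumptions, not values taken from the review.

```python
# Minimal power / sample-size sketch for a two-group comparison (illustrative values only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed for 80% power at alpha = 0.05, assuming Cohen's d = 1.0
n_per_group = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80, alternative="two-sided")
print("n per group:", round(n_per_group, 1))

# Achieved power for a hypothetical group size of 10 animals per group
power = analysis.solve_power(effect_size=1.0, nobs1=10, alpha=0.05, alternative="two-sided")
print("power with n = 10:", round(power, 2))
```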
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, S.; Aldering, G.; Antilogus, P.
The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral timeseries of 185 supernovae. This talk presents results from a portion of this sample including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset.
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of evaluating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential in cancer prediction based on gene expression.
Companions in Color: High-Resolution Imaging of Kepler’s Sub-Neptune Host Stars
NASA Astrophysics Data System (ADS)
Ware, Austin; Wolfgang, Angie; Kannan, Deepti
2018-01-01
A current problem in astronomy is determining how sub-Neptune-sized exoplanets form in planetary systems. These kinds of planets, which fall between 1 and 4 times the size of Earth, were discovered in abundance by the Kepler Mission and were typically found with relatively short orbital periods. The combination of their size and orbital period make them unusual in relation to the Solar System, leading to the question of how these exoplanets form and evolve. One possibility is that they have been influenced by distant stellar companions. To help assess the influence of these objects on the present-day, observed properties of exoplanets, we conduct a NIR search for visual stellar companions to the stars around which the Kepler Mission discovered planets. We use high-resolution images obtained with the adaptive optics systems at the Lick Observatory Shane-3m telescope to find these companion stars. Importantly, we also determine the effective brightness and distance from the planet-hosting star at which it is possible to detect these companions. Out of the 200 KOIs in our sample, 42 KOIs (21%) have visual companions within 3”, and 90 (46%) have them within 6”. These findings are consistent with recent high-resolution imaging from Furlan et al. 2017 that found at least one visual companion within 4” for 31% of sampled KOIs (37% within 4" for our sample). Our results are also complementary to Furlan et al. 2017, with only 17 visual companions commonly detected in the same filter. As for detection limits, our preliminary results indicate that we can detect companion stars < 3-5 magnitudes fainter than the planet-hosting star at a separation of ~ 1”. These detection limits will enable us to determine the probability that possible companion stars could be hidden within the noise around the planet-hosting star, an important step in determining the frequency with which these short-period, sub-Neptune-sized planets occur within binary star systems.
Carbon Nanotube and Nanofiber Exposure Assessments: An Analysis of 14 Site Visits.
Dahm, Matthew M; Schubauer-Berigan, Mary K; Evans, Douglas E; Birch, M Eileen; Fernback, Joseph E; Deddens, James A
2015-07-01
Recent evidence has suggested the potential for wide-ranging health effects that could result from exposure to carbon nanotubes (CNT) and carbon nanofibers (CNF). In response, the National Institute for Occupational Safety and Health (NIOSH) set a recommended exposure limit (REL) for CNT and CNF: 1 µg m(-3) as an 8-h time weighted average (TWA) of elemental carbon (EC) for the respirable size fraction. The purpose of this study was to conduct an industrywide exposure assessment among US CNT and CNF manufacturers and users. Fourteen total sites were visited to assess exposures to CNT (13 sites) and CNF (1 site). Personal breathing zone (PBZ) and area samples were collected for both the inhalable and respirable mass concentration of EC, using NIOSH Method 5040. Inhalable PBZ samples were collected at nine sites while at the remaining five sites both respirable and inhalable PBZ samples were collected side-by-side. Transmission electron microscopy (TEM) PBZ and area samples were also collected at the inhalable size fraction and analyzed to quantify and size CNT and CNF agglomerate and fibrous exposures. Respirable EC PBZ concentrations ranged from 0.02 to 2.94 µg m(-3) with a geometric mean (GM) of 0.34 µg m(-3) and an 8-h TWA of 0.16 µg m(-3). PBZ samples at the inhalable size fraction for EC ranged from 0.01 to 79.57 µg m(-3) with a GM of 1.21 µg m(-3). PBZ samples analyzed by TEM showed concentrations ranging from 0.0001 to 1.613 CNT or CNF-structures per cm(3) with a GM of 0.008 and an 8-h TWA concentration of 0.003. The most common CNT structure sizes were found to be larger agglomerates in the 2-5 µm range as well as agglomerates >5 µm. A statistically significant correlation was observed between the inhalable samples for the mass of EC and structure counts by TEM (Spearman ρ = 0.39, P < 0.0001). Overall, EC PBZ and area TWA samples were below the NIOSH REL (96% were <1 μg m(-3) at the respirable size fraction), while 30% of the inhalable PBZ EC samples were found to be >1 μg m(-3). Until more information is known about health effects associated with larger agglomerates, it seems prudent to assess worker exposure to airborne CNT and CNF materials by monitoring EC at both the respirable and inhalable size fractions. Concurrent TEM samples should be collected to confirm the presence of CNT and CNF. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2015.
Extraction of hydrocarbons from high-maturity Marcellus Shale using supercritical carbon dioxide
Jarboe, Palma B.; Candela, Philip A.; Zhu, Wenlu; Kaufman, Alan J.
2015-01-01
Shale is now commonly exploited as a hydrocarbon resource. Due to the high degree of geochemical and petrophysical heterogeneity both between shale reservoirs and within a single reservoir, there is a growing need to find more efficient methods of extracting petroleum compounds (crude oil, natural gas, bitumen) from potential source rocks. In this study, supercritical carbon dioxide (CO2) was used to extract n-aliphatic hydrocarbons from ground samples of Marcellus shale. Samples were collected from vertically drilled wells in central and western Pennsylvania, USA, with total organic carbon (TOC) content ranging from 1.5 to 6.2 wt %. Extraction temperature and pressure conditions (80 °C and 21.7 MPa, respectively) were chosen to represent approximate in situ reservoir conditions at sample depth (1920−2280 m). Hydrocarbon yield was evaluated as a function of sample matrix particle size (sieve size) over the following size ranges: 1000−500 μm, 250−125 μm, and 63−25 μm. Several methods of shale characterization including Rock-Eval II pyrolysis, organic petrography, Brunauer−Emmett−Teller surface area, and X-ray diffraction analyses were also performed to better understand potential controls on extraction yields. Despite high sample thermal maturity, results show that supercritical CO2 can liberate diesel-range (n-C11 through n-C21) n-aliphatic hydrocarbons. The total quantity of extracted, resolvable n-aliphatic hydrocarbons ranges from approximately 0.3 to 12 mg of hydrocarbon per gram of TOC. Sieve size does have an effect on extraction yield, with highest recovery from the 250−125 μm size fraction. However, the significance of this effect is limited, likely due to the low size ranges of the extracted shale particles. Additional trends in hydrocarbon yield are observed among all samples, regardless of sieve size: 1) yield increases as a function of specific surface area (r2 = 0.78); and 2) both yield and surface area increase with increasing TOC content (r2 = 0.97 and 0.86, respectively). Given that supercritical CO2 is able to mobilize residual organic matter present in overmature shales, this study contributes to a better understanding of the extent and potential factors affecting the extraction process.
NASA Astrophysics Data System (ADS)
Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.
2017-10-01
The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections.We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we find in this data set.
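The size constraint from a CW spectrum follows from the standard limb-to-limb Doppler bandwidth relation B = 4πD cos δ / (λP). The sketch below simply inverts that relation for the diameter; the equatorial-view assumption (cos δ = 1), the S-band wavelength of 12.6 cm, and the bandwidth and rotation period used in the example are illustrative assumptions, not values from this data set.

```python
import math

def diameter_from_bandwidth(bandwidth_hz, period_s, wavelength_m=0.126, sub_radar_lat_deg=0.0):
    """Lower-limit diameter (m) from the limb-to-limb Doppler bandwidth B = 4*pi*D*cos(delta)/(lambda*P)."""
    return bandwidth_hz * wavelength_m * period_s / (4.0 * math.pi * math.cos(math.radians(sub_radar_lat_deg)))

# Illustrative numbers: 5 Hz limb-to-limb bandwidth, 0.5 h rotation period, equatorial view assumed
print(round(diameter_from_bandwidth(5.0, 0.5 * 3600.0), 1), "m")
```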
C-Sphere Strength-Size Scaling in a Bearing-Grade Silicon Nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wereszczak, Andrew A; Jadaan, Osama M.; Kirkland, Timothy Philip
2008-01-01
A C-sphere specimen geometry was used to determine the failure strength distributions of a commercially available bearing-grade silicon nitride (Si3N4) having ball diameters of 12.7 and 25.4 mm. Strengths for both diameters were determined using the combination of failure load, C-sphere geometry, and finite element analysis and fitted using two-parameter Weibull distributions. Effective areas of both diameters were estimated as a function of Weibull modulus and used to explore whether the strength distributions predictably strength-scaled between each size. They did not. That statistical observation suggested that the same flaw type did not limit the strength of both ball diameters, indicating a lack of material homogeneity between the two sizes. Optical fractography confirmed that. It showed there were two distinct strength-limiting flaw types in both ball diameters, that one flaw type was always associated with lower strength specimens, and that a significantly higher fraction of the 25.4-mm-diameter C-sphere specimens failed from it. Predictable strength-size scaling would therefore not result because these flaw types were not homogeneously distributed and sampled in both C-sphere geometries.
Using large volume samplers for the monitoring of particle bound micro pollutants in rivers
NASA Astrophysics Data System (ADS)
Kittlaus, Steffen; Fuchs, Stephan
2015-04-01
The requirements of the WFD as well as substance emission modelling at the river basin scale require stable monitoring data for micro pollutants. The monitoring concepts applied by local authorities as well as by many scientists use single sampling techniques. Samples from water bodies are usually taken in volumes of about one litre, either at predetermined time steps or triggered by discharge thresholds. For predominantly particle bound micro pollutants, the small sample size of about one litre yields only a very small amount of suspended particles. Measuring micro pollutant concentrations in these samples is demanding and results in a high uncertainty of the measured concentrations, if the concentration is above the detection limit in the first place. In many monitoring programs most of the measured values were below the detection limit, which results in a high uncertainty when river loads are calculated from these data sets. The authors propose a different approach to obtain stable concentration values for particle bound micro pollutants from river monitoring: a mixed sample of about 1000 L is pumped into a tank with a dirty-water pump. The sampling is usually discharge dependent, using a gauge signal as input for the control unit. After the discharge event is over or the tank is full, the suspended solids settle in the tank for 2 days, after which a clear separation of water and solids can be seen. A sample (1 L) from the water phase and the total mass of the settled solids (about 10 L) are taken to the laboratory for analysis. While the micro pollutants can hardly be detected in the water phase, the signal from the sediment is well above the detection limit, and thus certain and very stable. From the pollutant concentration in the solid phase and the total tank volume, the initial pollutant concentration in the sample can be calculated. If the concentration in the water phase is detectable, it can be used to correct the total load. This relatively low cost approach (lower analysis costs because of the small sample number) makes it possible to quantify the pollutant load, to derive dissolved-solid partition coefficients and to quantify the pollutant load in different particle size classes.
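Back-calculating the event-mean concentration from the settled solids is simple mass accounting; the sketch below shows that bookkeeping with invented numbers, combining the particulate load recovered from the tank with an optional dissolved-phase correction when it is above the detection limit.

```python
def event_mean_concentration(tank_volume_l, solids_mass_kg, conc_in_solids_mg_per_kg,
                             conc_in_water_mg_per_l=0.0):
    """Micro pollutant concentration (mg/L) in the original mixed sample.

    The particulate load comes from the settled-solids analysis; the dissolved phase
    is added as a correction when it is detectable.
    """
    particulate_load_mg = solids_mass_kg * conc_in_solids_mg_per_kg
    dissolved_load_mg = conc_in_water_mg_per_l * tank_volume_l
    return (particulate_load_mg + dissolved_load_mg) / tank_volume_l

# Example (hypothetical): 1000 L tank, 2.5 kg dry solids at 0.8 mg/kg, dissolved phase below detection
print(event_mean_concentration(1000.0, 2.5, 0.8))   # 0.002 mg/L
```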
A cryogenic tensile testing apparatus for micro-samples cooled by miniature pulse tube cryocooler
NASA Astrophysics Data System (ADS)
Chen, L. B.; Liu, S. X.; Gu, K. X.; Zhou, Y.; Wang, J. J.
2015-12-01
This paper introduces a cryogenic tensile testing apparatus for micro-samples cooled by a miniature pulse tube cryocooler. Tensile tests are widely applied to measure the mechanical properties of materials; however, most cryogenic tensile testing apparatus are designed for samples of standard sizes, so tensile testing cannot be conducted on non-standard samples, especially micro-samples. The usual approach to cooling specimens for tensile testing is to use liquid nitrogen or liquid helium, which is not convenient: it is difficult to keep the specimen temperature precisely at an arbitrary set point, and in some situations liquid nitrogen, and especially liquid helium, is not easily available. To overcome these limitations, a cryogenic tensile testing apparatus cooled by a high frequency pulse tube cryocooler has been designed, built and tested. The operating temperatures of the developed tensile testing apparatus cover 20 K to room temperature with a control precision of ±10 mK. The apparatus configuration, methods of operation and cooling performance are described in this paper.
Rees, T.F.; Leenheer, J.A.; Ranville, J.F.
1991-01-01
Sediment-recovery efficiency of 86-91% is comparable to that of other types of CFC units. The recovery efficiency is limited by the particle-size distribution of the feed water and by the limiting particle diameter that is retained in the centrifuge bowl. Contamination by trace metals and organics is minimized by coating all surfaces that come in contact with the sample with either FEP or PFA Teflon and using a removable FEP Teflon liner in the centrifuge bowl. -from Authors
Spatial super-resolution of colored images by micro mirrors
NASA Astrophysics Data System (ADS)
Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev
2018-06-01
In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.
Digital LAMP in a sample self-digitization (SD) chip
Herrick, Alison M.; Dimov, Ivan K.; Lee, Luke P.; Chiu, Daniel T.
2012-01-01
This paper describes the realization of digital loop-mediated DNA amplification (dLAMP) in a sample self-digitization (SD) chip. Digital DNA amplification has become an attractive technique to quantify absolute concentrations of DNA in a sample. While digital polymerase chain reaction is still the most widespread implementation, its use in resource-limited settings is impeded by the need for thermal cycling and robust temperature control. In such situations, isothermal protocols that can amplify DNA or RNA without thermal cycling are of great interest. Here, we showed the successful amplification of single DNA molecules in a stationary droplet array using isothermal digital loop-mediated DNA amplification. Unlike most (if not all) existing methods for sample discretization, our design allows for automated, loss-less digitization of sample volumes on-chip. We demonstrated accurate quantification of relative and absolute DNA concentrations with sample volumes of less than 2 μl. We assessed the homogeneity of droplet size during sample self-digitization in our device, and verified that the size variation was small enough that straightforward counting of LAMP-active droplets sufficed for data analysis. We anticipate that the simplicity and robustness of our SD chip make it attractive as an inexpensive and easy-to-operate device for DNA amplification, for example in point-of-care settings. PMID:22399016
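Absolute quantification in digital amplification rests on Poisson statistics: if a fraction p of droplets is amplification-positive, the mean number of templates per droplet is λ = −ln(1 − p). The sketch below applies that general relation; the droplet counts and droplet volume are invented for illustration and are not taken from the SD-chip paper.

```python
import math

def copies_per_microlitre(n_positive, n_total, droplet_volume_nl):
    """Absolute template concentration from digital amplification via Poisson statistics."""
    p = n_positive / n_total                 # fraction of amplification-positive droplets
    lam = -math.log(1.0 - p)                 # mean templates per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microlitre (1 nL = 1e-3 uL)

# Example (hypothetical counts): 300 positive droplets out of 1200, 1 nL droplets
print(round(copies_per_microlitre(300, 1200, 1.0), 1), "copies/uL")
```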
Standard-less analysis of Zircaloy clad samples by an instrumental neutron activation method
NASA Astrophysics Data System (ADS)
Acharya, R.; Nair, A. G. C.; Reddy, A. V. R.; Goswami, A.
2004-03-01
A non-destructive method for analysis of irregular shape and size samples of Zircaloy has been developed using the recently standardized k0-based internal mono standard instrumental neutron activation analysis (INAA). The samples of Zircaloy-2 and -4 tubes, used as fuel cladding in Indian boiling water reactors (BWR) and pressurized heavy water reactors (PHWR), respectively, have been analyzed. Samples weighing in the range of a few tens of grams were irradiated in the thermal column of Apsara reactor to minimize neutron flux perturbations and high radiation dose. The method utilizes in situ relative detection efficiency using the γ-rays of selected activation products in the sample for overcoming γ-ray self-attenuation. Since the major and minor constituents (Zr, Sn, Fe, Cr and/or Ni) in these samples were amenable to NAA, the absolute concentrations of all the elements were determined using mass balance instead of using the concentration of the internal mono standard. Concentrations were also determined in a smaller size Zircaloy-4 sample by irradiating in the core position of the reactor to validate the present methodology. The results were compared with literature specifications and were found to be satisfactory. Values of sensitivities and detection limits have been evaluated for the elements analyzed.
A systematic review of the efficacy of venlafaxine for the treatment of fibromyalgia.
VanderWeide, L A; Smith, S M; Trinkley, K E
2015-02-01
Fibromyalgia is a painful disease affecting 1-2% of the United States population. Serotonin and norepinephrine reuptake inhibitors (SNRIs), such as duloxetine and milnacipran, are well studied and frequently used for treating this disorder. However, efficacy data are limited for the SNRI venlafaxine despite its use in nearly a quarter of patients with fibromyalgia. Accordingly, we systematically reviewed the efficacy of venlafaxine for treatment of fibromyalgia. PubMed, Web of Science and the Cochrane Database were searched using the terms 'venlafaxine' and 'fibromyalgia'. Results were classified as primary studies or review articles based on abstract review. References of review articles were evaluated to ensure no primary studies evaluating venlafaxine were overlooked. All clinical studies that investigated venlafaxine for the treatment of fibromyalgia were included and graded on strength of evidence. Five studies met the inclusion criteria, including 4 open-label cohort studies and 1 randomized, controlled trial. Study durations ranged from 6 weeks to 6 months, and study sizes ranged from 11 to 102 participants. Four of the five published studies reported improvement in at least one outcome. Generally consistent improvements were observed in pain-related outcome measures, including the Fibromyalgia Impact Questionnaire (range, 26-29% reduction; n = 2 studies), Visual Analog Scale (range, 36-45% reduction; n = 2 studies), McGill Pain Questionnaire (48% reduction; n = 1 study) and Clinical Global Impression scale (51% had significant score change; n = 1 study). However, the few studies identified were limited by small sample size, inconsistent use of outcomes and methodological concerns. Studies assessing the efficacy of venlafaxine in the treatment of fibromyalgia to date have been limited by small sample size, inconsistent venlafaxine dosing, lack of placebo control and lack of blinding. In the context of these limitations, venlafaxine appears to be at least modestly effective in treating fibromyalgia. Larger randomized controlled trials are needed to further elucidate the full benefit of venlafaxine. © 2014 John Wiley & Sons Ltd.
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice between parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems associated with small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means, and for validating one-way analysis of variance results, for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
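A minimal sketch of the pooled-resampling bootstrap idea for two unpaired means is shown below; it is a schematic of the general technique, not the authors' implementation. The observations are pooled to embody the null hypothesis, bootstrap groups of the original sizes are redrawn from the pool, and the observed t statistic is compared with the bootstrap distribution.

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10000, seed=0):
    """Two-sided bootstrap p-value for a difference in means using pooled resampling under H0."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])          # pooling removes the group difference (null model)
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        if abs(t_stat(xb, yb)) >= abs(t_obs):
            count += 1
    return (count + 1) / (n_boot + 1)

# Small-sample example with skewed data (illustrative only)
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=8)
y = rng.lognormal(mean=0.8, sigma=1.0, size=8)
print("p =", round(pooled_bootstrap_t_test(x, y), 3))
```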
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hainline, Kevin N.; Hickox, Ryan C.; Greene, Jenny E.
2014-05-20
We examine the spatial extent of the narrow-line regions (NLRs) of a sample of 30 luminous obscured quasars at 0.4 < z < 0.7 observed with spatially resolved Gemini-N GMOS long-slit spectroscopy. Using the [O III] λ5007 emission feature, we estimate the size of the NLR using a cosmology-independent measurement: the radius where the surface brightness falls to 10^(-15) erg s^(-1) cm^(-2) arcsec^(-2). We then explore the effects of atmospheric seeing on NLR size measurements and conclude that direct measurements of the NLR size from observed profiles are too large by 0.1-0.2 dex on average, as compared to measurements made to best-fit Sérsic or Voigt profiles convolved with the seeing. These data, which span a full order of magnitude in IR luminosity (log(L_8μm/erg s^(-1)) = 44.4-45.4), also provide strong evidence that there is a flattening of the relationship between NLR size and active galactic nucleus luminosity at a seeing-corrected size of ∼7 kpc. The objects in this sample have high luminosities which place them in a previously under-explored portion of the size-luminosity relationship. These results support the existence of a maximal size of the NLR around luminous quasars; beyond this size, there is either not enough gas or the gas is over-ionized and does not produce enough [O III] λ5007 emission.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy for handling the sample size problem is to adjust the sample size in the analysis of fit; an alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when adjustments are applied to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
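Because a maximum-likelihood chi-square grows roughly in proportion to (N − 1) for a fixed amount of misfit, an adjusted-sample-size strategy typically rescales the observed statistic to a target N, whereas the random-sample strategy recomputes the statistic on an actual subsample. The sketch below contrasts the two ideas in a generic way; the linear rescaling and the toy Wald-type statistic are assumptions for illustration and are not necessarily the exact functions used in the study.

```python
import numpy as np

def rescaled_chi_square(chi2_full, n_full, n_target):
    """Adjusted-sample-size idea: chi-square ~ (N - 1) * F_ML, so rescale to the target N."""
    return chi2_full * (n_target - 1) / (n_full - 1)

def subsample_statistic(data, n_target, stat_fn, seed=0):
    """Random-sample idea: recompute the fit statistic on an actual random subsample."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=n_target, replace=False)
    return stat_fn(data[idx])

# Toy illustration: a one-sample chi-square-like statistic, N * (sample mean)^2 / variance
rng = np.random.default_rng(2)
data = rng.normal(loc=0.05, scale=1.0, size=21000)   # tiny misfit relative to a zero-mean model

def stat(d):
    return len(d) * d.mean() ** 2 / d.var(ddof=1)

chi2_full = stat(data)
print("rescaled to N=5000:  ", round(rescaled_chi_square(chi2_full, len(data), 5000), 2))
print("random subsample N=5000:", round(subsample_statistic(data, 5000, stat), 2))
```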
Carter, James L.; Resh, Vincent H.
2001-01-01
A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs but 80.2% of the methods used a mesh size between 500 and 600 µm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 µm. Most samples collected were composites; the mean area sampled was 1.7 m2. Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make data obtained operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1 (sorting by eye) to 30x, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
Onjong, Hillary Adawo; Wangoh, John; Njage, Patrick Murigu Kamau
2014-08-01
Fish processing plants still face microbial food safety-related product rejections and the associated economic losses, although they implement legislation, with well-established quality assurance guidelines and standards. We assessed the microbial performance of core control and assurance activities of fish exporting processors to offer suggestions for improvement using a case study. A microbiological assessment scheme was used to systematically analyze microbial counts in six selected critical sampling locations (CSLs). Nine small-, medium- and large-sized companies implementing current food safety management systems (FSMS) were studied. Samples were collected three times on each occasion (n = 324). Microbial indicators representing food safety, plant and personnel hygiene, and overall microbiological performance were analyzed. Microbiological distribution and safety profile levels for the CSLs were calculated. Performance of core control and assurance activities of the FSMS was also diagnosed using an FSMS diagnostic instrument. Final fish products from 67% of the companies were within the legally accepted microbiological limits. Salmonella was absent in all CSLs. Hands or gloves of workers from the majority of companies were highly contaminated with Staphylococcus aureus at levels above the recommended limits. Large-sized companies performed better in Enterobacteriaceae, Escherichia coli, and S. aureus than medium- and small-sized ones in a majority of the CSLs, including receipt of raw fish material, heading and gutting, and the condition of the fish processing tables and facilities before cleaning and sanitation. Fish products of 33% (3 of 9) of the companies and handling surfaces of 22% (2 of 9) of the companies showed high variability in Enterobacteriaceae counts. High variability in total viable counts and Enterobacteriaceae was noted on fish products and handling surfaces. Specific recommendations were made in core control and assurance activities associated with sampling locations showing poor performance.
Rethinking non-inferiority: a practical trial design for optimising treatment duration.
Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb
2018-06-01
Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
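A first-degree fractional polynomial fit can be sketched as a small grid search over the conventional power set {−2, −1, −0.5, 0, 0.5, 1, 2, 3}, with 0 read as log. The code below fits such a curve to simulated duration-response data; it is only a schematic of the modelling idea (and uses a simple least-squares, linear-probability fit for brevity), not the trial's analysis code, and all numbers are invented.

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP1 power set; 0 means log(x)

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def fit_fp1(duration, response):
    """Fit response = b0 + b1 * x^p by least squares, choosing p with the smallest residual SS."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(duration), fp_transform(duration, p)])
        coef, rss, *_ = np.linalg.lstsq(X, response, rcond=None)
        rss = rss[0] if len(rss) else np.sum((response - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, p, coef)
    return best[1], best[2]

# Simulated 6-arm durations trial: cure probability rises steeply then plateaus (illustrative truth)
rng = np.random.default_rng(3)
durations = np.repeat(np.linspace(8, 20, 6), 80)               # weeks, ~80 patients per arm
true_p = 0.95 - 0.6 * np.exp(-0.35 * (durations - 8))
cured = rng.binomial(1, true_p)

p, coef = fit_fp1(durations, cured.astype(float))
print("selected power:", p, "coefficients:", np.round(coef, 3))
```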
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
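For reference, maximum likelihood with left-censored data treats each non-detect as contributing the distribution function at its detection limit. The sketch below fits a lognormal by that route with scipy; it uses simulated data and covers only one of the several methods compared in the paper.

```python
import numpy as np
from scipy import stats, optimize

def lognormal_mle_censored(values, detected):
    """MLE of a lognormal for left-censored data.

    `values` holds measured concentrations where detected and detection limits where not;
    non-detects contribute the log-scale normal CDF at their detection limit.
    """
    logs = np.log(values)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)            # keep sigma positive
        ll_det = stats.norm.logpdf(logs[detected], mu, sigma)
        ll_cens = stats.norm.logcdf(logs[~detected], mu, sigma)
        return -(ll_det.sum() + ll_cens.sum())

    res = optimize.minimize(neg_loglik, x0=[logs.mean(), 0.0], method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    return mu, sigma, np.exp(mu + 0.5 * sigma ** 2)   # log-mean, log-sd, arithmetic mean

# Simulated skewed concentrations with a single detection limit (illustrative only)
rng = np.random.default_rng(4)
true = rng.lognormal(mean=1.0, sigma=1.2, size=200)
dl = 2.0
detected = true >= dl
values = np.where(detected, true, dl)
print([round(v, 2) for v in lognormal_mle_censored(values, detected)])
```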
NASA Technical Reports Server (NTRS)
Wilcox, Mike
1993-01-01
The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows the infinite-time and infinite-size limits of these estimators to be extracted.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques within the Mahalanobis Taguchi System developed specifically for multivariate prediction. Prediction using the T-Method is always possible, even with a very limited sample size. The user of the T-Method is required to clearly understand the population data trend, since the method does not consider the effect of outliers. Outliers may cause apparent non-normality and cause classical methods to break down. Robust parameter estimates exist that provide satisfactory results when the data contain outliers, as well as when the data are free of them. The robust location and scale estimators Hodges-Lehmann (HL) and Shamos-Bickel (SB) are used here as alternatives to the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method as well as allow its robustness to be analysed. However, in the larger-sample case study, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) for data without extreme outliers, with minimal differences from the T-Method. The trend in prediction error percentages was reversed in the smaller-sample case study. The results show that with a minimal sample size, where outliers pose little risk, the T-Method performs better, and with a larger sample size containing extreme outliers the T-Method also predicts better than the alternatives. For the case studies conducted in this research, the standard T-Method normalization gives satisfactory results, and adapting HL and SB (or the ordinary mean and standard deviation) into it is not worthwhile, since it has only a minimal effect on the error percentages. Normalization using the T-Method is still considered to carry a lower risk with respect to outlier effects.
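For concreteness, the Hodges-Lehmann location estimate is the median of the pairwise Walsh averages, and the Shamos-Bickel scale estimate is, up to a normal-consistency constant, the median of the pairwise absolute differences. The sketch below implements both in a brute-force way; the 1.0483 factor is the usual normal-consistency constant and is stated here as an assumption, and the sample values are invented.

```python
import itertools
import numpy as np

def hodges_lehmann(x):
    """Median of all pairwise Walsh averages (x_i + x_j) / 2, i <= j: a robust location estimate."""
    x = np.asarray(x, float)
    walsh = [(a + b) / 2.0 for a, b in itertools.combinations_with_replacement(x, 2)]
    return float(np.median(walsh))

def shamos_bickel(x, consistency=1.0483):
    """Median of pairwise absolute differences |x_i - x_j|, i < j, rescaled to estimate a normal SD."""
    x = np.asarray(x, float)
    diffs = [abs(a - b) for a, b in itertools.combinations(x, 2)]
    return consistency * float(np.median(diffs))

# Small sample with one gross outlier (illustrative): robust estimates barely move, classical ones do
x = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 25.0])
print("mean:", round(x.mean(), 2), " HL:", round(hodges_lehmann(x), 2))
print("sd:  ", round(x.std(ddof=1), 2), " SB:", round(shamos_bickel(x), 2))
```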
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiser, I; Lu, Z
2014-06-01
Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs, (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in the clinical practice, a statistically efficient observer model, that can predict performance from few samples, is needed. Our results identified two observer models that may be suited for this task.
Walters, Stephen J; Bonacho Dos Anjos Henriques-Cadby, Inês; Bortolami, Oscar; Flight, Laura; Hind, Daniel; Jacques, Richard M; Knox, Christopher; Nadin, Ben; Rothwell, Joanne; Surtees, Michael; Julious, Steven A
2017-03-20
Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43-2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79-97%). There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Diffraction limited focusing and routing of gap plasmons by a metal-dielectric-metal lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Brian S.; Czaplewski, David A.; Haftel, Michael I.
2015-01-01
Passive optical elements can play key roles in photonic applications such as plasmonic integrated circuits. Here we experimentally demonstrate passive gap-plasmon focusing and routing in two dimensions. This is accomplished using a high numerical-aperture metal-dielectric-metal lens incorporated into a planar-waveguide device. Fabrication via metal sputtering, oxide deposition, electron- and focused-ion-beam lithography, and argon ion-milling is reported on in detail. Diffraction-limited focusing is optically characterized by sampling out-coupled light with a microscope. The measured focal distance and full-width-half-maximum spot size agree well with the calculated lens performance. The surface plasmon polariton propagation length is measured by sampling light from multiple out-coupler slits. © 2015 Optical Society of America
Scalability of transport parameters with pore sizes in isodense disordered media
NASA Astrophysics Data System (ADS)
Reginald, S. William; Schmitt, V.; Vallée, R. A. L.
2014-09-01
We study light multiple scattering in complex disordered porous materials. High internal phase emulsion-based isodense polystyrene foams are designed. Two types of samples, exhibiting different pore size distributions, are investigated for different slab thicknesses varying from L = 1 mm to 10 mm. Optical measurements combining steady-state and time-resolved detection are used to characterize the photon transport parameters. Very interestingly, a clear scalability of the transport mean free path ℓt with the average size of the pores S is observed, featuring a constant transport velocity of energy in these isodense structures. This study strongly motivates further investigations into the limits of validity of this scalability as the scattering strength of the system increases.
Exactly solvable random graph ensemble with extensively many short cycles
NASA Astrophysics Data System (ADS)
Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.
2018-02-01
We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamical limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.
Effective Ice Particle Densities for Cold Anvil Cirrus
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Schmitt, Carl G.; Bansemer, Aaron; Baumgardner, Darrel; Weinstock, Elliot M.; Smith, Jessica
2002-01-01
This study derives effective ice particle densities from data collected from the NASA WB-57F aircraft near the tops of anvils during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers (CRYSTAL) Florida Area Cirrus Experiment (FACE) in southern Florida in July 2002. The effective density, defined as the ice particle mass divided by the volume of an equivalent diameter liquid sphere, is obtained for particle populations and single sizes containing mixed particle habits using measurements of condensed water content and particle size distributions. The mean effective densities for populations decrease with increasing slopes of the gamma size distributions fitted to the size distributions. The population-mean densities range from near 0.91 g/cm3 to 0.15 g/cm3. Effective densities for single sizes obey a power law with an exponent of about -0.55, somewhat less steep than found in earlier studies. Our interpretations apply to samples where particle sizes are generally below 200-300 microns in maximum dimension because of probe limitations.
Zipf's law and city size distribution: A survey of the literature and future research agenda
NASA Astrophysics Data System (ADS)
Arshad, Sidra; Hu, Shougeng; Ashraf, Badar Nadeem
2018-02-01
This study provides a systematic review of the existing literature on Zipf's law for city size distribution. Existing empirical evidence suggests that Zipf's law is not always observable even for the upper-tail cities of a territory. However, the controversy with empirical findings arises due to sample selection biases, methodological weaknesses and data limitations. The hypothesis of Zipf's law is more likely to be rejected for the entire city size distribution and, in such cases, alternative distributions have been suggested. On the contrary, the hypothesis is more likely to be accepted if better empirical methods are employed and cities are properly defined. The debate is still far from conclusive. In addition, we identify four emerging areas in Zipf's law and city size distribution research: the size distribution of lower-tail cities, the size distribution of cities in sub-national regions, the alternative forms of Zipf's law, and the relationship between Zipf's law and the coherence property of the urban system.
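The most common empirical check of Zipf's law is a rank-size regression: order cities by population and regress log rank on log size, with an exponent near -1 supporting the law. The sketch below uses invented populations; the Gabaix-Ibragimov rank shift of 1/2 is one of the small-sample corrections discussed in this literature.

import numpy as np

# Hypothetical city populations (thousands); real studies use census data.
populations = np.array([8400, 3900, 2700, 2300, 1600, 1500, 1400, 1300, 1000, 950])

sizes = np.sort(populations)[::-1]          # descending order
ranks = np.arange(1, len(sizes) + 1)

# Rank-size regression: log(rank) = a + b*log(size); Zipf's law implies b close to -1.
b, a = np.polyfit(np.log(sizes), np.log(ranks), 1)
print(f"estimated Zipf exponent: {b:.2f}")

# Gabaix-Ibragimov small-sample correction uses log(rank - 1/2) instead of log(rank).
b_gi, _ = np.polyfit(np.log(sizes), np.log(ranks - 0.5), 1)
print(f"Gabaix-Ibragimov corrected exponent: {b_gi:.2f}")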
ON THE CLUSTERING OF SUBMILLIMETER GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Christina C.; Giavalisco, Mauro; Yun, Min S.
2011-06-01
We measure the angular two-point correlation function of submillimeter galaxies (SMGs) from 1.1 mm imaging of the COSMOS field with the AzTEC camera and ASTE 10 m telescope. These data yield one of the largest contiguous samples of SMGs to date, covering an area of 0.72 deg² down to a 1.26 mJy beam⁻¹ (1σ) limit, including 189 (328) sources with S/N ≥ 3.5 (3). We can only set upper limits to the correlation length r₀, modeling the correlation function as a power law with pre-assigned slope. Assuming existing redshift distributions, we derive 68.3% confidence level upper limits of r₀ ≲ 6-8 h⁻¹ Mpc at 3.7 mJy and r₀ ≲ 11-12 h⁻¹ Mpc at 4.2 mJy. Although consistent with most previous estimates, these upper limits imply that the real r₀ is likely smaller. This casts doubts on the robustness of claims that SMGs are characterized by significantly stronger spatial clustering (and thus larger mass) than differently selected galaxies at high redshift. Using Monte Carlo simulations we show that even strongly clustered distributions of galaxies can appear unclustered when sampled with limited sensitivity and coarse angular resolution common to current submillimeter surveys. The simulations, however, also show that unclustered distributions can appear strongly clustered under these circumstances. From the simulations, we predict that at our survey depth, a mapped area of 2 deg² is needed to reconstruct the correlation function, assuming smaller beam sizes of future surveys (e.g., the Large Millimeter Telescope's 6'' beam size). At present, robust measures of the clustering strength of bright SMGs appear to be below the reach of most observations.
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis
NASA Astrophysics Data System (ADS)
Yen, Tony Minghung
This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high-throughput nucleic acid quantification on a chip, outside of the hospital and centralized laboratory setting, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification-free, low in instrument cost, and high throughput, but are generally less sensitive compared to sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress to develop a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while compatible with microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nanoliter droplet self-assembly workflow to produce a 50 aM concentration sensitivity, 6 orders of dynamic range, and rapid hybridization time at under 5 minutes. The second device concept focuses on improving target copy number sensitivity, instead of concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is considered to guide device optimization. Device pattern size and target copy number are optimized based on model prediction to achieve maximal hybridization efficiency. At a 100-µm pattern size, a quantum leap in detection limit of 570 copies is achieved using the black silicon array device with a self-assembled picoliter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at a 100-µm pattern size, and thus is not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at a 100-µm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While the work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement this method on a microarray platform toward clinical sample testing for disease applications and future commercialization.
Cognitive impairments in cancer patients represent an important clinical problem. Studies to date estimating prevalence of difficulties in memory, executive function, and attention deficits have been limited by small sample sizes and many have lacked healthy control groups. More information is needed on promising biomarkers and allelic variants that may help to determine the
Is Some Data Better than No Data at All? Evaluating the Utility of Secondary Needs Assessment Data
ERIC Educational Resources Information Center
Shamblen, Stephen R.; Dwivedi, Pramod
2010-01-01
Needs assessments in substance abuse prevention often rely on secondary data measures of consumption and consequences to determine what population subgroup and geographic areas should receive a portion of limited resources. Although these secondary data measures have some benefits (e.g. large sample sizes, lack of survey response biases and cost),…
Estimating the Latent Number of Types in Growing Corpora with Reduced Cost-Accuracy Trade-Off
ERIC Educational Resources Information Center
Hidaka, Shohei
2016-01-01
The number of unique words in children's speech is one of most basic statistics indicating their language development. We may, however, face difficulties when trying to accurately evaluate the number of unique words in a child's growing corpus over time with a limited sample size. This study proposes a novel technique to estimate the latent number…
ERIC Educational Resources Information Center
Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen
2014-01-01
It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X[superscript 2] and the likelihood ratio statistic…
Military Interoperable Digital Hospital Testbed (MIDHT) Phase II
2011-07-01
personal health records has been limited, resulting in a small sample size to date. Additional providers and a new disease condition (gestational diabetes)… [Report subject terms: Metabolic Syndrome; Picture Archive and Communications System; User Satisfaction; Gestational Diabetes.] Consumer Informatics in the Chronic Care Model: Metabolic Syndrome and Gestational Diabetes in a Rural Setting. This arm focuses on finding innovative
NASA Astrophysics Data System (ADS)
Mangantar Pardamean Sianturi, Markus; Jumilawaty, Erni; Delvian; Hartanto, Adrian
2018-03-01
Blood python (Python brongersmai Stull, 1938) is one of the most heavily exploited wildlife species in Indonesia. The high demand for its skin in trade has led the government to regulate its harvesting under a quota-based setting to prevent over-harvesting. To gain understanding of the sustainability of P. brongersmai in the wild, biological characters of wild-caught specimens were studied. Samples were collected from two slaughterhouses in Rantau Prapat and Langkat. Parameters measured were morphological (snout-vent length (SVL), body mass, abdomen width) and anatomical characters (fat classes). A total of 541 P. brongersmai were sampled, comprising 269 male and 272 female snakes. Female snakes had the highest proportion of individuals with the best quality of abdominal fat reserves (Class 3). Linear models were built and tested for the significance of the relationship between fat classes, as anatomical characters, and morphological characters. All tested morphological characters were significant in female snakes. Using the linear equation models, we generated size limits to prioritize harvesting in the future. We suggest the use of SVL and abdomen width ranging between 139.7-141.5 cm and 24.72-25.71 cm, respectively, to achieve sustainability of P. brongersmai in the wild.
A critical look at national monitoring programs for birds and other wildlife species
Sauer, J.R.; O'Shea, T.J.; Bogon, M.A.
2003-01-01
Concerns about declines in numerous taxa have created a great deal of interest in survey development. Because birds have traditionally been monitored by a variety of methods, bird surveys form natural models for development of surveys for other taxa. Here I suggest that most bird surveys are not appropriate models for survey design. Most lack important design components associated with estimation of population parameters at sample sites or with sampling over space, leading to estimates that may be biased. I discuss the limitations of national bird monitoring programs designed to monitor population size. Although these surveys are often analyzed, careful consideration must be given to factors that may bias estimates but that cannot be evaluated within the survey. Bird surveys with appropriate designs have generally been developed as part of management programs that have specific information needs. Experiences gained from bird surveys provide important information for development of surveys for other taxa, and statistical developments in estimation of population sizes from counts provide new approaches to overcoming the limitations evident in many bird surveys. Design of surveys is a collaborative effort, requiring input from biologists, statisticians, and the managers who will use the information from the surveys.
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach can be recommended in CRTs with binary outcomes due to its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
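For orientation, the conventional way to size a CRT with a binary outcome is to inflate an individually randomized two-proportion sample size by the design effect and convert to clusters. The sketch below implements only that textbook approximation; it is not the KC-corrected t-test formula derived in the paper, and the proportions, cluster size and ICC are invented.

from math import ceil
from scipy import stats

def clusters_needed(p1, p2, m, icc, alpha=0.05, power=0.8):
    """Approximate clusters per arm for a two-arm CRT with a binary outcome,
    using the standard design-effect inflation of the two-proportion formula.
    (Textbook approximation, not the KC-corrected t-test formula in the paper.)"""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n_ind = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2  # per arm, individual randomization
    deff = 1 + (m - 1) * icc            # design effect for average cluster size m
    return ceil(n_ind * deff / m)       # clusters per arm

print(clusters_needed(p1=0.30, p2=0.20, m=50, icc=0.02))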
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results predict a substantial reduction in the limiting effect of snow accumulation on Montana elk populations in the coming decades. If other limiting factors do not operate with greater force, population growth rates would increase substantially.
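The mechanics of the instrumental-variables idea can be shown in a few lines: an error-prone regressor (observed log abundance) is instrumented by its own lag, and the just-identified IV estimator is (Z'X)^-1 Z'y. This is only a schematic simulation with made-up dynamics, not the elk dataset or the exact model specification of the paper.

import numpy as np

rng = np.random.default_rng(1)

# Simulated example: log population size observed with sampling error.
T = 200
true_n = np.zeros(T)
true_n[0] = 7.0
for t in range(1, T):                          # density-dependent growth + process error
    true_n[t] = true_n[t-1] + 0.5 - 0.07 * true_n[t-1] + rng.normal(0, 0.1)
obs = true_n + rng.normal(0, 0.15, T)          # observed log abundance (with sampling error)

growth = obs[2:] - obs[1:-1]                   # observed growth rate r_t
x = obs[1:-1]                                  # regressor: observed log N_t (error-prone)
z = obs[:-2]                                   # instrument: lagged log N_{t-1}

# Two-stage least squares in just-identified form: beta = (Z'X)^-1 Z'y.
X = np.column_stack([np.ones_like(x), x])
Z = np.column_stack([np.ones_like(z), z])
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ growth)
print("IV estimate of density dependence:", beta_iv[1])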
Heavy metals in the finest size fractions of road-deposited sediments.
Lanzerstorfer, Christof
2018-08-01
The concentration of heavy metals in urban road-deposited sediments (RDS) can be used as an indicator for environmental pollution. Thus, their occurrence has been studied in whole road dust samples as well as in size fractions obtained by sieving. Because of the limitations of size separation by sieving, little information is available about heavy metal concentrations in road dust size fractions <20 μm. In this study, air classification was applied for separation of dust size fractions smaller than 20 μm from RDS collected at different times during the year. The results showed only small seasonal variations in the heavy metal concentrations and size distribution. According to the Geoaccumulation Index, the pollution of the road dust samples decreased in the following order: Sb » As > Cu ≈ Zn > Cr > Cd ≈ Pb ≈ Mn > Ni > Co ≈ V. For all heavy metals the concentration was higher in the fine size fractions compared to the coarse size fractions, while the concentration of Sr was size-independent. The enrichment of the heavy metals in the finest size fraction compared to the whole RDS <200 μm was up to 4.5-fold. The size dependence of the concentration decreased in the following order: Co ≈ Cd > Sb > (Cu) ≈ Zn ≈ Pb > As ≈ V » Mn. The approximation of the size dependence of the concentration as a function of the particle size by power functions worked very well. The correlation between particle size and concentration was high for all heavy metals. The increased heavy metal concentrations in the finest size fractions should be considered in the evaluation of the contribution of road dust re-suspension to the heavy metal contamination of atmospheric dust. Thereby, power functions can be used to describe the size dependence of the concentration. Copyright © 2018 Elsevier Ltd. All rights reserved.
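Fitting the reported power functions amounts to a linear regression in log-log space, c(d) = a·d^b, with b < 0 capturing enrichment in the finer fractions. The sketch below uses invented concentrations and particle diameters solely to show the fit and the implied enrichment factor.

import numpy as np

# Hypothetical heavy-metal concentrations (mg/kg) in size fractions with
# median particle diameters in micrometres.
diameter = np.array([2.5, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
conc     = np.array([310, 240, 185, 140, 95, 70, 52])

# Fit c = a * d**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(diameter), np.log(conc), 1)
a = np.exp(log_a)
print(f"c(d) ~ {a:.0f} * d^{b:.2f}")

# Predicted enrichment of the finest fraction relative to the coarsest one.
print("enrichment factor:", (diameter[0] / diameter[-1]) ** b)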
Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja
2016-11-15
We present a biosensing platform for the detection of proteins based on agglutination of aptamer-coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates, whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25 pM with the same sample-to-answer time (15 min 30 s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10 µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments, as well as integration of the assay on a low-cost disc, are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.
Fréchette-Viens, Laurie; Hadioui, Madjid; Wilkinson, Kevin J
2017-01-15
The applicability of single particle ICP-MS (SP-ICP-MS) for the analysis of nanoparticle size distributions and the determination of particle numbers was evaluated using the rare earth oxide La2O3 as a model particle. The composition of the storage containers, as well as the ICP-MS sample introduction system, were found to significantly impact SP-ICP-MS analysis. While La2O3 nanoparticles (La2O3 NP) did not appear to interact strongly with sample containers, adsorptive losses of La3+ (over 24 h) were substantial (>72%) for fluorinated ethylene propylene bottles as opposed to polypropylene (<10%). Furthermore, each part of the sample introduction system (nebulizers made of perfluoroalkoxy alkane (PFA) or glass, PFA capillary tubing, and polyvinyl chloride (PVC) peristaltic pump tubing) contributed to La3+ adsorptive losses. On the other hand, the presence of natural organic matter in the nanoparticle suspensions led to a decreased adsorptive loss in both the sample containers and the introduction system, suggesting that SP-ICP-MS may nonetheless be appropriate for NP analysis in environmental matrices. Coupling of an ion-exchange resin to the SP-ICP-MS led to more accurate determinations of the La2O3 NP size distributions. Copyright © 2016 Elsevier B.V. All rights reserved.
36 CFR § 1004.11 - Load, weight and size limits.
Code of Federal Regulations, 2013 CFR
2013-07-01
TRAFFIC SAFETY § 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits... designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The...
Strength and Deformation Behaviour of Cap Rocks Above the CO2SINK-Reservoir
NASA Astrophysics Data System (ADS)
Mutschler, T.; Triantafyllidis, T.; Balthasar, K.; Norden, B.
2009-04-01
The cap rock of the CO2SINK storage site close to Ketzin consists of clay-rich rocks which are typical for cap rock formations above CO2 storage reservoirs. The strength and deformation behaviour of such claystone samples is therefore of fundamental importance for the characterization of secure geological storage of CO2. The elastic and anelastic deformation behaviour limits the maximum injection pressure during CO2 injection and is part of the security measures for the long-term storage of CO2. The laboratory experiments were performed on samples gathered from the injection well of the Ketzin pilot test site in Germany and are compared with the elastic and anelastic behaviour of samples from the same Keuper formation in a near-surface outcrop in the southwest of Germany showing a similar lithology. The samples from the outcrop allowed drilling of samples with a standard size of 100 mm diameter and 200 mm height as well as large samples with a diameter of 550 mm and a height of 1200 mm. The investigations place a special emphasis on the viscous behaviour of the claystones and its scaling behaviour. A special triaxial testing procedure is applied both on standard and large size samples, allowing the determination of the strength, stiffness and viscosity behaviour of the rock in one experimental run. The multi-stage technique (stepwise variation of the confining pressure) gives the strength behaviour of each single sample while applying a constant deformation rate. Stepwise varied deformation rates, on the other hand, lead to steps in the stress-strain curve from which the viscosity index is determined. The viscosity index is used directly in Norton's constitutive relations for viscoplastic simulations. The combination of tests allows for the determination of a broad range of elastic and anelastic properties. The comparison of results - both for elastic and anelastic behaviour - from standard and large samples shows that for the examined rocks a scale effect is negligible. Transition from cataclastic to non-cataclastic behaviour - the transition limit - occurs in a similar range of applied pressure levels and deformation rates even at room temperature. The obtained transition limit is very important for the judgment of the sealing capacity and integrity of the cap rock. The deformation rates predicted for the pressure and temperature conditions of the cap rock at the Ketzin test site are far below the determined transition limit during injection and after the stop of injection. As a 0° friction angle is used for the pressure and deformation limit at Ketzin, the measured elastic and anelastic behaviour of the real cap rock acts as an additional safety margin during injection and in the post-injection phase. As the examined rocks are typical for many possible storage sites, the discussed results are of importance beyond the Ketzin pilot experiment CO2SINK.
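The viscosity index extracted from the stepped-rate stages feeds into Norton-type power-law creep relations of the general form strain rate = A·σ^n·exp(-Q/RT). The sketch below only illustrates how such a relation is evaluated; the parameter values are placeholders and are not calibrated to the Ketzin claystones.

from math import exp

R = 8.314  # universal gas constant, J/(mol K)

def norton_creep_rate(sigma_mpa, A=1e-10, n=3.0, Q=54e3, T=293.15):
    """Norton power-law creep rate (1/s). A, n and Q are placeholder material
    parameters that would normally be fitted to stepped-rate triaxial data."""
    return A * sigma_mpa ** n * exp(-Q / (R * T))

for sigma in (5.0, 10.0, 20.0):  # deviatoric stress in MPa
    print(f"{sigma:5.1f} MPa -> {norton_creep_rate(sigma):.3e} 1/s")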
Burgess, George H.; Bruce, Barry D.; Cailliet, Gregor M.; Goldman, Kenneth J.; Grubbs, R. Dean; Lowe, Christopher G.; MacNeil, M. Aaron; Mollet, Henry F.; Weng, Kevin C.; O'Sullivan, John B.
2014-01-01
White sharks are highly migratory and segregate by sex, age and size. Unlike marine mammals, they neither surface to breathe nor frequent haul-out sites, hindering generation of abundance data required to estimate population size. A recent tag-recapture study used photographic identifications of white sharks at two aggregation sites to estimate abundance in “central California” at 219 mature and sub-adult individuals. They concluded this represented approximately one-half of the total abundance of mature and sub-adult sharks in the entire eastern North Pacific Ocean (ENP). This low estimate generated great concern within the conservation community, prompting petitions for governmental endangered species designations. We critically examine that study and find violations of model assumptions that, when considered in total, lead to population underestimates. We also use a Bayesian mixture model to demonstrate that the inclusion of transient sharks, characteristic of white shark aggregation sites, would substantially increase abundance estimates for the adults and sub-adults in the surveyed sub-population. Using a dataset obtained from the same sampling locations and widely accepted demographic methodology, our analysis indicates a minimum all-life stages population size of >2000 individuals in the California subpopulation is required to account for the number and size range of individual sharks observed at the two sampled sites. Even accounting for methodological and conceptual biases, an extrapolation of these data to estimate the white shark population size throughout the ENP is inappropriate. The true ENP white shark population size is likely several-fold greater as both our study and the original published estimate exclude non-aggregating sharks and those that independently aggregate at other important ENP sites. Accurately estimating the central California and ENP white shark population size requires methodologies that account for biases introduced by sampling a limited number of sites and that account for all life history stages across the species' range of habitats. PMID:24932483
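Photographic identification studies of this kind typically rest on closed-population mark-recapture estimators. As a point of reference only (the published analyses use far richer models, e.g. the Bayesian mixture model mentioned above), here is a minimal Chapman-corrected Lincoln-Petersen sketch with made-up counts.

def chapman_estimate(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen estimate of a closed population:
    n1 = individuals identified in the first survey,
    n2 = individuals identified in the second survey,
    m2 = individuals seen in both (photographic re-sightings)."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical counts; violations of closure or equal-catchability assumptions
# (e.g., transient sharks) bias this estimate, as the abstract discusses.
print(round(chapman_estimate(n1=60, n2=55, m2=15)))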
NASA Astrophysics Data System (ADS)
Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak
2014-06-01
Digital radiographic testing is a relatively new, accepted nondestructive examination technique, but its performance and limitations compared with the older film technique are still not widely known. This paper presents a study comparing the accuracy of defect size measurement and image quality obtained from film and digital radiographic techniques by testing specimens and sample defects of known size. Initially, one specimen was built with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. For the known-size sample defects, various geometrical sizes were machined so that the measured defect size could be compared with the real size in both film and digital images. Image quality was compared by considering the smallest detectable wire and the three defect images. This research used a wire-type Image Quality Indicator (IQI), 10/16 FE, per BS EN 462-1:1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film of size 3.5 x 8 inches, while the digital images were produced with a Fuji type ST-VI image plate with 100 micrometer resolution. During the tests, a GE model MF3 X-ray unit was used. The applied energy was varied from 120 to 220 kV and the current from 1.2 to 3.0 mA. The activity of the Iridium-192 gamma-ray source was in the range of 24-25 Ci. Under the mentioned conditions, the results showed that the deviation of the measured defect size from the real size was smaller for the digital radiographs than for the digitized film, whereas the image quality of the digitized-film radiographs was higher in comparison.
Improved sample preparation and counting techniques for enhanced tritium measurement sensitivity
NASA Astrophysics Data System (ADS)
Moran, J.; Aalseth, C.; Bailey, V. L.; Mace, E. K.; Overman, C.; Seifert, A.; Wilcox Freeburg, E. D.
2015-12-01
Tritium (T) measurements offer insight into a wealth of environmental applications including hydrologic tracking, discerning ocean circulation patterns, and aging ice formations. However, the relatively short half-life of T (12.3 years) limits its effective age dating range. Compounding this limitation is the decrease in atmospheric T content by over two orders of magnitude (from 1000-2000 TU in 1962 to <10 TU currently) since the cessation of above-ground nuclear testing in the 1960s. We are developing sample preparation methods coupled to direct counting of T via ultra-low background proportional counters which, when combined, offer improved T measurement sensitivity (~4.5 mmoles of H2 equivalent) and will help expand the application of T age dating to smaller sample sizes linked to persistent environmental questions despite the limitations above. For instance, this approach can be used to T-date ~2.2 mmoles of CH4 collected from sample-limited systems including microbial communities, soils, or subsurface aquifers and can be combined with radiocarbon dating to distinguish the methane's formation age from the C age in a system. This approach can also expand investigations into soil organic C, where the improved sensitivity will permit resolution of soil C into more descriptive fractions and provide direct assessments of the stability of specific classes of organic matter in soil environments. We are employing a multiple-step sample preparation system whereby organic samples are first combusted, with the resulting CO2 and H2O being used as a feedstock to synthesize CH4. This CH4 is mixed with Ar and loaded directly into an ultra-low background proportional counter for measurement of T β decay in a shallow underground laboratory. Analysis of water samples requires only the addition of geologic CO2 feedstock with the sample for methane synthesis. The chemical nature of the preparation techniques enables high sample throughput, with only the final measurement requiring T decay counting; total sample analysis time ranges from 2-5 weeks depending on T content.
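The age-dating limitation mentioned above follows directly from the decay law: an apparent tritium age is t = (t_half / ln 2)·ln(A0/A). The sketch below assumes a known, constant initial tritium content and no mixing, which real hydrologic systems rarely satisfy; the numbers are illustrative only.

from math import log

T_HALF = 12.3          # tritium half-life in years
TU_INITIAL = 10.0      # assumed tritium content of modern recharge water (TU)

def tritium_age(tu_measured, tu_initial=TU_INITIAL, t_half=T_HALF):
    """Apparent decay age in years, assuming a known initial tritium content
    and no mixing, a strong simplification of real hydrologic systems."""
    return (t_half / log(2)) * log(tu_initial / tu_measured)

for tu in (8.0, 4.0, 1.0):
    print(f"{tu:4.1f} TU -> {tritium_age(tu):5.1f} years")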
Steep discounting of delayed monetary and food rewards in obesity: a meta-analysis.
Amlung, M; Petker, T; Jackson, J; Balodis, I; MacKillop, J
2016-08-01
An increasing number of studies have investigated delay discounting (DD) in relation to obesity, but with mixed findings. This meta-analysis synthesized the literature on the relationship between monetary and food DD and obesity, with three objectives: (1) to characterize the relationship between DD and obesity in both case-control comparisons and continuous designs; (2) to examine potential moderators, including case-control v. continuous design, money v. food rewards, sample sex distribution, and sample age (younger v. older than 18 years); and (3) to evaluate publication bias. From 134 candidate articles, 39 independent investigations yielded 29 case-control and 30 continuous comparisons (total n = 10 278). Random-effects meta-analysis was conducted using Cohen's d as the effect size. Publication bias was evaluated using fail-safe N, Begg-Mazumdar and Egger tests, meta-regression of publication year and effect size, and imputation of missing studies. The primary analysis revealed a medium effect size across studies that was highly statistically significant (d = 0.43, p < 10⁻¹⁴). None of the moderators examined yielded statistically significant differences, although notably larger effect sizes were found for studies with case-control designs, food rewards and child/adolescent samples. Limited evidence of publication bias was present, although the Begg-Mazumdar test and meta-regression suggested a slightly diminishing effect size over time. Steep DD of food and money appears to be a robust feature of obesity that is relatively consistent across the DD assessment methodologies and study designs examined. These findings are discussed in the context of research on DD in drug addiction, the neural bases of DD in obesity, and potential clinical applications.
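The pooling step referred to above is standard random-effects meta-analysis; a DerSimonian-Laird sketch with invented study-level effect sizes and variances is shown below to make the calculation concrete (the actual analysis in the paper also handled moderators and publication-bias diagnostics).

import numpy as np

# Hypothetical per-study effect sizes (Cohen's d) and their sampling variances.
d = np.array([0.55, 0.30, 0.48, 0.62, 0.21, 0.40])
v = np.array([0.04, 0.02, 0.05, 0.06, 0.03, 0.02])

# DerSimonian-Laird random-effects pooling.
w_fixed = 1 / v
d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)
q = np.sum(w_fixed * (d - d_fixed) ** 2)
df = len(d) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                      # between-study variance estimate

w_re = 1 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_re:.2f} (95% CI {d_re - 1.96*se_re:.2f} to {d_re + 1.96*se_re:.2f})")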
Gochfeld, Michael; Burger, Joanna; Jeitner, Christian; Donio, Mark; Pittfield, Taryn
2014-01-01
We examined total mercury and selenium levels in muscle of striped bass (Morone saxatilis) collected from 2005 to 2008 from coastal New Jersey. Of primary interest was whether there were differences in mercury and selenium levels as a function of size and location, and whether the legal size limits increased the exposure of bass consumers to mercury. We obtained samples mainly from recreational anglers, but also by seine and trawl. For the entire sample (n = 178 individual fish), the mean (± standard error) for total mercury was 0.39 ± 0.02 μg/g (= 0.39 ppm, wet weight basis) with a maximum of 1.3 μg/g (= 1.3 ppm wet weight). Mean selenium level was 0.30 ± 0.01 μg/g (w/w) with a maximum of 0.9 μg/g. Angler-caught fish (n = 122) were constrained by legal size limits to exceed 61 cm (24 in.) and averaged 72.6 ± 1.3 cm long; total mercury averaged 0.48 ± 0.021 μg/g and selenium averaged 0.29 ± 0.01 μg/g. For comparable sizes, angler-caught fish had significantly higher mercury levels (0.3 vs 0.21 μg/g) than trawled fish. In both the total and angler-only samples, mercury was strongly correlated with length (Kendall tau = 0.37; p < 0.0001) and weight (0.38; p < 0.0001), but was not correlated with condition or with selenium. In the whole sample and all subsamples, total length yielded the highest r2 (up to 0.42) of any variable for both mercury and selenium concentrations. Trawled fish from Long Branch in August and Sandy Hook in October were the same size (68.9 vs 70.1 cm) and had the same mercury concentrations (0.22 vs 0.21 ppm), but different selenium levels (0.11 vs 0.28 ppm). The seined fish (all from Delaware Bay) had the same mercury concentration as the trawled fish from the Atlantic coast despite being smaller. Angler-caught fish from the North (Sandy Hook) were larger but had significantly lower mercury than fish from the South (mainly Cape May). Selenium levels were high in small fish, low in medium-sized fish, and increased again in larger fish, but overall selenium was correlated with length (tau = 0.14; p = 0.006) and weight (tau = 0.27; p < 0.0001). Length-squared contributed significantly to selenium models, reflecting the non-linear relationship. Inter-year differences were explained partly by differences in sizes. The selenium:mercury molar ratio was below 1:1 in 20% of the fish and 25% of the angler-caught fish. Frequent consumption of large striped bass can result in exposure above the EPA's reference dose, a problem particularly for fetal development. PMID:22226733
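Two of the quantities reported above, the Kendall rank correlation between length and mercury and the selenium:mercury molar ratio, can be computed as follows. The fish data here are simulated placeholders; only the molar masses (Se 78.97, Hg 200.59 g/mol) are fixed constants.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical fish data: total length (cm) and muscle mercury (ug/g wet weight).
length = rng.uniform(45, 110, 60)
mercury = 0.006 * length + rng.normal(0, 0.08, 60)

tau, p = stats.kendalltau(length, mercury)
print(f"Kendall tau = {tau:.2f}, p = {p:.2g}")

# Selenium:mercury molar ratio for a single fish (molar masses: Se 78.97, Hg 200.59 g/mol).
se, hg = 0.30, 0.39          # ug/g
print("Se:Hg molar ratio:", (se / 78.97) / (hg / 200.59))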
Thompson, Drew; Chen, Sheng-Chieh; Wang, Jing; Pui, David Y.H.
2015-01-01
Recent animal studies have shown that carbon nanotubes (CNTs) may pose a significant health risk to those exposed in the workplace. To further understand this potential risk, effort must be taken to measure the occupational exposure to CNTs. Results from an assessment of potential exposure to multi-walled carbon nanotubes (MWCNTs) conducted at an industrial facility where polymer nanocomposites were manufactured by an extrusion process are presented. Exposure to MWCNTs was quantified by the thermal-optical analysis for elemental carbon (EC) of respirable dust collected by personal sampling. All personal respirable samples collected (n = 8) had estimated 8-h time weighted average (TWA) EC concentrations below the limit of detection for the analysis which was about one-half of the recommended exposure limit for CNTs, 1 µg EC/m3 as an 8-h TWA respirable mass concentration. Potential exposure sources were identified and characterized by direct-reading instruments and area sampling. Area samples analyzed for EC yielded quantifiable mass concentrations inside an enclosure where unbound MWCNTs were handled and near a pelletizer where nanocomposite was cut, while those analyzed by electron microscopy detected the presence of MWCNTs at six locations throughout the facility. Through size selective area sampling it was identified that the airborne MWCNTs present in the workplace were in the form of large agglomerates. This was confirmed by electron microscopy where most of the MWCNT structures observed were in the form of micrometer-sized ropey agglomerates. However, a small fraction of single, free MWCNTs was also observed. It was found that the high number concentrations of nanoparticles, ~200000 particles/cm3, present in the manufacturing facility were likely attributable to polymer fumes produced in the extrusion process. PMID:26209597
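The exposure metric used above, an 8-h TWA respirable mass concentration, is derived from the collected mass, the sampling flow rate and the sampling time. A minimal sketch, with hypothetical numbers and the common assumption of zero exposure outside the sampled period:

def twa_8h(mass_ug, flow_lpm, minutes):
    """8-h TWA respirable mass concentration (ug/m3) from a personal sample,
    assuming zero exposure outside the sampled period (a simplifying assumption)."""
    volume_m3 = flow_lpm * minutes / 1000.0       # litres sampled, converted to cubic metres
    concentration = mass_ug / volume_m3           # concentration during the sampled period
    return concentration * minutes / 480.0        # scale to an 8-h (480-min) TWA

# Example: 0.3 ug EC collected at 2.5 L/min over 450 min.
print(f"{twa_8h(0.3, 2.5, 450):.2f} ug EC/m3")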
NASA Astrophysics Data System (ADS)
de Andrade, Jailson B.; Tanner, Roger L.
A method is described for the specific collection of formaldehyde as hydroxymethanesulfonate on bisulfite-coated cellulose filters. Following extraction in aqueous acid and removal of unreacted bisulfite, the hydroxymethanesulfonate is decomposed by base, and HCHO is determined by DNPH (2,4-dinitrophenylhydrazine) derivatization and HPLC. Since the collection efficiency for formaldehyde is moderately high even when sampling ambient air at high-volume flow rates, a limit of detection of 0.2 ppbv is achieved with 30 min sampling times. Interference from acetaldehyde co-collected as 1-hydroxyethanesulfonate is <5% using this procedure. The technique shows promise both for short-term airborne sampling and as a means of collecting mg-sized samples of HCHO on an inorganic matrix for carbon isotopic analyses.
Kowalski, Thomas; Siddiqui, Ali; Loren, David; Mertz, Howard R; Mallat, Damien; Haddad, Nadim; Malhotra, Nidhi; Sadowski, Brett; Lybik, Mark J; Patel, Sandeep N; Okoh, Emuejevoke; Rosenkranz, Laura; Karasik, Michael; Golioto, Michael; Linder, Jeffrey; Catalano, Marc F; Al-Haddad, Mohammad A
2016-09-01
To examine the utility of integrated molecular pathology (IMP) in managing surveillance of pancreatic cysts based on outcomes and analysis of false negatives (FNs) from a previously published cohort (n=492). In endoscopic ultrasound with fine-needle aspiration (EUS-FNA) of cyst fluid lacking malignant cytology, IMP demonstrated better risk stratification for malignancy at approximately 3 years' follow-up than International Consensus Guideline (Fukuoka) 2012 management recommendations in such cases. Patient outcomes and clinical features of Fukuoka and IMP FN cases were reviewed. Practical guidance for appropriate surveillance intervals and surgery decisions using IMP were derived from follow-up data, considering EUS-FNA sampling limitations and high-risk clinical circumstances observed. Surveillance intervals for patients based on IMP predictive value were compared with those of Fukuoka. Outcomes at follow-up for IMP low-risk diagnoses supported surveillance every 2 to 3 years, independent of cyst size, when EUS-FNA sampling limitations or high-risk clinical circumstances were absent. In 10 of 11 patients with FN IMP diagnoses (2% of cohort), EUS-FNA sampling limitations existed; Fukuoka identified high risk in 9 of 11 cases. In 4 of 6 FN cases by Fukuoka (1% of cohort), IMP identified high risk. Overall, 55% of cases had possible sampling limitations and 37% had high-risk clinical circumstances. Outcomes support more cautious management in such cases when using IMP. Adjunct use of IMP can provide evidence for relaxed surveillance of patients with benign cysts that meet Fukuoka criteria for closer observation or surgery. Although infrequent, FN results with IMP can be associated with EUS-FNA sampling limitations or high-risk clinical circumstances.
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
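The UCL95% described above is the usual one-sided upper confidence limit on a mean, mean + t(0.95, n-1)·s/√n, computed here for six hypothetical scrape-sample results under an approximate-normality assumption (the report's exact statistical treatment may differ in detail).

import numpy as np
from scipy import stats

# Hypothetical analyte concentrations from six scrape samples (arbitrary units).
x = np.array([12.1, 9.8, 11.4, 10.6, 13.0, 10.9])

n = len(x)
mean, sd = x.mean(), x.std(ddof=1)
# One-sided upper 95% confidence limit on the mean, assuming approximate normality.
ucl95 = mean + stats.t.ppf(0.95, n - 1) * sd / np.sqrt(n)
print(f"mean = {mean:.2f}, UCL95% = {ucl95:.2f}")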
36 CFR 1004.11 - Load, weight and size limits.
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law... designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The Board may require a...
36 CFR 1004.11 - Load, weight and size limits.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law... designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The Board may require a...
36 CFR 1004.11 - Load, weight and size limits.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 1004.11 Load, weight and size limits. (a) Vehicle load, weight and size limits established by State law... designate more restrictive limits when appropriate for traffic safety or protection of the road surface. The Board may require a...
Invited Review Small is beautiful: The analysis of nanogram-sized astromaterials
NASA Astrophysics Data System (ADS)
Zolensky, M. E.; Pieters, C.; Clark, B.; Papike, J. J.
2000-01-01
The capability of modern methods to characterize ultra-small samples is well established from analysis of interplanetary dust particles (IDPs), interstellar grains recovered from meteorites, and other materials requiring ultra-sensitive analytical capabilities. Powerful analytical techniques are available that require, under favorable circumstances, single particles of only a few nanograms for entire suites of fairly comprehensive characterizations. A returned sample of >1,000 particles with total mass of just one microgram permits comprehensive quantitative geochemical measurements that are impractical to carry out in situ by flight instruments. The main goal of this paper is to describe the state-of-the-art in microanalysis of astromaterials. Given that we can analyze fantastically small quantities of asteroids and comets, etc., we have to ask ourselves how representative are microscopic samples of bodies that measure a few to many km across? With the Galileo flybys of Gaspra and Ida, it is now recognized that even very small airless bodies have indeed developed a particulate regolith. Acquiring a sample of the bulk regolith, a simple sampling strategy, provides two critical pieces of information about the body. Regolith samples are excellent bulk samples since they normally contain all the key components of the local environment, albeit in particulate form. Furthermore, since this fine fraction dominates remote measurements, regolith samples also provide information about surface alteration processes and are a key link to remote sensing of other bodies. Studies indicate that a statistically significant number of nanogram-sized particles should be able to characterize the regolith of a primitive asteroid, although the presence of larger components within even primitive meteorites (e.g., Murchison), such as chondrules, CAIs, and large crystal fragments, points out the limitations of using data obtained from nanogram-sized samples to characterize entire primitive asteroids. However, most important asteroidal geological processes have left their mark on the matrix, since this is the finest-grained portion and therefore most sensitive to chemical and physical changes. Thus, the following information can be learned from this fine grain size fraction alone: (1) mineral paragenesis; (2) regolith processes; (3) bulk composition; (4) conditions of thermal and aqueous alteration (if any); (5) relationships to planets, comets, and meteorites (via isotopic analyses, including oxygen); (6) abundance of water and hydrated material; (7) abundance of organics; (8) history of volatile mobility; (9) presence and origin of presolar and/or interstellar material. Most of this information can even be obtained from dust samples from bodies for which nanogram-sized samples are not truly representative. Future advances in sensitivity and accuracy of laboratory analytical techniques can be expected to enhance the science value of nano- to microgram-sized samples even further. This highlights a key advantage of sample returns - that the most advanced analysis techniques can always be applied in the laboratory, and that well-preserved samples are available for future investigations.
Subattomole sensitivity in biological accelerator mass spectrometry.
Salehpour, Mehran; Possnert, Göran; Bryhni, Helge
2008-05-15
The Uppsala University 5 MV Pelletron tandem accelerator has been used to study (14)C-labeled biological samples utilizing accelerator mass spectrometry (AMS) technology. We have adapted a sample preparation method for small biological samples down to a few tens of micrograms of carbon, involving, among others, miniaturization of the graphitization reactor. Standard AMS requires about 1 mg of carbon with a limit of quantitation of about 10 amol. Results are presented for a range of small sample sizes with concentrations down to below 1 pM of a pharmaceutical substance in human blood. It is shown that (14)C-labeled molecular markers can be routinely measured from the femtomole range down to a few hundred zeptomoles (10(-21) mol), without the use of any additional separation methods.
Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin
2017-06-01
A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (equation is included in the full-text article). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
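To put the observed average sample size of 153 in context, the normal-approximation formula for a two-arm parallel trial gives the following per-arm numbers for 80% power at a two-sided 5% alpha; this is the generic textbook calculation, not the review authors' own computation.

from math import ceil
from scipy import stats

def n_per_arm(smd, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-arm parallel trial to detect
    a standardized mean difference `smd` (normal approximation, 1:1 allocation)."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 / smd ** 2)

for smd in (0.3, 0.5, 0.8):
    print(f"SMD {smd}: {n_per_arm(smd)} per arm, {2 * n_per_arm(smd)} total")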
Replacement of filters for respirable quartz measurement in coal mine dust by infrared spectroscopy.
Farcas, Daniel; Lee, Taekhee; Chisholm, William P; Soo, Jhy-Charm; Harper, Martin
2016-01-01
The objective of this article is to compare and characterize nylon, polypropylene (PP), and polyvinyl chloride (PVC) membrane filters that might be used to replace the vinyl/acrylic co-polymer (DM-450) filter currently used in the Mine Safety and Health Administration (MSHA) P-7 method (Quartz Analytical Method) and the National Institute for Occupational Safety and Health (NIOSH) Manual of Analytical Methods 7603 method (QUARTZ in coal mine dust, by IR re-deposition). This effort is necessary because DM-450 filters are no longer commercially available and an impending shortage exists. For example, the MSHA Pittsburgh laboratory alone analyzes annually approximately 15,000 samples according to the MSHA P-7 method, which requires DM-450 filters. Membrane filters suitable for on-filter analysis should have high infrared (IR) transmittance in the spectral region 600-1000 cm(-1). Nylon (47 mm, 0.45 µm pore size), PP (47 mm, 0.45 µm pore size), and PVC (47 mm, 5 µm pore size) filters meet this specification. Limits of detection and limits of quantification were determined from Fourier transform infrared spectroscopy (FTIR) measurements of blank filters. The average measured quartz mass and coefficient of variation were determined from test filters spiked with respirable α-quartz following the MSHA P-7 and NIOSH 7603 methods. Quartz was also quantified in samples of respirable coal dust on each test filter type using the MSHA and NIOSH analysis methods. The results indicate that PP and PVC filters may replace the DM-450 filters for quartz measurement in coal dust by FTIR. PVC filters of 5 µm pore size seemed to be a suitable replacement, although their ability to retain small particulates should be checked by further experiment.
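Limits of detection and quantification from blank filters are conventionally taken as 3 and 10 times the standard deviation of replicate blank measurements (the paper may use a slightly different convention). A minimal sketch with invented blank readings:

import numpy as np

# Hypothetical FTIR quartz readings (ug) from ten blank filters of one media type.
blanks = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.9, 1.1, 1.0])

sd_blank = blanks.std(ddof=1)
lod = 3 * sd_blank       # common convention: LOD = 3 x SD of blanks
loq = 10 * sd_blank      # common convention: LOQ = 10 x SD of blanks
print(f"LOD = {lod:.2f} ug quartz, LOQ = {loq:.2f} ug quartz")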
HoloGondel: in situ cloud observations on a cable car in the Swiss Alps using a holographic imager
NASA Astrophysics Data System (ADS)
Beck, Alexander; Henneberger, Jan; Schöpfer, Sarah; Fugal, Jacob; Lohmann, Ulrike
2017-02-01
In situ observations of cloud properties in complex alpine terrain, where research aircraft cannot sample, are commonly conducted at mountain-top research stations and limited to single-point measurements. The HoloGondel platform overcomes this limitation by using a cable car to obtain vertical profiles of the microphysical and meteorological cloud parameters. The main component of the HoloGondel platform is the HOLographic Imager for Microscopic Objects (HOLIMO 3G), which uses digital in-line holography to image cloud particles. Based on two-dimensional images, the microphysical cloud parameters for the size range from small cloud particles to large precipitation particles are obtained for the liquid and ice phase. The low traveling velocity of a cable car, on the order of 10 m s-1, allows measurements with high spatial resolution; however, at the same time it leads to an unstable air speed towards the HoloGondel platform. Holographic cloud imagers, which have a sample volume that is independent of the air speed, are therefore well suited for measurements on a cable car. Example measurements of the vertical profiles observed in a liquid cloud and a mixed-phase cloud at the Eggishorn in the Swiss Alps in the winters of 2015 and 2016 are presented. The HoloGondel platform reliably observes cloud droplets larger than 6.5 µm, partitions between cloud droplets and ice crystals for sizes larger than 25 µm, and obtains a statistically significant size distribution for every 5 m of vertical ascent.
Otero, Jorge; Guerrero, Hector; Gonzalez, Laura; Puig-Vidal, Manel
2012-01-01
The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging, depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that scanning faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a factor of 4. PMID:22368491
Characteristics of Qualitative Descriptive Studies: A Systematic Review
Kim, Hyejin; Sefcik, Justine S.; Bradway, Christine
2016-01-01
Qualitative description (QD) is a term that is widely used to describe qualitative studies of health care and nursing-related phenomena. However, limited discussions regarding QD are found in the existing literature. In this systematic review, we identified characteristics of methods and findings reported in research articles published in 2014 whose authors identified the work as QD. After searching and screening, data were extracted from the sample of 55 QD articles and examined to characterize research objectives, design justification, theoretical/philosophical frameworks, sampling and sample size, data collection and sources, data analysis, and presentation of findings. In this review, three primary findings were identified. First, despite inconsistencies, most articles included characteristics consistent with limited, available QD definitions and descriptions. Next, flexibility or variability of methods was common and desirable for obtaining rich data and achieving understanding of a phenomenon. Finally, justification for how a QD approach was chosen and why it would be an appropriate fit for a particular study was limited in the sample and, therefore, in need of increased attention. Based on these findings, recommendations include encouragement to researchers to provide as many details as possible regarding the methods of their QD study so that readers can determine whether the methods used were reasonable and effective in producing useful findings. PMID:27686751
2014-01-01
The fibrogenicity and carcinogenicity of asbestos fibers are dependent on several fiber parameters, including fiber dimensions. Based on the WHO (World Health Organization) definition, the current regulations focus on long asbestos fibers (LAF) (Length: L ≥ 5 μm, Diameter: D < 3 μm and L/D ratio > 3). However, air samples also contain short asbestos fibers (SAF) (L < 5 μm). In a recent study we found that several air samples collected in buildings with asbestos-containing materials (ACM) were composed only of SAF, sometimes at a concentration of ≥10 fibers·L−1. This exhaustive review focuses on available information from peer-reviewed publications on the size-dependent pathogenic effects of asbestos fibers reported in experimental in vivo and in vitro studies. In the literature, the findings that SAF are less pathogenic than LAF are based on experiments in which a cut-off of 5 μm was generally used to differentiate short from long asbestos fibers. Nevertheless, the value of 5 μm as the length limit is not based on scientific evidence, but is a convention for comparative analyses. From this review, it is clear that the pathogenicity of SAF cannot be completely ruled out, especially in high-exposure situations. Therefore, the presence of SAF in air samples appears to be an indicator of the degradation of ACM, and their systematic measurement should be considered for inclusion in the regulations. Measurement of these fibers in air samples will then make it possible to identify pollution and anticipate health risks. PMID:25043725
Demchenko, Natalia L; Chapman, John W; Durkina, Valentina B; Fadeev, Valeriy I
2016-01-01
Ampelisca eschrichtii are among the most important prey of the Western North Pacific gray whales, Eschrichtius robustus. The largest and densest known populations of this amphipod occur in the gray whale's Offshore feeding area on the Northeastern Sakhalin Island Shelf. The remote location, ice cover, and stormy weather at the Offshore area have prevented winter sampling, and the incomplete annual sampling has confounded efforts to resolve the life history and production of A. eschrichtii. Expanded comparisons of population size structure and individual reproductive development between late spring and early fall over six sampling years between 2002 and 2013, however, reveal that A. eschrichtii are gonochoristic, iteroparous, mature at body lengths greater than 15 mm, and have a two-year life span. The low frequencies of brooding females, the lack of early stage juveniles, and the lack of individual or population growth or biomass increases over late spring and summer all indicate that growth and reproduction occur primarily in winter, when sampling does not occur. Distinct juvenile and adult size cohorts additionally indicate that growth and juvenile production occur in winter through spring under ice cover. Winter growth thus implies that winter detritus or primary production is a critical food source for these ampeliscid populations, and yet the Offshore area and Eastern Sakhalin Shelf ampeliscid communities may be the most abundant and productive amphipod populations in the world. These A. eschrichtii populations are unlikely to be limited by western gray whale predation. Whether benthic community structure can limit access and foraging success of western gray whales is unclear.
A Laboratory Experiment for the Statistical Evaluation of Aerosol Retrieval (STEAR) Algorithms
NASA Astrophysics Data System (ADS)
Schuster, G. L.; Espinosa, R.; Ziemba, L. D.; Beyersdorf, A. J.; Rocha Lima, A.; Anderson, B. E.; Martins, J. V.; Dubovik, O.; Ducos, F.; Fuertes, D.; Lapyonok, T.; Shook, M.; Derimian, Y.; Moore, R.
2016-12-01
We have developed a method for validating Aerosol Robotic Network (AERONET) retrieval algorithms by mimicking atmospheric extinction and radiance measurements in a laboratory experiment. This enables radiometric retrievals that utilize the same sampling volumes, relative humidities, and particle size ranges as observed by other in situ instrumentation in the experiment. We utilize three Cavity Attenuated Phase Shift (CAPS) monitors for extinction and UMBC's three-wavelength Polarized Imaging Nephelometer (PI-Neph) for angular scattering measurements. We subsample the PI-Neph radiance measurements to angles that correspond to AERONET almucantar scans, with solar zenith angles ranging from 50 to 77 degrees. These measurements are then used as input to the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm, which retrieves size distributions, complex refractive indices, single-scatter albedos (SSA), and lidar ratios for the in situ samples. We obtained retrievals with residuals R < 10% for 100 samples. The samples that we tested include Arizona Test Dust, Arginotec NX, Senegal clay, Israel clay, montmorillonite, hematite, goethite, volcanic ash, ammonium nitrate, ammonium sulfate, and fullerene soot. Samples were alternately dried or humidified, and size distributions were limited to diameters of 1.0 or 2.5 µm by using a cyclone. The SSA at 532 nm for these samples ranged from 0.59 to 1.00 when computed with CAPS extinction and PSAP absorption measurements. The GRASP retrieval provided SSAs that are highly correlated with the in situ SSAs, and the correlation coefficients ranged from 0.955 to 0.976, depending upon the simulated solar zenith angle. The GRASP SSAs exhibited an average absolute bias of +0.023 ± 0.01 with respect to the extinction and absorption measurements for the entire dataset. Although our apparatus was not capable of measuring backscatter lidar ratio, we did measure bistatic lidar ratios at a scattering angle of 173°. The GRASP bistatic lidar ratios had correlations of 0.488 to 0.735 (depending upon simulated SZA) with respect to in situ measurements, positive relative biases of 6-10%, and average absolute biases of 4.0-6.6 sr. We also compared the GRASP size distributions to aerodynamic particle size measurements.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...
NASA Astrophysics Data System (ADS)
Milliere, L.; Maskasheva, K.; Laurent, C.; Despax, B.; Boudou, L.; Teyssedre, G.
2016-01-01
The aim of this work is to limit charge injection from a semi-conducting electrode into low density polyethylene (LDPE) under dc field by tailoring the polymer surface using a silver nanoparticle-containing layer. The layer is composed of a plane of silver nanoparticles embedded in a semi-insulating organosilicon matrix deposited on the polyethylene surface by a plasma process. Size, density and surface coverage of the nanoparticles are controlled through the plasma process. Space charge distribution in 300 μm thick LDPE samples is measured by the pulsed-electroacoustic technique following short-term (step-wise voltage increase up to 50 kV mm-1, each step 20 min in duration, followed by a polarity inversion) and longer-term (up to 12 h under 40 kV mm-1) voltage application protocols. A comparative study of the space charge distribution between a reference polyethylene sample and the tailored samples is presented. It is shown that the barrier effect depends on the size distribution and the surface area covered by the nanoparticles: 15 nm (average size) silver nanoparticles with a high surface density, but still not percolating, form an efficient barrier layer that suppresses charge injection. It is worth noting that charge injection is detected for samples tailored with (i) percolating nanoparticles embedded in the organosilicon layer; (ii) the organosilicon layer only, without nanoparticles; and (iii) smaller silver particles (<10 nm) embedded in the organosilicon layer. The amount of injected charge in the tailored samples increases gradually across the sample types in the order listed above. The mechanism of charge injection mitigation is discussed on the basis of complementary experiments carried out on the nanocomposite layer, such as surface potential measurements. The ability of silver clusters to stabilize electrical charges close to the electrode, thereby counterbalancing the applied field, appears to be a key factor in explaining the charge injection mitigation effect.
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
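A minimal sketch of the kind of check the authors describe: computing the Pearson correlation between sample size and effect size, with a 95% confidence interval via the Fisher z-transformation. The data here are synthetic stand-ins generated to mimic a publication-bias-like pattern; the real analysis used the 1,000 sampled articles.

```python
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(0)

# Synthetic stand-in for the extracted (sample size, effect size) pairs.
n_articles = 300
n = rng.integers(20, 500, size=n_articles)
effect = 2.8 / np.sqrt(n) + rng.normal(0, 0.15, size=n_articles)  # bias-like pattern

r, _ = pearsonr(n, effect)

# 95% CI for r via the Fisher z-transformation
z = np.arctanh(r)
se = 1.0 / np.sqrt(n_articles - 3)
lo, hi = np.tanh(z - norm.ppf(0.975) * se), np.tanh(z + norm.ppf(0.975) * se)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```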
Fu, Jiaqi; Zhang, Xu; Qian, Shahua; Zhang, Lin
2012-05-30
A unified method for the speciation analysis of Se (IV) and Se (VI) in environmental water samples was developed, using nano-sized TiO(2) colloid as the adsorbent and hydride generation atomic fluorescence spectrometry (HG-AFS) for determination. When the pH of the bulk solution was between 6.0 and 7.0, more than 97.0% of Se (IV) was adsorbed onto 1 mL of nano-sized TiO(2) colloid (0.2%), while Se (VI) was barely adsorbed. The method therefore makes it possible to preconcentrate and determine Se (IV) and Se (VI) separately. The precipitated TiO(2) with the concentrated selenium was directly converted to a colloid without desorption, and the selenium in the resulting colloid was then determined by HG-AFS. The detection limits (3σ) were 24 ng/L for Se (IV) and 42 ng/L for Se (VI), with relative standard deviations (RSD) of 7.8% (n=6) and 7.0% (n=6), respectively. This simple, sensitive, and unified method was successfully applied to the separation and speciation of ultra-trace Se (IV) and Se (VI) in environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost-effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method, expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1), is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
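For orientation, here is a naive dart-throwing sketch of a Poisson-disk point set in the unit square, the kind of configuration the tuning framework starts from. It is not truly maximal (that would require tracking the remaining uncovered voids) and does not implement the relocate/inject/eject operations; the radius and trial count are arbitrary assumptions.

```python
import numpy as np

def poisson_disk_dart_throwing(r, n_trials=20000, seed=1):
    """Naive dart throwing in the unit square: accept a candidate point only
    if it lies at least r from every previously accepted point."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_trials):
        p = rng.random(2)
        if all(np.hypot(*(p - q)) >= r for q in pts):
            pts.append(p)
    return np.array(pts)

disks = poisson_disk_dart_throwing(r=0.05)
print(len(disks), "disks accepted")
```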
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Liu, Chao; Xue, Chundong; Chen, Xiaodong; Shan, Lei; Tian, Yu; Hu, Guoqing
2015-06-16
Viscoelasticity-induced particle migration has recently received increasing attention due to its ability to provide high-quality focusing over a wide range of flow rates. However, its application has been limited to the low-throughput regime, since particles can defocus as the flow rate increases. Using an engineered carrier medium with constant and low viscosity and strong elasticity, the sample flow rates are improved to be an order of magnitude higher than those in existing studies. Utilizing the differential focusing of particles of different sizes, we present here sheathless particle/cell separation in simple straight microchannels that possess excellent parallelizability for further throughput enhancement. The present method can be implemented over a wide range of particle/cell sizes and flow rates. We successfully separate small particles from larger particles, MCF-7 cells from red blood cells (RBCs), and Escherichia coli (E. coli) bacteria from RBCs in different straight microchannels. The proposed method could broaden the applications of viscoelastic microfluidic devices to particle/cell separation due to the enhanced sample throughput and simple channel design.
In vivo imaging of cancer cell size and cellularity using temporal diffusion spectroscopy.
Jiang, Xiaoyu; Li, Hua; Xie, Jingping; McKinley, Eliot T; Zhao, Ping; Gore, John C; Xu, Junzhong
2017-07-01
A temporal diffusion MRI spectroscopy-based approach has been developed to quantify cancer cell size and density in vivo. The novel Imaging Microstructural Parameters Using Limited Spectrally Edited Diffusion (IMPULSED) method selects a specific, limited diffusion spectral window for accurate quantification of cell sizes ranging from 10 to 20 μm in common solid tumors. In practice, this is achieved by combining a single long-diffusion-time pulsed gradient spin echo (PGSE) acquisition with three low-frequency oscillating gradient spin echo (OGSE) acquisitions. To validate the approach, hematoxylin and eosin staining and immunostaining of cell membranes, in concert with whole-slide imaging, were used to visualize nuclei and cell boundaries and hence enable accurate estimates of cell size and cellularity. Based on a two-compartment model (incorporating intra- and extracellular spaces), accurate estimates of cell sizes were obtained in vivo for three types of human colon cancers. The IMPULSED-derived apparent cellularities showed a stronger correlation (r = 0.81; P < 0.0001) with histology-derived cellularities than conventional ADCs (r = -0.69; P < 0.03). The IMPULSED approach samples a specific region of the temporal diffusion spectrum with enhanced sensitivity to length scales of 10-20 μm, and enables measurements of cell sizes and cellularities in solid tumors in vivo. Magn Reson Med 78:156-164, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Synthesis and characterization of pore size-tunable magnetic mesoporous silica nanoparticles.
Zhang, Jixi; Li, Xu; Rosenholm, Jessica M; Gu, Hong-chen
2011-09-01
Magnetic mesoporous silica nanoparticles (M-MSNs) are emerging as one of the most appealing candidates for theranostic carriers. Herein, a simple synthesis method for M-MSNs with a single Fe(3)O(4) nanocrystal core and a mesoporous shell with radially aligned pores was elaborated using tetraethyl orthosilicate (TEOS) as the silica source, the cationic surfactant CTAB as the template, and 1,3,5-triisopropylbenzene (TMB)/decane as pore swelling agents. Because of the particular localization of TMB during the synthesis process, the pore size increased with the amount of added TMB only within a limited range, while further addition of TMB led to severe particle coalescence and a poorly developed pore structure. On the other hand, when a proper amount of decane was incorporated jointly with limited amounts of TMB, effective pore expansion of M-MSNs similar to that of analogous mesoporous silica nanoparticles was realized. The resultant M-MSN materials possessed smaller particle sizes (about 40-70 nm in diameter), tunable pore sizes (3.8-6.1 nm), high surface areas (700-1100 m(2)/g), and large pore volumes (0.44-1.54 cm(3)/g). We also demonstrate their high potential for conventional DNA loading. The maximum loading capacity of salmon sperm DNA (375 mg/g) was obtained with the M-MSN sample with the largest pore size of 6.1 nm. Copyright © 2011 Elsevier Inc. All rights reserved.
Andrade, G C R M; Monteiro, S H; Francisco, J G; Figueiredo, L A; Botelho, R G; Tornisielo, V L
2015-05-15
A quick and sensitive liquid chromatography-electrospray ionization tandem mass spectrometry method, using dynamic multiple reaction monitoring and a 1.8-μm particle size analytical column, was developed to determine 57 pesticides in tomato in a 13-min run. The QuEChERS (quick, easy, cheap, effective, rugged, and safe) method was used for sample preparation, and validation was carried out in compliance with EU SANCO guidelines. The method was applied to 58 tomato samples. More than 84% of the compounds investigated showed limits of detection equal to or lower than 5 mg kg(-1). A mild (<20%), medium (20-50%), and strong (>50%) matrix effect was observed for 72%, 25%, and 3% of the pesticides studied, respectively. Eighty-one percent of the pesticides showed recoveries between 70% and 120%. Twelve pesticides were detected in 35 samples, all below the maximum residue levels permitted by Brazilian legislation; 15 samples exceeded the maximum residue levels established by EU legislation for methamidophos, 10 exceeded the limits for acephate, and four exceeded the limits for bromuconazole. Copyright © 2014 Elsevier Ltd. All rights reserved.
Lindley, C.E.; Burkhardt, M.R.; DeRusseau, S.N.
1994-01-01
Organic explosives are determined in samples of ground water and surface water with emphasis on identifying and quantifying trinitrotoluene (TNT) metabolites. Water samples are filtered to remove suspended particulate material and passed through a polystyrene divinylbenzene-packed cartridge by a vacuum-extraction system. The target analytes subsequently are eluted with acetonitrile. A high-performance liquid chromatograph (HPLC) equipped with a photodiode-array detector is used for sample analysis. Analytes are separated on an octadecylsilane column using a methanol, water, and acetonitrile gradient elution. The compounds 2,4- and 2,6-dinitrotoluene are separated through an independent, isocratic elution. Method detection limits, on the basis of a 1-liter sample size, range from 0.11 to 0.32 microgram per liter. Recoveries averaged from 71 to 101 percent for 13 analytes in one set of HPLC-grade water fortified at about 1 microgram per liter. The method is limited to use by analysts experienced in handling explosive materials. (USGS)
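The report's detection-limit procedure is not reproduced here, but a common single-laboratory approach computes the method detection limit from replicate low-level spikes as the one-sided 99th-percentile Student's t value times the replicate standard deviation; the sketch below uses made-up replicate values for illustration.

```python
import numpy as np
from scipy.stats import t

# Hypothetical replicate measurements (micrograms per liter) of a low-level spike
spikes = np.array([0.21, 0.18, 0.25, 0.19, 0.22, 0.20, 0.23])

s = spikes.std(ddof=1)                    # replicate standard deviation
t99 = t.ppf(0.99, df=len(spikes) - 1)     # one-sided 99th-percentile t value
mdl = t99 * s
print(f"method detection limit ~ {mdl:.2f} micrograms per liter")
```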
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majid, Z.A.; Mahmud, H.; Shaaban, M.G.
Stabilization/solidification of hazardous wastes is used to convert hazardous metal hydroxide waste sludge into a solid mass with better handling properties. This study investigated the pore size development of ordinary portland cement pastes containing metal hydroxide waste sludge and rice husk ash using mercury intrusion porosimetry. The effects of age and the addition of rice husk ash on pore size development and strength were studied. It was found that the pore structures of the mixes changed significantly with curing age. The pore size shifted from 1,204 to 324 Å for the 3-day-old cement paste, and from 956 to 263 Å for a 7-day-old sample. A reduction in pore size distribution at different curing ages was also observed in the other mixtures. From this limited study, no conclusion could be made as to any correlation between strength development and porosity. 10 refs., 6 figs., 3 tabs.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
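A minimal sketch of the extrapolation idea, assuming the leading finite-size correction to the large-deviation-function estimator scales as 1/N: fit the estimates against 1/N and read off the intercept as the infinite-size limit (the same fit in 1/T handles the finite-time direction). The estimator values below are invented for illustration.

```python
import numpy as np

# Hypothetical estimates of a large deviation function from population-dynamics
# runs at increasing population size N (simulation time held fixed and long).
N = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
psi_hat = np.array([0.512, 0.506, 0.503, 0.5015, 0.5008])   # invented values

# Fit psi_hat = psi_inf + a / N and keep the intercept as the N -> infinity limit
A = np.column_stack([np.ones_like(N), 1.0 / N])
(psi_inf, a), *_ = np.linalg.lstsq(A, psi_hat, rcond=None)
print(f"infinite-size extrapolation: psi ~ {psi_inf:.4f}")
```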
Depression in Parents of Children Diagnosed with Autism Spectrum Disorder: A Claims-Based Analysis
ERIC Educational Resources Information Center
Cohrs, Austin C.; Leslie, Douglas L.
2017-01-01
Previous studies showing that Autism Spectrum Disorder (ASD) in children can have secondary effects on the child's parents are limited by small sample sizes and parent self-report. We examined the odds of depression in parents of children with ASD compared to parents of children without ASD using a large national claims database. Mothers (OR 2.95,…
Virtual reality gaming in the rehabilitation of the upper extremities post-stroke.
Yates, Michael; Kelemen, Arpad; Sik Lanyi, Cecilia
2016-01-01
Occurrences of strokes often result in unilateral upper limb dysfunction. Dysfunctions of this nature frequently persist and can present chronic limitations to activities of daily living. Research into applying virtual reality gaming systems to provide rehabilitation therapy has seen a resurgence. Themes explored in stroke rehabilitation for paretic limbs are action observation and imitation, versatility, intensity and repetition, and preservation of gains. Fifteen articles were ultimately selected for review. The purpose of this literature review is to compare the various virtual reality gaming modalities in the current literature and ascertain their efficacy. The literature supports the use of virtual reality gaming rehabilitation therapy as equivalent to traditional therapies or as a successful augmentation to those therapies. While some degree of rigor was displayed in the literature, small sample sizes, variation in study lengths and therapy durations, and unequal controls reduce generalizability and comparability. Future studies should incorporate larger sample sizes and post-intervention follow-up measures.
Composite outcomes in randomized clinical trials: arguments for and against.
Ross, Sue
2007-02-01
Composite outcomes that combine a number of individual outcomes (such as types of morbidity) are frequently used as primary outcomes in obstetrical trials. The main argument for their use is to ensure that trials can answer important clinical questions in a timely fashion, without needing huge sample sizes. Arguments against their use are that composite outcomes may be difficult to use and interpret, leading to errors in sample size estimation, possible contradictory trial results, and difficulty in interpreting findings. Such problems may reduce the credibility of the research, and may impact on the implementation of findings. Composite outcomes are an attractive solution to help to overcome the problem of limited available resources for clinical trials. However, future studies should carefully consider both the advantages and disadvantages before using composite outcomes. Rigorous development and reporting of composite outcomes is essential if the research is to be useful.
Incidental Lewy Body Disease: Clinical Comparison to a Control Cohort
Adler, Charles H.; Connor, Donald J.; Hentz, Joseph G.; Sabbagh, Marwan N.; Caviness, John N.; Shill, Holly A.; Noble, Brie; Beach, Thomas G.
2010-01-01
Limited clinical information has been published on cases pathologically diagnosed with incidental Lewy body disease (ILBD). Standardized, longitudinal movement and cognitive data were collected on a cohort of subjects enrolled in the Sun Health Research Institute Brain and Body Donation Program. Of 277 autopsied subjects who had antemortem clinical evaluations within the previous 3 years, 76 did not have Parkinson's disease, a related disorder, or dementia, of which 15 (20%) had ILBD. Minor extrapyramidal signs were common in subjects with and without ILBD. Cognitive testing revealed an abnormality in the ILBD group in the Trails B test only. ILBD cases had olfactory dysfunction; however, the sample size was very small. This preliminary report revealed that ILBD cases have movement and cognitive findings that, for the most part, were not out of proportion to those of similarly assessed and age-similar cases without Lewy bodies. A larger sample size is needed to have the power to better assess group differences. PMID:20175211
Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea
2011-01-01
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify the size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate the sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
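A crude Monte Carlo version of the third-phase power calculation: simulate plot-level cover with an assumed linear decline and plot-to-plot noise, test for a trend by regression, and count how often the trend is detected. The baseline cover, noise level, revisit schedule, and regression test are all illustrative assumptions, not the program's actual design values.

```python
import numpy as np
from scipy import stats

def power_for_trend(n_plots=8, years=np.arange(0, 31, 5), total_change=-0.25,
                    baseline=40.0, plot_sd=8.0, n_sims=2000, alpha=0.05, seed=2):
    """Monte Carlo power: fraction of simulated datasets in which a linear
    regression of plot cover on year detects the imposed trend."""
    rng = np.random.default_rng(seed)
    slope = baseline * total_change / years[-1]   # % cover change per year
    yr = np.repeat(years, n_plots)
    hits = 0
    for _ in range(n_sims):
        cover = baseline + slope * yr + rng.normal(0, plot_sd, yr.size)
        hits += stats.linregress(yr, cover).pvalue < alpha
    return hits / n_sims

print(f"approximate power: {power_for_trend():.2f}")
```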
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
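A generative sketch of the kind of data such a model describes, under the paper's adjacency assumption: local abundances evolve through survival, recruitment, and movement to neighbouring patches, and repeated counts are binomial thins of the latent abundances. Parameter names and values are assumptions for illustration; the sketch simulates data only and does not fit the model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Survival phi, per-capita recruitment gamma, emigration probability eps
# (movers split evenly between the two neighbouring patches; emigrants
# stepping off either end leave the system), detection probability p.
n_patches, n_years, n_surveys = 10, 8, 3
phi, gamma, eps, p = 0.7, 0.4, 0.2, 0.6

N = np.zeros((n_patches, n_years), dtype=int)
N[:, 0] = rng.poisson(20, n_patches)
for t in range(1, n_years):
    survivors = rng.binomial(N[:, t - 1], phi)
    movers = rng.binomial(survivors, eps)
    stay = survivors - movers
    arrive = np.zeros(n_patches, dtype=int)
    for i in range(n_patches):
        left = rng.binomial(movers[i], 0.5)
        right = movers[i] - left
        if i > 0:
            arrive[i - 1] += left
        if i < n_patches - 1:
            arrive[i + 1] += right
    recruits = rng.poisson(gamma * N[:, t - 1])
    N[:, t] = stay + arrive + recruits

# Repeated counts with imperfect detection (the data such models are fit to)
counts = rng.binomial(N[:, :, None], p, size=(n_patches, n_years, n_surveys))
print(counts.shape)
```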
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
NASA Astrophysics Data System (ADS)
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
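The core calculation is simple: the turbulent flux is the time average of the product of the fluctuating vertical wind and the fluctuating particle concentration over an averaging block. A minimal sketch with synthetic, partially correlated series (block length, units, and magnitudes are assumptions):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy-covariance flux from synchronous vertical wind w (m/s) and particle
    number concentration c (particles/m^3): F = mean(w' * c')."""
    return np.mean((w - w.mean()) * (c - c.mean()))   # particles m^-2 s^-1

# Synthetic 30-minute block at 10 Hz with a weak positive (upward) flux built in
rng = np.random.default_rng(4)
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)
c = 5e6 + 2e6 * (0.1 * w / 0.3 + rng.normal(0.0, 1.0, n))   # partially correlated
print(f"flux ~ {eddy_covariance_flux(w, c):.3e} particles m^-2 s^-1")
```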
Pyen, Grace S.; Browner, Richard F.; Long, Stephen
1986-01-01
A fixed-size simplex has been used to determine the optimum conditions for the simultaneous determination of arsenic, selenium, and antimony by hydride generation and inductively coupled plasma emission spectrometry. The variables selected for the simplex were carrier gas flow rate, rf power, viewing height, and reagent conditions. The detection limit for selenium was comparable to the preoptimized case, but there were twofold and fourfold improvements in the detection limits for arsenic and antimony, respectively. Precision of the technique was assessed with the use of artificially prepared water samples.
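A minimal sketch of a Spendley-style fixed-size simplex search of the sort described, applied to a toy two-variable response surface: the worst vertex is reflected through the centroid of the remaining vertices, so the simplex keeps a constant size as it walks toward the optimum. The response function, step size, and move count are illustrative assumptions.

```python
import numpy as np

def fixed_size_simplex(response, start, step, n_moves=40):
    """Spendley-style fixed-size simplex search (maximization): reflect the
    worst vertex through the centroid of the others, keeping the simplex size
    constant (unlike the variable-size Nelder-Mead simplex)."""
    d = len(start)
    verts = np.vstack([start, start + step * np.eye(d)])   # initial simplex
    vals = np.array([response(v) for v in verts])
    last = -1
    for _ in range(n_moves):
        worst = int(np.argmin(vals))
        if worst == last:                       # avoid reflecting straight back
            worst = int(np.argsort(vals)[1])    # use the next-worst vertex
        centroid = verts[np.arange(d + 1) != worst].mean(axis=0)
        new = 2.0 * centroid - verts[worst]     # reflection, same edge lengths
        verts[worst], vals[worst] = new, response(new)
        last = worst
    best = int(np.argmax(vals))
    return verts[best], vals[best]

# Toy "signal intensity" surface with an optimum at (1.2 L/min, 1.1 kW)
response = lambda x: -((x[0] - 1.2) ** 2 + (x[1] - 1.1) ** 2)
print(fixed_size_simplex(response, start=np.array([0.5, 0.5]), step=0.2))
```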
Díaz, Laura; Llorca-Pórcel, Julio; Valor, Ignacio
2008-08-22
A liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based method for the detection of pesticides in tap and treated wastewater was developed and validated according to the ISO/IEC 17025:1999. Key features of this method include direct injection of 100 microL of sample, an 11 min separation by means of a rapid resolution liquid chromatography system with a 4.6 mm x 50 mm, 1.8 microm particle size reverse phase column and detection by electrospray ionization (ESI) MS-MS. The limits of detection were below 15 ng L(-1) and correlation coefficients for the calibration curves in the range of 30-2000 ng L(-1) were higher than 0.99. Precision was always below 20% and accuracy was confirmed by external evaluation. The main advantages of this method are direct injection of sample without preparative procedures and low limits of detection that fulfill the requirements established by the current European regulations governing pesticide detection.
Bacterial Presence in Layered Rock Varnish-Possible Mars Analog?
NASA Astrophysics Data System (ADS)
Krinsley, D.; Rusk, B. G.
2000-08-01
Rock varnish from locations in Death Valley, California; Peru; Antarctica; and Hawaii reveal nanometer scale layering (less than 1 nm to about 75 nm) when studied with transmission electron microscopy (TEM). Parallel layers of clay minerals containing evidence of presumed bacteria were present in all samples. Samples range in age from a few thousand years to perhaps a million years. Diagenesis is relatively limited, as chemical composition is variable, both from top to bottom and along layers in these varnish samples. Also, occasional exotic minerals occur randomly in most varnish sections, and vary in size and hardness, again suggesting relative lack of diagenetic alteration. Additional information can be found in the original extended abstract.
Scalable boson sampling with time-bin encoding using a loop-based architecture.
Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P
2014-09-19
We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.
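For context on what a boson sampler computes: the probability of a collision-free detection pattern is the squared modulus of the permanent of the corresponding submatrix of the interferometer unitary. The sketch below evaluates this classically with Ryser's formula for a small random unitary (mode count, input/output modes, and the QR-based unitary are illustrative assumptions); the classical cost grows exponentially, which is the point of the quantum device.

```python
import numpy as np
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's formula (exponential time)."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Haar-like random interferometer from the QR decomposition of a complex Gaussian
rng = np.random.default_rng(5)
m = 6                                                    # number of modes
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(Z)

# Probability of detecting one photon in each of output modes (0, 2, 4) given
# single photons injected into modes (0, 1, 2): |Perm(U_S)|^2 for that submatrix.
sub = U[np.ix_([0, 2, 4], [0, 1, 2])]
print(abs(permanent(sub)) ** 2)
```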
Water Oxidation Catalysis via Size-Selected Iridium Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halder, Avik; Liu, Cong; Liu, Zhun
The detailed mechanism and efficacy of four-electron electrochemical water oxidation depend critically upon the detailed atomic structure of each catalytic site, which are numerous and diverse in most metal oxide anodes. In order to limit the diversity of sites, arrays of discrete iridium clusters with identical metal atom number (Ir-2, Ir-4, or Ir-8) were deposited in submonolayer coverage on conductive oxide supports, and the electrochemical properties and activity of each were evaluated. Exceptional electroactivity for the oxygen evolving reaction (OER) was observed for all cluster samples in acidic electrolyte. Reproducible cluster-size-dependent trends in redox behavior were also resolved. First-principles computational models of the individual discrete-size clusters allow correlation of catalytic-site structure and multiplicity with redox behavior.
An information capacity limitation of visual short-term memory.
Sewell, David K; Lilburn, Simon D; Smith, Philip L
2014-12-01
Research suggests that visual short-term memory (VSTM) has both an item capacity, of around 4 items, and an information capacity. We characterize the information capacity limits of VSTM using a task in which observers discriminated the orientation of a single probed item in displays consisting of 1, 2, 3, or 4 orthogonally oriented Gabor patch stimuli that were presented in noise for 50 ms, 100 ms, 150 ms, or 200 ms. The observed capacity limitations are well described by a sample-size model, which predicts invariance of ∑_i (d′_i)² across display sizes and linearity of (d′_i)² in display duration. Performance was the same for simultaneously and sequentially presented displays, which implicates VSTM as the locus of the observed invariance and rules out explanations that ascribe it to divided attention or stimulus encoding. The invariance of ∑_i (d′_i)² is predicted by the competitive interaction theory of Smith and Sewell (2013), which attributes it to the normalization of VSTM trace strengths arising from competition among stimuli entering VSTM. PsycINFO Database Record (c) 2014 APA, all rights reserved.
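A back-of-the-envelope rendering of the sample-size model's two predictions, assuming a fixed pool of evidence samples per unit time shared equally among the display items and d′ proportional to the square root of the samples an item receives (the pool rate and scaling constant are arbitrary):

```python
import numpy as np

K_per_ms, k = 10.0, 0.05     # assumed evidence-sample rate and d' scaling constant

def d_prime(m, duration_ms):
    """d' for each of m display items when a fixed sampling rate is shared
    equally among them (sample-size model)."""
    samples_per_item = K_per_ms * duration_ms / m
    return k * np.sqrt(samples_per_item)

for m in (1, 2, 3, 4):
    d = d_prime(m, duration_ms=100)
    print(f"display size {m}: sum of d'^2 = {m * d ** 2:.3f}")   # invariant in m
for T in (50, 100, 150, 200):
    print(f"duration {T} ms: d'^2 = {d_prime(4, T) ** 2:.3f}")   # linear in T
```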
Optical limiting in suspension of detonation nanodiamonds in engine oil
NASA Astrophysics Data System (ADS)
Mikheev, Konstantin G.; Krivenkov, Roman Yu.; Mogileva, Tatyana N.; Puzyr, Alexey P.; Bondar, Vladimir S.; Bulatov, Denis L.; Mikheev, Gennady M.
2017-07-01
The optical limiting (OL) of detonation nanodiamond (DND) suspensions in engine oil was studied over a temperature range of 20°C to 100°C. Oil suspensions were prepared from DNDs with average nanoparticle cluster sizes in hydrosols (D_aver) of 50 and 110 nm. Raman spectroscopy was used to characterize the samples. The OL investigation was carried out by the z-scan technique. The fundamental (1064 nm) and second-harmonic (532 nm) radiation of a passively Q-switched YAG:Nd3+ laser was used as the excitation source. The OL thresholds for both suspensions at 532 and 1064 nm were determined. It is shown that a decrease in the average nanoparticle cluster size, as well as an increase in the wavelength of the incident radiation, leads to an increase in the OL threshold. It is established that the OL performance is not affected by increasing the temperature from 20°C to 100°C. The results show the possibility of using DND suspensions in engine oil as an optical limiter over a wide temperature range.
Small renal size in newborns with spina bifida: possible causes.
Montaldo, Paolo; Montaldo, Luisa; Iossa, Azzurra Concetta; Cennamo, Marina; Caredda, Elisabetta; Del Gado, Roberto
2014-02-01
Previous studies reported that children with neural tube defects, but without any history of intrinsic renal disease, have small kidneys when compared with age-matched standards for renal growth. The aim of this study was to investigate the possible causes of small renal size in children with spina bifida by comparing growth hormone deficiency, physical limitations, and hyperhomocysteinemia. The sample included 187 newborns with spina bifida. Renal size in each patient was assessed using the maximum measurement of renal length, and the measurements were compared using the Sutherland nomogram. According to the results, the sample was divided into two groups: a group of 120 patients with small kidneys (under the third percentile) and a control group of 67 newborns with normal kidney size. Plasma total homocysteine was investigated in the mothers and in their children. Serum insulin-like growth factor-1 (IGF-1) levels were measured and were normal in both groups. Children and mothers with homocysteine levels >10 μmol/l were more than twice as likely to have small kidneys and to give birth to children with small kidneys, respectively, compared with newborns and mothers with homocysteine levels <10 μmol/l. An inverse correlation was also found between the homocysteine levels of the mothers and the kidney sizes of their children (r = -0.6109, P ≤ 0.01). It is highly important for mothers with hyperhomocysteinemia to be educated about the benefits of folate supplementation in order to reduce the risk of small renal size and lower renal function in children.
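The "more than twice as likely" statements are odds ratios; a minimal sketch of the calculation with a Wald 95% confidence interval is below. The 2x2 counts are invented for illustration (only the group totals of 120 and 67 match the abstract), not the study's data.

```python
import numpy as np
from scipy.stats import norm

# Invented 2x2 table: rows = maternal homocysteine >10 / <=10 umol/L,
# columns = newborn kidneys small / normal size.
a, b = 70, 25   # exposed:   small, normal
c, d = 50, 42   # unexposed: small, normal

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # Wald standard error
z = norm.ppf(0.975)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```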
Isermann, D.A.; Sammons, S.M.; Bettoli, P.W.; Churchill, T.N.
2002-01-01
We evaluated the potential effect of minimum size restrictions on crappies Pomoxis spp. in 12 large Tennessee reservoirs. A Beverton-Holt equilibrium yield model was used to predict and compare the response of these fisheries to three minimum size restrictions: 178 mm (i.e., pragmatically, no size limit), 229 mm, and the current statewide limit of 254 mm. The responses of crappie fisheries to size limits differed among reservoirs and varied with rates of conditional natural mortality (CM). Based on model results, crappie fisheries fell into one of three response categories: (1) In some reservoirs (N = 5), 254-mm and 229-mm limits would benefit the fishery in terms of yield if CM were low (30%); the associated declines in the number of crappies harvested would be significant but modest when compared with those in other reservoirs. (2) In other reservoirs (N = 6), little difference in yield existed among size restrictions at low to intermediate rates of CM (30-40%). In these reservoirs, a 229-mm limit was predicted to be a more beneficial regulation than the current 254-mm limit. (3) In the remaining reservoir, Tellico, size limits negatively affected all three harvest statistics. Generally, yield was negatively affected by size limits in all populations at a CM of 50%. The number of crappies reaching 300 mm was increased by size limits in most model scenarios: however, associated declines in the total number of crappies harvested often outweighed the benefits to size structure when CM was 40% or higher. When crappie growth was fast (reaching 254 mm in less than 3 years) and CM was low (30%), size limits were most effective in balancing increases in yield and size structure against declines in the total number of crappies harvested. The variability in predicted size-limit responses observed among Tennessee reservoirs suggests that using a categorical approach to applying size limits to crappie fisheries within a state or region would likely be a more effective management strategy than implementing a single, areawide regulation.
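A numerical yield-per-recruit calculation in the spirit of the Beverton-Holt equilibrium model used here: grow a cohort with von Bertalanffy growth, apply natural mortality throughout and fishing mortality only above the minimum length, and accumulate yield and harvest. All growth, mortality, and length-weight parameters below are illustrative assumptions, not the values fitted for the Tennessee reservoirs.

```python
import numpy as np

def yield_per_recruit(min_len_mm, cm=0.30, cf=0.40, Linf=330.0, K=0.45, t0=-0.2,
                      a=8.8e-6, b=3.1, max_age=8, steps_per_year=12):
    """Numerical Beverton-Holt-style yield per 1000 recruits under a minimum
    length limit.  cm/cf are conditional natural/fishing mortality (annual);
    growth follows von Bertalanffy L(t) = Linf*(1 - exp(-K*(t - t0))) and
    weight (g) = a*L^b with L in mm.  Parameter values are illustrative."""
    M = -np.log(1.0 - cm)                 # instantaneous natural mortality
    F = -np.log(1.0 - cf)                 # instantaneous fishing mortality
    dt = 1.0 / steps_per_year
    n, yield_kg, harvested = 1000.0, 0.0, 0.0
    for t in np.arange(0.5, max_age, dt): # cohort enters at age 0.5
        L = Linf * (1.0 - np.exp(-K * (t - t0)))
        f = F if L >= min_len_mm else 0.0
        z = M + f
        deaths = n * (1.0 - np.exp(-z * dt))
        catch = deaths * (f / z) if z > 0 else 0.0   # Baranov catch share
        yield_kg += catch * a * L ** b / 1000.0      # grams -> kilograms
        harvested += catch
        n -= deaths
    return yield_kg, harvested

for limit in (178, 229, 254):
    y, h = yield_per_recruit(limit)
    print(f"{limit} mm limit: yield ~ {y:.1f} kg, fish harvested ~ {h:.0f} per 1000 recruits")
```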
Intercalated Nanocomposites Based on High-Temperature Superconducting Ceramics and Their Properties
Tonoyan, Anahit; Schiсk, Christoph; Davtyan, Sevan
2009-01-01
High-temperature superconducting (SC) nanocomposites based on SC ceramics and various polymeric binders were prepared. Regardless of the size of the ceramic grains, increasing their content increases the rupture strength and modulus and decreases the limiting deformation, whereas increasing the average ceramic grain size degrades the strength properties. The SC, thermo-chemical, mechanical, and dynamic-mechanical properties of the samples were investigated. The superconducting properties of the polymer-ceramic nanocomposites are explained by intercalation of macromolecule fragments into the interstitial layers of the ceramic grains. This phenomenon leads to a change in the morphological structure of the superconducting nanocomposites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick
This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparing industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.
Goulart, Lorena Athie; de Moraes, Fernando Cruz; Mascaro, Lucia Helena
2016-01-01
Different methods of functionalisation and the influence of multi-walled carbon nanotube size were investigated for the electrochemical determination of bisphenol A. Samples with diameters of 20 to 170 nm were functionalized in 5.0 mol L(-1) HNO3 and in a concentrated sulphonitric solution. The morphological characterisations before and after acid treatment were carried out by scanning electron microscopy and cyclic voltammetry. Both the size and the acid treatment affected the oxidation of bisphenol A. The multi-walled carbon nanotubes with a 20-40 nm diameter improved the method sensitivity and achieved a detection limit for bisphenol A of 84.0 nmol L(-1).
Self-ion irradiation effects on mechanical properties of nanocrystalline zirconium films
Wang, Baoming; Haque, M. A.; Tomar, Vikas; ...
2017-07-13
Zirconium thin films were irradiated at room temperature with an 800 keV Zr+ beam using a 6 MV HVE Tandem accelerator to a damage level of 1.36 displacements per atom. Freestanding tensile specimens, 100 nm thick and with a 10 nm grain size, were tested in-situ inside a transmission electron microscope. Significant grain growth (>300%), texture evolution, and displacement damage defects were observed. Stress-strain profiles were mostly linear elastic below a 20 nm grain size, but above this limit the samples demonstrated yielding and strain hardening. Experimental results support the hypothesis that grain boundaries in nanocrystalline metals act as very effective defect sinks.
Development and experimental study of large size composite plasma immersion ion implantation device
NASA Astrophysics Data System (ADS)
Falun, SONG; Fei, LI; Mingdong, ZHU; Langping, WANG; Beizhen, ZHANG; Haitao, GONG; Yanqing, GAN; Xiao, JIN
2018-01-01
Plasma immersion ion implantation (PIII) overcomes the direct-exposure limitation of traditional beam-line ion implantation and is suitable for treating complex work-pieces of large size. PIII technology is often used for surface modification of metals, plastics and ceramics. Motivated by the requirement for surface modification of large-size insulating materials, a composite full-directional PIII device based on an RF plasma source and a metal plasma source is developed in this paper. The device can perform not only gas ion implantation but also metal ion implantation, as well as combined gas and metal ion implantation. It has two metal plasma sources, each containing three cathodes, and the cathodes can be switched freely without breaking vacuum. The volume of the vacuum chamber is about 0.94 m^3, and the base pressure is about 5 × 10^-4 Pa. The RF plasma density in the homogeneous region is about 10^9 cm^-3, and the plasma density in the ion implantation region is about 10^10 cm^-3. The device can be used for PIII treatment of large-size samples, with sample diameters up to 400 mm. The experimental results show that the plasma discharge in the device is stable and can run for a long time, and the device is suitable for surface treatment of insulating materials.
Sandstrom, Mark W.; Wydoski, Duane S.; Schroeder, Michael P.; Zamboni, Jana L.; Foreman, William T.
1992-01-01
A method for the isolation of organonitrogen herbicides from natural water samples using solid-phase extraction and analysis by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring is described. Water samples are filtered to remove suspended particulate matter and then are pumped through disposable solid-phase extraction cartridges containing octadecyl-bonded porous silica to retain the herbicides. The cartridges are dried using carbon dioxide, and the adsorbed herbicides are removed from the cartridges by elution with 1.8 milliliters of hexane-isopropanol (3:1). Extracts of the eluants are analyzed by capillary-column gas chromatography/mass spectrometry with selected-ion monitoring of at least three characteristic ions. The method detection limits depend on the sample matrix and the particular herbicide. The method detection limits, based on a 100-milliliter sample size, range from 0.02 to 0.25 microgram per liter. Recoveries averaged 80 to 115 percent for the 23 herbicides and 2 metabolites in 1 reagent-water and 2 natural-water samples fortified at levels of 0.2 and 2.0 micrograms per liter.
Benson, Tony; Lavelle, Fiona; Bucher, Tamara; McCloat, Amanda; Mooney, Elaine; Egan, Bernadette; Collins, Clare E; Dean, Moira
2018-05-22
Nutrition and health claims on foods can help consumers make healthier food choices. However, claims may have a 'halo' effect, influencing consumer perceptions of foods and increasing consumption. Evidence for these effects is typically demonstrated in experiments with small samples, limiting generalisability. The current study aimed to overcome this limitation through the use of a nationally representative survey. In a cross-sectional survey of 1039 adults across the island of Ireland, respondents were presented with three different claims (nutrition claim = "Low in fat"; health claim = "With plant sterols. Proven to lower cholesterol"; satiety claim = "Fuller for longer") on four different foods (cereal, soup, lasagne, and yoghurt). Participants answered questions on perceived healthiness, tastiness, and fillingness of the products with different claims and also selected a portion size they would consume. Claims influenced fillingness perceptions of some of the foods. However, there was little influence of claims on tastiness or healthiness perceptions or the portion size selected. Psychological factors such as consumers' familiarity with foods carrying claims and belief in the claims were the most consistent predictors of perceptions and portion size selection. Future research should identify additional consumer factors that may moderate the relationships between claims, perceptions, and consumption.
NASA Astrophysics Data System (ADS)
Yamada, T.; Ide, S.
2007-12-01
Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly the final size of an earthquake can be estimated after the ground motion is first observed. This is closely related to whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes, whose source durations are longer than TW, the values of τpmax have an upper limit that depends on TW. On the other hand, the values for smaller earthquakes have a lower limit that is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
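As an illustration of the kind of calculation involved, the sketch below implements the widely cited recursive predominant-period estimate and its maximum over a time window TW; the smoothing constant, window length and test signal are assumptions chosen for demonstration, not values from this study.

```python
import numpy as np

def tau_p_max(x, dt, alpha=0.99, window_s=4.0):
    """Recursive predominant-period estimate (tau_p) and its maximum over a
    time window TW after the P-wave arrival (Nakamura/Allen-Kanamori style).
    x: ground-motion samples, dt: sampling interval in seconds;
    alpha and window_s are illustrative choices."""
    dxdt = np.gradient(x, dt)                # time derivative of the signal
    X = D = 0.0
    tau_p = np.zeros_like(x)
    for i in range(len(x)):
        X = alpha * X + x[i] ** 2            # smoothed signal power
        D = alpha * D + dxdt[i] ** 2         # smoothed derivative power
        tau_p[i] = 2.0 * np.pi * np.sqrt(X / D) if D > 0 else 0.0
    n_win = int(window_s / dt)               # number of samples inside TW
    return tau_p[:n_win].max()

# A 10 Hz sinusoid sampled at 100 Hz gives tau_p_max close to 0.1 s
t = np.arange(0.0, 4.0, 0.01)
print(tau_p_max(np.sin(2 * np.pi * 10 * t), dt=0.01))
```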
Optical design considerations when imaging the fundus with an adaptive optics correction
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.
2008-06-01
Adaptive Optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO), which are analogous to confocal scanning laser microscopes (CSLM), with the advantages of real-time imaging, increased image contrast, resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate for the human observer and would also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We will outline optical approaches to overcome some of these tradeoffs and still allow detection of the earliest changes in the fundus in diabetic retinopathy.
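To make the field-size/data-rate tradeoff concrete, the sketch below estimates the pixel count needed for Nyquist sampling of a diffraction-limited retinal image and the resulting pixel rate; the pupil diameter, wavelength, field sizes and frame rate are illustrative assumptions, not the authors' design values.

```python
import math

def aoslo_pixel_budget(field_deg, frame_hz, pupil_mm=6.0, wavelength_nm=840.0):
    """Rough pixel count and pixel rate needed to keep Nyquist sampling of a
    diffraction-limited retinal image over a square field of view.
    All parameter values here are assumptions for illustration."""
    # Rayleigh angular resolution for the assumed pupil and wavelength
    res_rad = 1.22 * (wavelength_nm * 1e-9) / (pupil_mm * 1e-3)
    res_deg = math.degrees(res_rad)
    pixels_per_side = math.ceil(2 * field_deg / res_deg)   # 2 px per resolution element
    pixel_rate = pixels_per_side ** 2 * frame_hz            # pixels per second
    return pixels_per_side, pixel_rate

# Doubling the field roughly quadruples the required pixel rate
print(aoslo_pixel_budget(1.5, 30))
print(aoslo_pixel_budget(3.0, 30))
```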
Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay
2011-07-15
Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
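A minimal sketch of the benchmark-dose idea: fit one quantal dose-response model by maximum likelihood and solve for the dose giving 10% extra risk over background. The data, the log-logistic form and the starting values are made up for illustration, and the sketch does not reproduce BMDS or its BMDL confidence-limit calculation.

```python
import numpy as np
from scipy.optimize import brentq, minimize

# Hypothetical quantal toxicology data: dose, animals tested, animals responding
dose = np.array([0.0, 10.0, 30.0, 100.0])
n    = np.array([50,  50,   50,   50])
resp = np.array([2,   5,    15,   38])

def loglogistic(d, bg, a, b):
    """Background plus log-logistic dose-response: P(response | dose d)."""
    logd = np.log(np.maximum(d, 1e-12))          # guard against log(0)
    p = bg + (1.0 - bg) / (1.0 + np.exp(-(a + b * logd)))
    return np.clip(p, 1e-9, 1.0 - 1e-9)

def neg_loglik(theta):
    p = loglogistic(dose, *theta)
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1.0 - p))

fit = minimize(neg_loglik, x0=[0.05, -5.0, 1.0], method="Nelder-Mead")
bg, a, b = fit.x

# Benchmark dose: dose giving 10% extra risk over the fitted background
bmr = 0.10
bmd = brentq(lambda d: (loglogistic(d, bg, a, b) - bg) / (1.0 - bg) - bmr,
             1e-6, 1e4)
print(f"BMD10 ~ {bmd:.1f} (same units as dose)")
```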
Topiramate in the treatment of substance related disorders: a critical review of the literature
Shinn, Ann K.; Greenfield, Shelly F.
2013-01-01
Objective: To critically review the literature on topiramate in the treatment of substance related disorders. Data Sources: A PubMed search of human studies published in English through January 2009. Study Selection: 26 articles were identified and reviewed; these studies examined topiramate in disorders related to alcohol, nicotine, cocaine, methamphetamine, opioids, ecstasy, and benzodiazepines. Data Extraction: Study design, sample size, topiramate dose and duration, and study outcomes were reviewed. Data Synthesis: There is compelling evidence for the efficacy of topiramate in the treatment of alcohol dependence. Two trials show trends for topiramate's superiority over oral naltrexone in alcohol dependence, while one trial suggests topiramate is inferior to disulfiram. Despite suggestive animal models, evidence for topiramate in treating alcohol withdrawal in humans is slim. Studies of topiramate in nicotine dependence show mixed results. Human laboratory studies that used acute topiramate dosing show that topiramate actually enhances the pleasurable effects of both nicotine and methamphetamine. Evidence for topiramate in the treatment of cocaine dependence is promising, but limited by small sample size. The data on opioids, benzodiazepines, and ecstasy are sparse. Conclusion: Topiramate is efficacious for the treatment of alcohol dependence, but side effects may limit widespread use. While topiramate's unique pharmacodynamic profile offers a promising theoretical rationale for use across multiple substance related disorders, heterogeneity both across and within these disorders limits topiramate's broad applicability in treating substance related disorders. Recommendations for future research include exploration of genetic variants for more targeted pharmacotherapies. PMID:20361908
Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin
2016-05-01
The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. PCR products were then generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in the targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA, as well as from STR loci that are analyzed with large-sized amplicons in the CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-sized amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.
2015-01-01
Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In-Situ Scattering and Transmissometry (LISST) for measuring volumetric SSCs and PSDs ranging from 1.8-415 µm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 datasets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-12 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38 – 69,264 km2). An unrealistically low computed effective density (mass SSC / volumetric SSC) of 1.24 g/ml (95% confidence interval: 1.05-1.45 g/ml) provided the best-fit value (R2 = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC for over 2 orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of SSC that can be measured by the LISST-SL) despite being substantially lower than the sediment particle density of 2.67 g/ml (range: 2.56-2.87 g/ml, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measurable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.
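The volumetric-to-mass conversion described above reduces to a single multiplication; the sketch below applies the best-fit effective density quoted in the abstract, with the caveat that the value is dataset-specific rather than universal.

```python
def mass_ssc_mg_per_L(volumetric_ssc_uL_per_L, effective_density_g_per_mL=1.24):
    """Convert a LISST-style volumetric SSC (microliters of sediment per liter)
    to a mass SSC (mg/L) using an effective density (g/mL, i.e. mg/uL).
    The 1.24 g/mL default is the best-fit value quoted in the abstract and
    should be treated as site- and dataset-specific."""
    return volumetric_ssc_uL_per_L * effective_density_g_per_mL

# e.g. a volumetric SSC of 100 uL/L corresponds to roughly 124 mg/L
print(mass_ssc_mg_per_L(100.0))
```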
An In Situ Method for Sizing Insoluble Residues in Precipitation and Other Aqueous Samples
Axson, Jessica L.; Creamean, Jessie M.; Bondy, Amy L.; Capracotta, Sonja S.; Warner, Katy Y.; Ault, Andrew P.
2015-01-01
Particles are frequently incorporated into clouds or precipitation, influencing climate by acting as cloud condensation or ice nuclei, taking up coatings during cloud processing, and removing species through wet deposition. Many of these particles, particularly ice nuclei, can remain suspended within cloud droplets/crystals as insoluble residues. While previous studies have measured the soluble or bulk mass of species within clouds and precipitation, no studies to date have determined the number concentration and size distribution of insoluble residues in precipitation or cloud water using in situ methods. Herein, for the first time we demonstrate that Nanoparticle Tracking Analysis (NTA) is a powerful in situ method for determining the total number concentration, number size distribution, and surface area distribution of insoluble residues in precipitation, in both rain and melted snow. The method uses 500 μL or less of liquid sample and does not require sample modification. Number concentrations for the insoluble residues in aqueous precipitation samples ranged from 2.0-3.0(±0.3)×10^8 particles cm^-3, while surface area ranged from 1.8(±0.7)-3.2(±1.0)×10^7 μm^2 cm^-3. Number size distributions peaked between 133-150 nm, with both single and multi-modal character, while surface area distributions peaked between 173-270 nm. Comparison with electron microscopy of particles up to 10 μm shows that, by number, >97% of residues are <1 μm in diameter, the upper limit of the NTA. The range of concentration and distribution properties indicates that insoluble residue properties vary with ambient aerosol concentrations, cloud microphysics, and meteorological dynamics. NTA has great potential for studying the role that insoluble residues play in critical atmospheric processes. PMID:25705069
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds are horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, the sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (>15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008 and compare the new CDR results with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets, because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
Viegas, Carla; Faria, Tiago; Monteiro, Ana; Caetano, Liliana Aranha; Carolino, Elisabete; Quintal Gomes, Anita; Viegas, Susana
2017-12-27
Swine production has been associated with health risks and workers' symptoms. In Portugal, as in other countries, large-scale swine production involves several activities in the swine environment that require direct intervention, increasing workers' exposure to organic dust. This study describes an updated protocol for the assessment of occupational exposure to organic dust, to unveil an accurate scenario regarding occupational and environmental risks for workers' health. The particle size distribution was characterized regarding mass concentration in five different size ranges (PM0.5, PM1, PM2.5, PM5, PM10). Bioburden was assessed, by both active and passive sampling methods, in air, on surfaces, floor covering and feed samples, and analyzed through culture-based methods and qPCR. Smaller size range particles exhibited the highest counts, with indoor particles showing higher particle counts and mass concentration than outdoor particles. The limit values suggested for total bacterial load were surpassed in 35.7% (10 out of 28) of samples and for fungi in 65.5% (19 out of 29) of samples. Within the Aspergillus genus, section Circumdati was the most prevalent (55%) on malt extract agar (MEA) and section Versicolores the most identified (50%) on dichloran glycerol (DG18). The results provide a broad characterization of occupational exposure to organic dust on swine farms and are useful for policy-makers and stakeholders acting to improve workers' safety. The methods of sampling and analysis employed were the most suitable considering the purpose of the study and should be adopted as a protocol to be followed in future exposure assessments in this occupational environment.
A new facility for non-destructive assay using a 252Cf source.
Stevanato, L; Caldogno, M; Dima, R; Fabris, D; Hao, Xin; Lunardon, M; Moretto, S; Nebbia, G; Pesente, S; Pino, F; Sajo-Bohus, L; Viesti, G
2013-03-01
A new laboratory facility for non-destructive analysis (NDA) using a time-tagged (252)Cf source is presented. The system is designed to analyze samples with a maximum size of about 20 × 25 cm(2), with material recognition obtained by simultaneously measuring the total and energy-dependent transmission of neutrons and gamma rays. The technical characteristics and performance of the NDA system are presented, exploring also the limits due to sample thickness. Some recent applications in the field of cultural heritage are presented. Copyright © 2012 Elsevier Ltd. All rights reserved.
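As a rough illustration of how combined neutron and gamma transmission can point to a material independently of its thickness (Beer-Lambert attenuation, ratio of attenuation coefficients), consider the sketch below; the coefficient values and the three candidate materials are assumptions for demonstration and are not taken from the facility described.

```python
import numpy as np

# Illustrative mass attenuation/removal coefficients (cm^2/g); not from the paper.
MATERIALS = {
    # (gamma mu/rho near 1 MeV, fast-neutron removal coefficient)
    "water":    (0.0707, 0.103),
    "aluminum": (0.0614, 0.029),
    "iron":     (0.0600, 0.021),
}

def transmissions(material, areal_density_g_cm2):
    """Expected gamma and neutron transmission through a slab (Beer-Lambert)."""
    mu_g, mu_n = MATERIALS[material]
    return (np.exp(-mu_g * areal_density_g_cm2),
            np.exp(-mu_n * areal_density_g_cm2))

def identify(t_gamma, t_neutron):
    """Pick the material whose neutron/gamma attenuation ratio best matches the
    measured pair; the ratio cancels the (unknown) sample areal density."""
    measured_ratio = np.log(t_neutron) / np.log(t_gamma)
    return min(MATERIALS,
               key=lambda m: abs(MATERIALS[m][1] / MATERIALS[m][0] - measured_ratio))

tg, tn = transmissions("water", 5.0)   # 5 g/cm^2 of water
print(identify(tg, tn))                # -> "water"
```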
Machine Learning for Big Data: A Study to Understand Limits at Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.; Del-Castillo-Negrete, Carlos Emilio
This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) the accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning; (ii) the richness of the feature space dictates performance, both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort for feature computation and training; (iv) the accuracy of the learning algorithms drops significantly with multi-class learners training on the same feature matrix; and (v) learning algorithms perform well when categories in labeled data are independent (i.e., no relationship or hierarchy exists among categories).
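A minimal sketch of the first question (accuracy versus training-set size at fixed dimensionality), using scikit-learn on synthetic data; the dataset, learner and sample sizes are stand-ins, not those used in the report.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Accuracy vs. training-set size for a fixed feature dimensionality (50 features).
X, y = make_classification(n_samples=60_000, n_features=50, n_informative=20,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10_000,
                                                    random_state=0)
for n in [100, 1_000, 10_000, 50_000]:
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 3))
# Accuracy typically climbs quickly and then saturates, mirroring observation (i).
```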
Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data
Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.
2014-01-01
The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
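For readers unfamiliar with censored-data estimation, the sketch below shows the ML approach for a lognormal exposure distribution with a single LOD, returning the GM, GSD, AM and 95th percentile; the simulated data and starting values are illustrative, and the β-substitution and K-M estimators compared in the study are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated exposures: lognormal with GM = 1 and GSD = 3, censored at a single LOD
y = rng.lognormal(mean=0.0, sigma=np.log(3.0), size=50)
lod = 1.0
detected = y >= lod

def neg_loglik(theta):
    """Censored lognormal likelihood: density for detects, CDF at the LOD for non-detects."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_det = norm.logpdf(np.log(y[detected]), mu, sigma) - np.log(y[detected])
    ll_cen = (~detected).sum() * norm.logcdf((np.log(lod) - mu) / sigma)
    return -(ll_det.sum() + ll_cen)

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu, sigma = fit.x[0], np.exp(fit.x[1])
gm, gsd = np.exp(mu), np.exp(sigma)
am = np.exp(mu + sigma**2 / 2)            # lognormal arithmetic mean
x95 = np.exp(mu + 1.645 * sigma)          # 95th percentile
print(f"GM={gm:.2f}  GSD={gsd:.2f}  AM={am:.2f}  X0.95={x95:.2f}")
```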
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
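The dependence of the planned sample size on the assumed effect size can be made concrete with the standard fixed-design formula for a two-sample comparison; the sketch below is a generic illustration and does not implement the optimality criterion or the adaptive designs discussed in the paper.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample z-test (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2.
    Illustrates how strongly the planned size depends on the assumed effect."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# A 'conservative' (small) assumed effect vs. an 'optimistic' (large) one
print(n_per_arm(delta=0.3, sigma=1.0))   # ~175 per arm
print(n_per_arm(delta=0.5, sigma=1.0))   # ~63 per arm
```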
Cryo-tomography Tilt-series Alignment with Consideration of the Beam-induced Sample Motion
Fernandez, Jose-Jesus; Li, Sam; Bharat, Tanmay A. M.; Agard, David A.
2018-01-01
Recent evidence suggests that the beam-induced motion of the sample during tilt-series acquisition is a major resolution-limiting factor in electron cryo-tomography (cryoET). It causes suboptimal tilt-series alignment and thus deterioration of the reconstruction quality. Here we present a novel approach to tilt-series alignment and tomographic reconstruction that considers the beam-induced sample motion through the tilt-series. It extends the standard fiducial-based alignment approach in cryoET by introducing quadratic polynomials to model the sample motion. The model can be used during reconstruction to yield a motion-compensated tomogram. We evaluated our method on various datasets with different sample sizes. The results demonstrate that our method could be a useful tool to improve the quality of tomograms and the resolution in cryoET. PMID:29410148
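A generic sketch of the underlying idea: fit a 2D quadratic polynomial to fiducial residuals in one tilt image as a smooth motion model. The synthetic data and the exact parameterization are assumptions; this is not the authors' implementation.

```python
import numpy as np

def fit_quadratic_motion(xy, residuals):
    """Least-squares fit of a 2D quadratic surface
    r(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to per-fiducial residuals observed in one tilt image."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs

def predict_motion(coeffs, xy):
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return A @ coeffs

# Synthetic example: 20 fiducials with a smooth quadratic drift plus noise
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(20, 2))
true = 0.1 + 0.05 * xy[:, 0] - 0.03 * xy[:, 1] ** 2
obs = true + rng.normal(0, 0.01, size=20)
c = fit_quadratic_motion(xy, obs)
print(np.abs(predict_motion(c, xy) - true).max())   # small residual
```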
Instability improvement of the subgrade soils by lime addition at Borg El-Arab, Alexandria, Egypt
NASA Astrophysics Data System (ADS)
El Shinawi, A.
2017-06-01
Subgrade soils can affect the stability of any construction; instability problems were found at Borg El-Arab, Alexandria, Egypt. This paper investigates the geoengineering properties of lime-treated subgrade soils at Borg El-Arab. Basic laboratory tests, such as water content, wet and dry density, grain size, specific gravity and Atterberg limits, were performed on twenty-five samples. Moisture-density (compaction), California Bearing Ratio (CBR) and Unconfined Compression Strength (UCS) tests were conducted on treated and natural soils. The measured geotechnical parameters of the treated soil show that 6% lime is sufficient to stabilize the subgrade soils. It was found that by adding lime, the grain-size distribution of the samples shifted to the coarser side and the Atterberg limits of the treated soil samples decreased, improving soil stability. The subgrade soils improved because fine particles were bonded and cemented together into larger aggregates, reducing the plasticity index and increasing soil strength. Environmental scanning electron microscopy (ESEM) points to the presence of newly formed aggregated cementing materials, which reduce the porosity and increase the strength with long-term curing. Consequently, the soil-lime mixture has acceptable mechanical characteristics, forming a high-strength base or sub-base material, and is considered a suitable subgrade for stabilization and mitigation of the instability problems found at Borg El-Arab, Egypt.
Yang, Zheng; Hou, Xiandeng; Jones, Bradley T
2003-03-10
A simple, particle size-independent spectrometric method has been developed for the multi-element determination of wear metals in used engine oil. A small aliquot (0.5 ml) of an acid-digested oil sample is spotted onto a C-18 solid phase extraction disk to form a uniform thin film. The dried disk is then analyzed directly by energy dispersive X-ray fluorescence spectrometry. This technique provides a homogeneous and reproducible sample surface to the instrument, thus overcoming the typical problems associated with uneven particle size distribution and sedimentation. As a result, the method provides higher precision and accuracy than conventional methods. Furthermore, the disk sample may be stored and re-analyzed or extracted at a later date. The signals arising from the spotted disks, and the calibration curves constructed from them, are stable for at least 2 months. The limits of detection for Fe, Cu, Zn, Pb, and Cr are 5, 1, 4, 2, and 4 microg g(-1), respectively. Recoveries of these elements from spiked oil samples range from 92 to 110%. The analysis of two standard reference materials and a used oil sample produced results comparable to those found by inductively coupled plasma atomic emission spectrometry.
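The calibration and detection-limit logic mentioned above is conventional least-squares work; the sketch below fits a linear calibration and applies the 3-sigma blank criterion, using made-up intensities rather than the paper's data.

```python
import numpy as np

# Hypothetical calibration data for one element: concentration (ug/g) vs.
# background-corrected XRF intensity; values are illustrative only.
conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
intensity = np.array([0.8, 21.5, 52.0, 103.0, 208.0])

slope, intercept = np.polyfit(conc, intensity, 1)   # linear calibration curve

# Detection limit from the 3-sigma criterion using replicate blank readings
blank_replicates = np.array([0.7, 0.9, 0.8, 1.0, 0.6, 0.8])
lod = 3 * blank_replicates.std(ddof=1) / slope
print(f"slope={slope:.2f}, LOD ~ {lod:.2f} ug/g")
```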
Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples
Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry
2015-01-01
With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this paper, we therefore explore the use of an additional method – non-destructive 3D X-ray micro-Computed Tomography (μCT) – to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data in this study by a Hessian-based fracture filtering routine and can successfully extract porosity, fracture aperture, fracture density and fracture orientations – in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that although there are limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other methods can therefore be a powerful approach in microstructural analysis of reservoir rocks, especially when applying the concepts that we present (on a small set of samples) in a larger study, in an automated and standardised manner. PMID:26549935
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
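A minimal Monte Carlo sketch in the spirit of the study: repeatedly simulate a small triaxial data set from a chosen (σc, m) pair, refit the Hoek-Brown criterion, and examine the spread of the estimates. The noise level, confining pressures and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(sig3, sig_c, m):
    """Hoek-Brown criterion for intact rock: sig1 = sig3 + sig_c*sqrt(m*sig3/sig_c + 1)."""
    return sig3 + sig_c * np.sqrt(np.clip(m * sig3 / sig_c + 1.0, 0.0, None))

rng = np.random.default_rng(42)
true_sig_c, true_m = 100.0, 10.0                       # MPa; an illustrative "rock type"
sig3_levels = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # a small triaxial program

estimates = []
for _ in range(2000):                                  # repeated small samples
    noise = rng.normal(0.0, 10.0, size=sig3_levels.size)   # strength scatter, MPa
    sig1 = hoek_brown(sig3_levels, true_sig_c, true_m) + noise
    try:
        popt, _ = curve_fit(hoek_brown, sig3_levels, sig1, p0=[80.0, 8.0], maxfev=5000)
        estimates.append(popt)
    except RuntimeError:
        continue                                       # occasional non-convergence

est = np.array(estimates)
lo_c, hi_c = np.percentile(est[:, 0], [2.5, 97.5])
lo_m, hi_m = np.percentile(est[:, 1], [2.5, 97.5])
print(f"sig_c: mean {est[:, 0].mean():.1f} MPa, 95% range {lo_c:.1f}-{hi_c:.1f}")
print(f"m    : mean {est[:, 1].mean():.1f},     95% range {lo_m:.1f}-{hi_m:.1f}")
```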
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Life history dependent morphometric variation in stream-dwelling Atlantic salmon
Letcher, B.H.
2003-01-01
The time course of morphometric variation among life histories for stream-dwelling Atlantic salmon (Salmo salar L.) parr (age-0+ to age-2+) was analyzed. Possible life histories were combinations of parr maturity status in the autumn (mature or immature) and age at outmigration (smolt at age-2+ or later age). Actual life histories expressed with enough fish for analysis in the 1997 cohort were immature/age-2+ smolt, mature/age-2+ smolt, and mature/age-2+ non-smolt. Tagged fish were assigned to one of the three life histories and digital pictures from the field were analyzed using landmark-based geometric morphometrics. Results indicated that successful grouping of fish according to life history varied with fish age, but that fish could be grouped before the actual expression of the life histories. By March (age-1+), fish were successfully grouped using a descriptive discriminant function and successful assignment ranged from 84 to 97% for the remainder of stream residence. A jackknife of the discriminant function revealed an average life history prediction success of 67% from age-1+ summer to smolting. Low sample numbers for one of the life histories may have limited prediction success. A MANOVA on the shape descriptors (relative warps) also indicated significant differences in shape among life histories from age-1+ summer through to smolting. Across all samples, shape varied significantly with size. Within samples, shape did not vary significantly with size for samples from December (age-0+) to May (age-1+). During the age-1+ summer however, shape varied significantly with size, but the relationship between shape and size was not different among life histories. In the autumn (age-1+) and winter (age-2+), life history differences explained a significant portion of the change in shape with size. Life history dependent morphometric variation may be useful to indicate the timing of early expressions of life history variation and as a tool to explore temporal and spatial variation in life history expression.
Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples
NASA Astrophysics Data System (ADS)
Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut
1993-03-01
The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems into the LIBS analysis. Some of them are related to the particular distribution of contaminants in the grained samples. Other problems are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up for the various parameters involved. The understanding of these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been constructed. The detection limits are shown to be usually below the concentration limits set by recent regulations.
Wang, Chuji; Pan, Yong-Le; James, Deryck; Wetmore, Alan E; Redding, Brandon
2014-04-11
We report a novel atmospheric aerosol characterization technique, in which dual wavelength UV laser induced fluorescence (LIF) spectrometry marries an eight-stage rotating drum impactor (RDI), namely UV-LIF-RDI, to achieve size- and time-resolved analysis of aerosol particles on-strip. The UV-LIF-RDI technique measured LIF spectra via direct laser beam illumination onto the particles that were impacted on a RDI strip with a spatial resolution of 1.2 mm, equivalent to an averaged time resolution in the aerosol sampling of 3.6 h. Excited by a 263 nm or 351 nm laser, more than 2000 LIF spectra within a 3-week aerosol collection time period were obtained from the eight individual RDI strips that collected particles in eight different sizes ranging from 0.09 to 10 μm in Djibouti. Based on the known fluorescence database from atmospheric aerosols in the US, the LIF spectra obtained from the Djibouti aerosol samples were found to be dominated by fluorescence clusters 2, 5, and 8 (peaked at 330, 370, and 475 nm) when excited at 263 nm and by fluorescence clusters 1, 2, 5, and 6 (peaked at 390 and 460 nm) when excited at 351 nm. Size- and time-dependent variations of the fluorescence spectra revealed some size and time evolution behavior of organic and biological aerosols from the atmosphere in Djibouti. Moreover, this analytical technique could locate the possible sources and chemical compositions contributing to these fluorescence clusters. Advantages, limitations, and future developments of this new aerosol analysis technique are also discussed. Published by Elsevier B.V.
An Open-Source Storage Solution for Cryo-Electron Microscopy Samples.
Ultee, Eveline; Schenkel, Fred; Yang, Wen; Brenzinger, Susanne; Depelteau, Jamie S; Briegel, Ariane
2018-02-01
Cryo-electron microscopy (cryo-EM) enables the study of biological structures in situ in great detail and the solution of protein structures at Ångstrom-level resolution. Due to recent advances in instrumentation and data processing, the field of cryo-EM is rapidly growing. Access to facilities and national centers that house the state-of-the-art microscopes is limited due to the ever-rising demand, resulting in long wait times between sample preparation and data acquisition. To improve sample storage, we have developed a cryo-storage system with an efficient, high storage capacity that enables sample storage in a highly organized manner. This system is simple to use, cost-effective and easily adaptable for any type of grid storage box and dewar and any size cryo-EM laboratory.
Cleaning of nanopillar templates for nanoparticle collection using PDMS
NASA Astrophysics Data System (ADS)
Merzsch, S.; Wasisto, H. S.; Waag, A.; Kirsch, I.; Uhde, E.; Salthammer, T.; Peiner, E.
2011-05-01
Nanoparticles are easily attracted by surfaces. This sticking behavior makes it difficult to clean contaminated samples. Some complex approaches have already shown efficiencies in the range of 90%. However, a simple and cost efficient method was still missing. A commonly used silicone for soft lithography, PDMS, is able to mold a given surface. This property was used to cover surface-bonded particles from all other sides. After hardening the PDMS, particles are still embedded. A separation of silicone and sample disjoins also the particles from the surface. After this procedure, samples are clean again. This method was first tested with carbon particles on Si surfaces and Si pillar samples with aspect ratios up to 10. Experiments were done using 2 inch wafers, which, however, is not a size limitation for this method.
NASA Astrophysics Data System (ADS)
Yonatan Mulushoa, S.; Murali, N.; Tulu Wegayehu, M.; Margarette, S. J.; Samatha, K.
2018-03-01
Cu-Cr substituted magnesium ferrite materials (Mg1-xCuxCrxFe2-xO4 with x = 0.0-0.7) have been synthesized by the solid-state reaction method. XRD analysis revealed that the prepared samples are single-phase cubic spinels with a face-centered cubic structure. A significant decrease of ∼41.15 nm in particle size is noted in response to the increase in Cu-Cr substitution level. The room-temperature resistivity increases gradually from 0.553 × 10^5 Ω cm (x = 0.0) to 0.105 × 10^8 Ω cm (x = 0.7). The temperature-dependent DC electrical resistivity of all the samples exhibits semiconductor-like behavior. Cu-Cr doped materials can therefore be suitable for limiting eddy current losses. VSM results show that both pure and doped magnesium ferrite particles exhibit soft ferrimagnetic behavior at room temperature. The saturation magnetization of the samples decreases from 34.5214 emu/g (x = 0.0) to 18.98 emu/g (x = 0.7). Saturation magnetization, remanence and coercivity all decrease with doping, which may be due to the increase in grain size.
Low field magnetocaloric effect in bulk and ribbon alloy La(Fe0.88Si0.12)13
NASA Astrophysics Data System (ADS)
Vuong, Van-Hiep; Do-Thi, Kim-Anh; Nguyen, Duy-Thien; Nguyen, Quang-Hoa; Hoang, Nam-Nhat
2018-03-01
Low-field magnetocaloric effects in itinerant metamagnetic materials are at the core of magnetic cooling applications. This work reports the magnetocaloric responses obtained at 1.35 T for the silicon-doped iron-based alloy La(Fe0.88Si0.12)13 in bulk and ribbon form. Both samples possess the same symmetry but different crystallite sizes and lattice parameters. The ribbon sample shows a larger maximum entropy change (nearly 8.5 times larger) and a higher Curie temperature (5 K higher) than the bulk sample. The relative cooling power obtained for the ribbon is also larger and very promising for applications (RCP = 153 J/kg versus 25.2 J/kg for the bulk). The origin of the observed effect is attributed to a negative magnetovolume effect in the ribbon structure with limited crystallization, caused by the rapid cooling during preparation, which induced a smaller crystallite size and a larger lattice constant together with an overall weaker local crystal field.
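The relative cooling power quoted above is commonly evaluated as the maximum entropy change times the full width at half maximum of the ΔS(T) peak; the sketch below computes it from a synthetic curve, not from the measured La(Fe,Si)13 data.

```python
import numpy as np

def relative_cooling_power(T, dS):
    """RCP = |dS_max| * delta_T_FWHM, computed from a magnetic entropy change
    curve dS(T). Generic illustration; the values below are made up."""
    dS = np.abs(np.asarray(dS, dtype=float))
    i_max = dS.argmax()
    inside = np.where(dS >= dS[i_max] / 2.0)[0]   # indices inside the FWHM
    fwhm = T[inside[-1]] - T[inside[0]]
    return dS[i_max] * fwhm

T = np.arange(170.0, 231.0, 2.0)                  # K
dS = 6.0 * np.exp(-((T - 200.0) / 12.0) ** 2)     # J/(kg K), synthetic peak
print(relative_cooling_power(T, dS))              # ~ dS_max * FWHM in J/kg
```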
The social class gradient in health in Spain and the health status of the Spanish Roma.
La Parra Casado, Daniel; Gil González, Diana; de la Torre Esteve, María
2016-10-01
To determine the social class gradient in health in the general Spanish population and the health status of the Spanish Roma. The National Health Survey of Spanish Roma 2006 (sample size: 993 people; average age: 33.6 years; 53.1% women) and the National Health Surveys for Spain 2003 (sample size: 21,650 people; average age: 45.5 years; 51.2% women) and 2006 (sample size: 29,478 people; average age: 46 years; 50.7% women) are compared. Several indicators were chosen: self-perceived health, activity limitation, chronic diseases, hearing and sight problems, caries, and obesity. Analysis was based on age-standardised rates and logistic regression models. According to most indicators, Roma health is worse than that of social class IV-V (manual workers). Some indicators show a remarkable difference between Roma and social class IV-V: experiencing three or more health problems, sight problems, and caries, in both sexes, and hearing problems and obesity, in women. Roma people occupy an extreme position on the social gradient in health, a situation of extreme health inequality.
Luckwell, Jacquelynn; Denniff, Philip; Capper, Stephen; Michael, Paul; Spooner, Neil; Mallender, Philip; Johnson, Barry; Clegg, Sarah; Green, Mark; Ahmad, Sheelan; Woodford, Lynsey
2013-11-01
To ensure that PK data generated from DBS samples are of the highest quality, it is important that the paper substrate is uniform and does not unduly contribute to variability. This study investigated any within and between lot variations for four cellulose paper types: Whatman™ FTA(®) DMPK-A, -B and -C, and 903(®) (GE Healthcare, Buckinghamshire, UK). The substrates were tested to demonstrate manufacturing reproducibility (thickness, weight, chemical coating concentration) and its effect on the size of the DBS produced, and the quantitative data derived from the bioanalysis of human DBS samples containing six compounds of varying physicochemical properties. Within and between lot variations in paper thickness, mass and chemical coating concentration were within acceptable manufacturing limits. No variation in the spot size or bioanalytical data was observed. Bioanalytical results obtained for DBS samples containing a number of analytes spanning a range of chemical space are not affected by the lot used or by the location within a lot.
Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A
2016-09-01
The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12- month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sampling size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
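A sketch of one common way to set such a validation sample size: estimate an error proportion to a target margin at a chosen confidence level, with a finite-population correction. The margin, confidence level and record count are illustrative, and the study's exact sampling plan is not reproduced.

```python
import math
from scipy.stats import norm

def validation_sample_size(population, conf=0.95, error=0.02, p=0.5):
    """Sample size for estimating an error proportion to within +/- `error`
    at confidence `conf`, with a finite-population correction.
    Illustrative only; parameter values are assumptions."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n0 = z**2 * p * (1 - p) / error**2            # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

# e.g. a data type with 500,000 migrated records
print(validation_sample_size(500_000))            # ~2,390 records to review
```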
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question-how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes-the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an already present concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
The more the heavier? Family size and childhood obesity in the U.S.
Datar, Ashlesha
2017-05-01
Childhood obesity remains a top public health concern and understanding its drivers is important for combating this epidemic. Contemporaneous trends in declining family size and increasing childhood obesity in the U.S. suggest that family size may be a potential contributor, but limited evidence exists. Using data from a national sample of children in the U.S. this study examines whether family size, measured by the number of siblings a child has, is associated with child BMI and obesity, and the possible mechanisms at work. The potential endogeneity of family size is addressed by using several complementary approaches including sequentially introducing of a rich set of controls, subgroup analyses, and estimating school fixed-effects and child fixed-effects models. Results suggest that having more siblings is associated with significantly lower BMI and lower likelihood of obesity. Children with siblings have healthier diets and watch less television. Family mealtimes, less eating out, reduced maternal work, and increased adult supervision of children are potential mechanisms through which family size is protective of childhood obesity. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Jingsong, E-mail: weijingsong@siom.ac.cn; Wang, Rui; University of Chinese Academy of Sciences, Beijing 100049
In this work, the resolving limit of maskless direct laser writing is overcome by the cooperative manipulation of nonlinear reverse saturation absorption and thermal diffusion, where the nonlinear reverse saturation absorption induces the formation of a below-diffraction-limit energy absorption spot, and the thermal diffusion manipulation makes the heat at the central region of the energy absorption spot propagate along the thin-film thickness direction. The temperature at the central region of the energy absorption spot transiently reaches the melting point and realizes nanolithography. The sample "glass substrate/AgInSbTe" is prepared, where AgInSbTe serves as the nonlinear reverse saturation absorption thin film. The below-diffraction-limit energy absorption spot is simulated theoretically and verified experimentally by a near-field spot scanning method. The "glass substrate/Al/AgInSbTe" sample is also prepared, where the Al is used as a thermal conductive layer to manipulate the thermal diffusion channel because the thermal diffusivity coefficient of Al is much larger than that of AgInSbTe. The direct laser writing is conducted with a setup using a laser wavelength of 650 nm and a converging lens of NA = 0.85; lithographic marks with a size of about 100 nm are obtained, only about 1/10 of the incident focused spot. The experimental results indicate that the cooperative manipulation of nonlinear reverse saturation absorption and thermal diffusion is a good method to realize nanolithography in maskless direct laser writing with visible light.
Constraining martian atmospheric dust particle size distributions from MER Navcam observations.
NASA Astrophysics Data System (ADS)
Soderblom, J. M.; Smith, M. D.
2017-12-01
Atmospheric dust plays an important role in atmospheric dynamics by absorbing energy and influencing the thermal structure of the atmosphere [1]. The efficiency with which dust absorbs energy depends on its size and single-scattering albedo. Characterizing these properties and their variability is thus important for modeling atmospheric circulation. Near-Sun observations of the martian sky from Viking Lander, Mars Pathfinder, and MER Pancam images have been used to characterize the atmospheric scattering phase function. The forward-scattering peak of the atmospheric phase function is primarily controlled by the size of the aerosol particles and is less sensitive to atmospheric opacity or to particle shape and single-scattering albedo [2]. These observations, however, have been limited to scattering angles >5°. We use the MER Navcams, which experience little to no debilitating internal instrumental scattered light during near-Sun imaging, enabling measurements of the brightness of the martian sky down to very small scattering angles [3]; this makes them more sensitive to aerosol particle size. Additionally, the Navcam band-pass wavelength is similar to the dust effective particle size, further increasing this sensitivity. These data sample a wide range of atmospheric conditions, including variations in the atmospheric dust loading across the entire martian year, as well as more rapid variations during the onset and dissipation of a global-scale dust storm. General circulation models (GCMs) predict a size dependence for the transport of dust during dust storms that would result in both spatial (regional-to-global scale) and temporal (days-to-months) variations in the dust size distribution [4]. The absolute calibration of these data, however, is limited: the instrument temperature measurement is limited to a single thermocouple on the Opportunity left Navcam CCD, and observations of the calibration target by Navcam are infrequent. We discuss ways to mitigate these uncertainties and provide improved recovery of dust particle size distributions from these data. [1] Gierasch and Goody, 1972, J. Atmos. Sci., 29, 400-402. [2] Hansen and Travis, 1974, Space Sci. Rev., 16, 527-610. [3] Soderblom et al., 2008, JGR, E06S19. [4] Murphy et al., 1993, JGR, 98(E2), 3197-3220.
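As a back-of-the-envelope sketch of why very small scattering angles constrain particle size, the diffraction approximation theta ≈ lambda / (2·pi·r_eff) relates the forward-peak angular half-width to effective radius. This is a simplifying assumption on my part, not the retrieval used in the abstract, which would rely on full Mie or T-matrix phase functions.

```python
import numpy as np

def forward_peak_halfwidth_deg(r_eff_um, wavelength_um=0.65):
    """Approximate angular half-width (degrees) of the forward diffraction
    peak for particles of effective radius r_eff_um, using
    theta ~ lambda / (2 * pi * r_eff). Illustrative only."""
    theta_rad = wavelength_um / (2.0 * np.pi * r_eff_um)
    return np.degrees(theta_rad)

# Representative effective radii for martian dust, in micrometres.
for r in (0.5, 1.0, 1.5, 2.0):
    print(f"r_eff = {r:.1f} um -> half-width ~ {forward_peak_halfwidth_deg(r):.1f} deg")
```

For micrometre-sized dust at a ~650 nm band pass the forward peak is only a few degrees wide, which is why observations restricted to scattering angles >5° lose much of the size information that the Navcam near-Sun images can capture.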
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with four types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, here called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial samples in these examinations need to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for improving CSM examination reliability and reproducibility.
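A minimal sketch of the kind of sample-size calculation implied by a 95% reliability degree and a 0.05 relative error, assuming the standard formula n = (z·CV/RE)²; the Cells Analyzer's internal method may differ, and the coefficient of variation used here is illustrative rather than taken from the study.

```python
import math
from statistics import NormalDist

def required_cell_count(cv, relative_error=0.05, reliability=0.95):
    """Cells needed so that mean endothelial cell area is estimated within
    the given relative error at the given reliability.  cv is the
    coefficient of variation of cell area (e.g. 0.30 for 30%)."""
    z = NormalDist().inv_cdf(0.5 + reliability / 2.0)   # 1.96 for 95%
    return math.ceil((z * cv / relative_error) ** 2)

# With an assumed cell-area CV of 0.30, roughly 139 cells are needed for
# RE = 0.05 -- more than the ~80-110 cells the instruments counted on
# average, consistent with the abstract's conclusion.
print(required_cell_count(0.30))
```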