Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique, using sales to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter, although different log-normal parameters are needed for the largest firms in the distribution, which are mostly foreign firms. For USA firms, the log-normal distribution has to be gradually truncated after a certain critical value. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid, with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
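As an illustrative sketch of the Zipf plot diagnostic described in this abstract (synthetic log-normal "sales", not the authors' data): for a log-normal sample the log-rank versus log-size curve is concave, so its local slope steepens in the upper tail, whereas a pure power law would plot as a straight line.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for firm sales: log-normally distributed values.
sales = rng.lognormal(mean=10.0, sigma=1.5, size=5000)

# Zipf plot: rank firms by size (descending) and inspect log(rank) vs log(size).
sizes = np.sort(sales)[::-1]
ranks = np.arange(1, sizes.size + 1)
log_rank, log_size = np.log(ranks), np.log(sizes)

# Local slopes: for a log-normal sample the tail slope is steeper (more
# negative) than the slope in the bulk of the distribution.
slope_tail = np.polyfit(log_size[:100], log_rank[:100], 1)[0]
slope_bulk = np.polyfit(log_size[2000:3000], log_rank[2000:3000], 1)[0]
print(slope_tail, slope_bulk)
```

The changing slope is the visual signature that separates a log-normal from a power-law (straight-line) Zipf plot.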
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
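A three-parameter log-normal fit of the kind used here can be sketched with scipy's shape/location/scale parameterisation; the "void radii" below are synthetic, not catalog data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "void radii": a three-parameter (shifted) log-normal sample.
shift, mu, sigma = 5.0, np.log(10.0), 0.5
radii = shift + rng.lognormal(mean=mu, sigma=sigma, size=4000)

# Maximum-likelihood fit of all three parameters: shape (sigma), location
# (the shift), and scale (exp(mu)) in scipy's parameterisation.
shape, loc, scale = stats.lognorm.fit(radii)

# Goodness of fit against the fitted distribution via Kolmogorov-Smirnov.
ks = stats.kstest(radii, 'lognorm', args=(shape, loc, scale))
print(shape, loc, scale, ks.statistic)
```

The fitted location must sit below the smallest observation, since a shifted log-normal has no support below its shift.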
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario, with minimal assumptions, capable of producing a log-normal distribution.
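A minimal version of this kind of model comparison, with synthetic areas standing in for the 120 SAD measurements: fit each candidate family and compare Kolmogorov-Smirnov statistics (smaller is better).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in for SAD cross-sectional areas: drawn from a log-normal.
areas = rng.lognormal(mean=1.0, sigma=0.8, size=120)

# Fit each candidate family, then compare KS statistics (smaller = better).
candidates = {
    'lognorm': stats.lognorm.fit(areas, floc=0),
    'expon':   stats.expon.fit(areas),
    'norm':    stats.norm.fit(areas),
}
ks = {name: stats.kstest(areas, name, args=params).statistic
      for name, params in candidates.items()}
best = min(ks, key=ks.get)
print(ks, best)
```

With only 120 samples, several families can be "consistent" with the data at once, which is exactly the situation the abstract reports for the fluxes.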
Box-Cox transformation of firm size data in statistical analysis
NASA Astrophysics Data System (ADS)
Chen, Ting Ting; Takaishi, Tetsuya
2014-03-01
Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two measures of firm size: the number of employees and sales. Both deviate considerably from a normal distribution. To improve their normality we transform them by the Box-Cox transformation with appropriate parameters, determined so that the transformed data best show the kurtosis of a normal distribution. We find that the two firm size measures transformed by the Box-Cox transformation show strong linearity, indicating that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
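A sketch of the Box-Cox step, with the caveat that scipy selects the transformation parameter by maximum likelihood rather than by matching kurtosis as in the paper; synthetic log-normal "employee counts" are used, for which the parameter should come out near zero (i.e., close to a log-transformation).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-in firm-size data (e.g. number of employees): log-normal, so the
# Box-Cox lambda should come out near zero (a log-transformation).
employees = rng.lognormal(mean=4.0, sigma=1.0, size=2000)

transformed, lam = stats.boxcox(employees)
print(lam)                          # close to 0 for log-normal input
print(stats.kurtosis(transformed))  # excess kurtosis near 0, as for a normal
```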
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultrafine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, from four days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by multiple linear regression (MLR); the particle distributions of diesel buses and CNG buses appear as a single accumulation mode and a single nuclei mode, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30-min interval mean scan; the degrees of fitting between the combined fitting curves and the corresponding in-situ scans, for 90 fitted scans in total, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified with statistical box-and-whisker charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
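The two-mode log-normal decomposition can be sketched as a least-squares fit of a sum of two log-normal modes to a synthetic spectrum; all numbers below are illustrative, chosen near the quoted CNG (nuclei) and diesel (accumulation) modes, and this is not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_mode(d, n, dg, sg):
    # One log-normal mode of a dN/dlogD spectrum: n = mode number,
    # dg = geometric mean diameter, sg = geometric standard deviation.
    return (n / (np.log(sg) * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * (np.log(d / dg) / np.log(sg)) ** 2))

def two_modes(d, n1, dg1, sg1, n2, dg2, sg2):
    return lognorm_mode(d, n1, dg1, sg1) + lognorm_mode(d, n2, dg2, sg2)

# Synthetic spectrum: nuclei mode near 21 nm plus accumulation mode near 80 nm.
d = np.logspace(np.log10(10), np.log10(400), 60)   # diameter, nm
truth = two_modes(d, 800, 21, 1.3, 500, 80, 1.9)
rng = np.random.default_rng(4)
spectrum = truth * (1 + 0.03 * rng.standard_normal(d.size))

p0 = [1000, 20, 1.4, 400, 90, 2.0]                  # rough initial guess
popt, _ = curve_fit(two_modes, d, spectrum, p0=p0, maxfev=20000)
print(popt)
```

The recovered geometric mean diameters of the two modes land near the true 21 nm and 80 nm values when the initial guess is in the right neighbourhood.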
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
Testing models of parental investment strategy and offspring size in ants.
Gilboa, Smadar; Nonacs, Peter
2006-01-01
Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) availability of interim or group-sequential designs, and (iii) a much smaller required sample size.
Starr, James C.; Torgersen, Christian E.
2015-01-01
We compared the assemblage structure, spatial distributions, and habitat associations of mountain whitefish (Prosopium williamsoni) morphotypes and size classes. We hypothesised that morphotypes would have different spatial distributions and would be associated with different habitat features based on feeding behaviour and diet. Spatially continuous sampling was conducted over a broad extent (29 km) in the Calawah River, WA (USA). Whitefish were enumerated via snorkelling in three size classes: small (10–29 cm), medium (30–49 cm), and large (≥50 cm). We identified morphotypes based on head and snout morphology: a pinocchio form that had an elongated snout and a normal form with a blunted snout. Large size classes of both morphotypes were distributed downstream of small and medium size classes, and normal whitefish were distributed downstream of pinocchio whitefish. Ordination of whitefish assemblages with nonmetric multidimensional scaling revealed that normal whitefish size classes were associated with higher gradient and depth, whereas pinocchio whitefish size classes were positively associated with pool area, distance upstream, and depth. Reach-scale generalised additive models indicated that normal whitefish relative density was associated with larger substrate size in downstream reaches (R2 = 0.64), and pinocchio whitefish were associated with greater stream depth in the reaches farther upstream (R2 = 0.87). These results suggest broad-scale spatial segregation (1–10 km), particularly between larger and more phenotypically extreme individuals. These results provide the first perspective on spatial distributions and habitat relationships of polymorphic mountain whitefish.
NASA Astrophysics Data System (ADS)
Laubach, S. E.; Hundley, T. H.; Hooker, J. N.; Marrett, R. A.
2018-03-01
Fault arrays typically include a wide range of fault sizes, and those faults may be randomly located, clustered together, or regularly or periodically located in a rock volume. Here, we investigate the size distribution and spatial arrangement of normal faults using rigorous size-scaling methods and normalized correlation count (NCC). Outcrop data from Miocene sedimentary rocks in the immediate upper plate of the regional Buckskin detachment, a low-angle normal fault, have differing patterns of spatial arrangement as a function of displacement (offset). Using lower size-thresholds of 1, 0.1, 0.01, and 0.001 m, displacements range over five orders of magnitude and have power-law frequency distributions spanning about four orders of magnitude, from less than 0.001 m to more than 100 m, with exponents of -0.6 and -0.9. The largest faults, with >1 m displacement, have a shallower size-distribution slope and regular spacing of about 20 m. In contrast, smaller faults have steep size-distribution slopes and irregular spacing, with NCC plateau patterns indicating imposed clustering. Cluster widths are 15 m for the 0.1-m threshold, 14 m for the 0.01-m threshold, and 1 m for the 0.001-m threshold. Results demonstrate that normalized correlation count effectively characterizes the spatial arrangement patterns of these faults. Our example from a high-strain fault pattern above a detachment is compatible with size and spatial organization influenced primarily by boundary conditions such as fault shape, mechanical unit thickness, and internal stratigraphy on a range of scales, rather than purely by interaction among faults during their propagation.
A new stochastic algorithm for inversion of dust aerosol size distribution
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Yang, Ma-ying
2015-08-01
Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique, based on the artificial bee colony (ABC) algorithm, for retrieving the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
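The structure of such an inversion can be sketched with a toy forward model and a bare-bones random search; the extinction kernel below is an illustrative stand-in (not a Mie calculation), and the random search is only a crude substitute for the ABC optimisation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
wavelengths = np.linspace(0.4, 1.0, 7)   # micrometres
radii = np.logspace(-2, 1, 200)          # micrometres
dr = np.gradient(radii)

def lognormal_nd(r, mu, sigma):
    # Log-normal number size distribution n(r).
    return (np.exp(-0.5 * ((np.log(r) - mu) / sigma) ** 2)
            / (r * sigma * np.sqrt(2.0 * np.pi)))

def extinction(mu, sigma):
    # Toy wavelength-dependent efficiency kernel: illustrative only, NOT Mie.
    q = 2.0 / (1.0 + (wavelengths[:, None] / (2.0 * np.pi * radii[None, :])) ** 2)
    n = lognormal_nd(radii, mu, sigma)
    return np.sum(q * np.pi * radii ** 2 * n * dr, axis=1)

true_tau = extinction(-0.5, 0.6)
measured = true_tau * (1.0 + 0.01 * rng.standard_normal(true_tau.size))

# Stochastic search over (mu, sigma): crude stand-in for the ABC algorithm.
best, best_err = (0.0, 1.0), np.inf
for _ in range(5000):
    mu, sigma = rng.uniform(-2.0, 1.0), rng.uniform(0.1, 1.5)
    err = np.sum((extinction(mu, sigma) - measured) ** 2)
    if err < best_err:
        best, best_err = (mu, sigma), err
print(best, best_err / np.sum(measured ** 2))
```

A population-based optimiser such as ABC explores the same parameter space far more efficiently than this blind search, which is the point of the paper.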
NASA Astrophysics Data System (ADS)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both, the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few tens of meters in each event, with only rare, deeply excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than very large or very small volumes. Alternatively, the log-normal distribution may reflect an inverse power-law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material.
Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
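The inverse power-law alternative quoted above can be illustrated with a maximum-likelihood (Hill-type) estimate of the cumulative-distribution exponent; the volumes below are synthetic draws from an assumed Pareto law, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(10)
# Synthetic landslide volumes from a Pareto tail above vmin with cumulative
# exponent ~1.3 (values in km^3, illustrative only).
vmin, alpha = 0.01, 1.3
u = rng.uniform(size=3000)
volumes = vmin * u ** (-1.0 / alpha)   # inverse-CDF sampling of a Pareto law

# Maximum-likelihood (Hill) estimator of the cumulative power-law exponent.
alpha_hat = volumes.size / np.sum(np.log(volumes / vmin))
print(alpha_hat)
```

The estimator follows from the Pareto survival function S(v) = (v/vmin)^(-alpha); its standard error scales as alpha/sqrt(n), so 3000 samples pin the exponent to within a few hundredths.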
Improved Root Normal Size Distributions for Liquid Atomization
2015-11-01
Distribution Statement A. Approved for public release; distribution is unlimited.
Size distribution and sorption of polychlorinated biphenyls during haze episodes
NASA Astrophysics Data System (ADS)
Zhu, Qingqing; Liu, Guorui; Zheng, Minghui; Zhang, Xian; Gao, Lirong; Su, Guijin; Liang, Yong
2018-01-01
There is a lack of studies on the size distribution of polychlorinated biphenyls (PCBs) during haze days, and their sorption mechanisms on aerosol particles remain unclear. In this study, PCBs in size-resolved aerosols from the urban atmosphere of Beijing, China were investigated during haze and normal days. The concentrations, gas/particle partitioning, size distribution, and associated human daily intake of PCBs via inhalation were compared between haze days and normal days. Compared with normal days, higher particle mass-associated PCB levels were measured during haze days. The concentrations of ∑PCBs in particulate fractions were 11.9-134 pg/m3 during haze days and 6.37-14.9 pg/m3 during normal days. PCB concentrations increased with decreasing particle size (>10 μm, 10-2.5 μm, 2.5-1.0 μm, and ≤1.0 μm). During haze days, PCBs were overwhelmingly associated with the fine particle fraction of ≤1.0 μm (64.6%), while during normal days the contribution was 33.7%. Tetra-CBs were the largest contributors (51.8%-66.7%) in both the gas and particle fractions during normal days. During haze days, the profiles in the gas fraction were conspicuously different from those in the PM fractions, with di-CBs predominating in the gas fraction and higher homologues (tetra-CBs, penta-CBs, and hexa-CBs) concurrently accounting for most of the PM fractions. The mean-normalized size distributions of particulate mass and PCBs exhibited unimodal patterns, with a similar trend for PCBs during both periods; they all tended to peak in the PM fraction of 1.0-2.5 μm. Adsorption might be the predominant mechanism for the gas-particle partitioning of PCBs during haze days, whereas absorption might be dominant during normal days.
Bidisperse and polydisperse suspension rheology at large solid fraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.
At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and a weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction φ_m. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but particle contact with a mild frictional interaction (interparticle friction coefficient of 0.2) is allowed. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii of 2 to 4 shows that a minimum in the viscosity occurs for ζ slightly above 0.5, where ζ = φ_large/φ is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log-normal size distributions, and of bidisperse suspensions which are statistically equivalent to these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies extracted. Prior work shows that such distributions with equivalent low-order moments have similar φ_m, and the rheological behaviors of the normal, log-normal, and bidisperse cases are shown to be in close agreement for a wide range of standard deviations in particle size, with standard correlations which depend functionally on φ/φ_m providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions.
Microstructural investigations and the stress distribution according to particle size are also presented.
A general approach to double-moment normalization of drop size distributions
NASA Astrophysics Data System (ADS)
Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.
2003-04-01
Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of this two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, for moment estimation, least-squares regression is slightly more effective than estimation from the normalized average DSD.
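The two-moment scaling normalization can be sketched as follows; the gamma DSDs and the reference moment pair (i = 3, j = 4) are illustrative choices, with the characteristic diameter (Mj/Mi)^(1/(j-i)) and the scaling concentration Mi^((j+1)/(j-i)) Mj^(-(i+1)/(j-i)) taken from the general two-moment formulation.

```python
import numpy as np

D = np.linspace(0.05, 20.0, 800)   # drop diameter, mm
dD = D[1] - D[0]

def gamma_dsd(n0, mu, lam):
    return n0 * D ** mu * np.exp(-lam * D)

def moment(nd, k):
    return np.sum(nd * D ** k) * dD

# Reference moments i = 3 and j = 4 (the Testud et al. 2001 normalization
# corresponds to this choice of moment pair).
i, j = 3, 4

def normalize(nd):
    mi, mj = moment(nd, i), moment(nd, j)
    dchar = (mj / mi) ** (1.0 / (j - i))                      # char. diameter
    nc = mi ** ((j + 1) / (j - i)) * mj ** (-(i + 1) / (j - i))
    return D / dchar, nd / nc

# Two gamma DSDs with the same intrinsic shape but different concentration
# and slope collapse onto (nearly) the same normalized curve.
x1, h1 = normalize(gamma_dsd(8000.0, 2.0, 3.0))
x2, h2 = normalize(gamma_dsd(2000.0, 2.0, 1.5))
mismatch = np.max(np.abs(h1 - np.interp(x1, x2, h2)))
print(mismatch, np.max(h1))
```

The collapse onto a single dimensionless function h(x) is the point of the normalization: the two moments absorb all the DSD-to-DSD variability except the intrinsic shape.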
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively smaller sample. In this paper, sample size estimation by a bootstrap procedure is presented for comparing two parallel-design arms with continuous data, for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) are also carried out for the identical data. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formulas is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
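A bare-bones version of bootstrap sample-size estimation from pilot data; the pilot sample is synthetic and a two-sample t-test stands in for whatever analysis the trial would actually use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical "historical" data standing in for a representative pilot sample.
pilot = rng.normal(loc=0.0, scale=1.0, size=60)
effect = 0.5   # treatment difference the trial should detect

def bootstrap_power(n_per_arm, n_boot=1000, alpha=0.05):
    # Resample the pilot data to build both parallel arms, shift one arm by
    # the target effect, and count how often the test rejects.
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot, size=n_per_arm, replace=True)
        b = rng.choice(pilot, size=n_per_arm, replace=True) + effect
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_boot

# Walk candidate sample sizes upward until the estimated power reaches ~0.80.
for n in (40, 60, 80, 100):
    print(n, bootstrap_power(n))
```

Swapping `ttest_ind` for `stats.mannwhitneyu` (a Wilcoxon-type test) inside the loop mirrors the paper's recommendation to use the trial's actual analysis method within each bootstrap replicate.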
Bowker, Matthew A.; Maestre, Fernando T.
2012-01-01
Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. 
These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1976-01-01
A study was made of the field size distributions for LACIE test sites 5029, 5033, and 5039, People's Republic of China. Field lengths and widths were measured from LANDSAT imagery, and field area was statistically modeled. Field size parameters have log-normal or Poisson frequency distributions. These were normalized to the Gaussian distribution and theoretical population curves were made. When compared to fields in other areas of the same country measured in the previous study, field lengths and widths in the three LACIE test sites were 2 to 3 times smaller and areas were smaller by an order of magnitude.
Distribution of transvascular pathway sizes through the pulmonary microvascular barrier.
McNamee, J E
1987-01-01
Mathematical models of solute and water exchange in the lung have been helpful in understanding factors governing the volume flow rate and composition of pulmonary lymph. As experimental data and models become more encompassing, parameter identification becomes more difficult. Pore sizes in these models should approach and eventually become equivalent to actual physiological pathway sizes as more complex and accurate models are tried. However, pore sizes and numbers vary from model to model as new pathway sizes are added. This apparent inconsistency of pore sizes can be explained if it is assumed that the pulmonary blood-lymph barrier is widely heteroporous, for example, being composed of a continuous distribution of pathway sizes. The sieving characteristics of the pulmonary barrier are reproduced by a log normal distribution of pathway sizes (log mean = -0.20, log s.d. = 1.05). A log normal distribution of pathways in the microvascular barrier is shown to follow from a rather general assumption about the nature of the pulmonary endothelial junction.
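The quoted pathway-size distribution can be explored directly; the sketch below assumes the stated parameters are the mean and standard deviation of the natural logarithm of pathway size (the logarithm base is not specified here), with sizes in arbitrary units.

```python
import numpy as np
from scipy import stats

# Log-normal pathway-size distribution with the parameters quoted above
# (log mean = -0.20, log s.d. = 1.05), assuming natural logarithms.
mu, sigma = -0.20, 1.05
pores = stats.lognorm(s=sigma, scale=np.exp(mu))

# Fraction of transvascular pathways larger than a few illustrative sizes.
for r in (0.5, 1.0, 2.0, 4.0):
    print(r, pores.sf(r))
```

The survival function falls smoothly over orders of magnitude in size, which is how a single continuous distribution can reconcile the differing discrete "pore" sets fitted by earlier models.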
Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer
2016-01-01
Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. 
This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
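The Shapiro-Wilk test and Box-Cox transformation described above can be sketched with SciPy. This is an illustrative sketch only: the data here are simulated MUAC-like values, not the survey datasets analyzed in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical, slightly right-skewed MUAC-like values in mm (not survey data)
muac = rng.lognormal(mean=np.log(140.0), sigma=0.15, size=500)

# Shapiro-Wilk test for departure from normality
w, p = stats.shapiro(muac)
print(f"Shapiro-Wilk: W={w:.3f}, p={p:.4f}")

# Box-Cox power transformation (requires strictly positive data)
transformed, lam = stats.boxcox(muac)
w2, p2 = stats.shapiro(transformed)
print(f"after Box-Cox (lambda={lam:.2f}): W={w2:.3f}, p={p2:.4f}")
```

The D'Agostino skewness and Anscombe-Glynn kurtosis checks would follow the same pattern with `stats.skewtest` and `stats.kurtosistest`.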
Ejected Particle Size Distributions from Shocked Metal Surfaces
Schauer, M. M.; Buttler, W. T.; Frayer, D. K.; ...
2017-04-12
Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also found strong evidence for a bimodal distribution of particle sizes with smaller particles evolved from features machined into the target surface and larger particles being produced at the edges of these features.
Growth models and the expected distribution of fluctuating asymmetry
Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John
2003-01-01
Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
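The leptokurtosis of a lognormal difference, and the normalizing effect of differencing on the log scale recommended above, can be checked by simulation (all parameters illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

# Correlated underlying normals give correlated lognormal left/right sides
z = rng.multivariate_normal([0.0, 0.0], [[0.25, 0.125], [0.125, 0.25]], size=n)
l, r = np.exp(z[:, 0]), np.exp(z[:, 1])

# The difference of two lognormal variates is leptokurtic
excess = stats.kurtosis(l - r)  # Fisher definition: 0 for a normal distribution
print(f"excess kurtosis of l - r: {excess:.2f}")

# Differencing on the log scale instead yields a normal variate
excess_log = stats.kurtosis(np.log(l) - np.log(r))
print(f"excess kurtosis of log l - log r: {excess_log:.2f}")
```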
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither the parameter selection and thresholds estimation nor the process of cost function selection and complex gradient computation. Computer simulations have been carried out to suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and then the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the World were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for their frequency distributions. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching that of either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution which has moments readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
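The log transformation used to normalize the skewed field-size data can be sketched as follows, with hypothetical field areas standing in for the LANDSAT measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical field areas (hectares), peaked and skewed toward small values
areas = rng.lognormal(mean=np.log(12.0), sigma=0.9, size=1000)
print(f"skewness of raw areas: {stats.skew(areas):.2f}")      # strongly positive

# Log transformation yields an approximately Gaussian distribution
log_areas = np.log(areas)
print(f"skewness of log areas: {stats.skew(log_areas):.2f}")  # near zero

# Moments of the normalized data remain interpretable on the original scale
mu = log_areas.mean()
print(f"geometric mean field size: {np.exp(mu):.1f} ha")
```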
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
Distribution of normal superficial ocular vessels in digital images.
Banaee, Touka; Ehsaei, Asieh; Pourreza, Hamidreza; Khajedaluee, Mohammad; Abrishami, Mojtaba; Basiri, Mohsen; Daneshvar Kakhki, Ramin; Pourreza, Reza
2014-02-01
To investigate the distribution of different-sized vessels in the digital images of the ocular surface, an endeavor which may provide useful information for future studies. This study included 295 healthy individuals. From each participant, four digital photographs of the superior and inferior conjunctivae of both eyes, with a fixed succession of photography (right upper, right lower, left upper, left lower), were taken with a slit lamp mounted camera. Photographs were then analyzed by a previously described algorithm for vessel detection in the digital images. The area (of the image) occupied by vessels (AOV) of different sizes was measured. Height, weight, fasting blood sugar (FBS) and hemoglobin levels were also measured and the relationship between these parameters and the AOV was investigated. These findings indicated a statistically significant difference in the distribution of the AOV among the four conjunctival areas. No significant correlations were noted between the AOV of each conjunctival area and the different demographic and biometric factors. Medium-sized vessels were the most abundant vessels in the photographs of the four investigated conjunctival areas. The AOV of the different sizes of vessels follows a normal distribution curve in the four areas of the conjunctiva. The distribution of the vessels in successive photographs changes in a specific manner, with the mean AOV becoming larger as the photos were taken from the right upper to the left lower area. The AOV of vessel sizes has a normal distribution curve and medium-sized vessels occupy the largest area of the photograph. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Element enrichment factor calculation using grain-size distribution and functional data regression.
Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R
2015-01-01
In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.
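The classical normalization that the functional approach is compared against, regressing pollutant concentration on a single grain-size variable and working with the residuals, can be sketched with synthetic data; the functional version would replace the scalar predictor with the whole grain-size curve:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
# Synthetic sediment samples: percent fines as a grain-size proxy, and a
# pollutant whose concentration partly tracks grain size
fines = rng.uniform(10.0, 90.0, size=n)
pollutant = 0.4 * fines + rng.normal(0.0, 5.0, size=n)

# Classical normalization: regress on the grain-size proxy; the residuals
# are the grain-size-corrected concentrations used for enrichment factors
slope, intercept = np.polyfit(fines, pollutant, 1)
residuals = pollutant - (slope * fines + intercept)

# By construction the residuals are uncorrelated with grain size
corr = np.corrcoef(fines, residuals)[0, 1]
print(f"corr(fines, residuals) = {corr:.3f}")
```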
The Italian primary school-size distribution and the city-size: a complex nexus
NASA Astrophysics Data System (ADS)
Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.
2014-06-01
We characterize the statistical law according to which Italian primary school-size distributes. We find that the school-size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between schools-size and city-size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed shedding lights on the relation between policy and geographical features.
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Theory of the intermediate stage of crystal growth with applications to insulin crystallization
NASA Astrophysics Data System (ADS)
Barlow, D. A.
2017-07-01
A theory for the intermediate stage of crystal growth, built on two defining equations, one for population continuity and another for mass balance, is used to study the kinetics of the supersaturation decay, the homogeneous nucleation rate, the linear growth rate and the final distribution of crystal sizes for the crystallization of bovine and porcine insulin from solution. The cited experimental reports suggest that the crystal linear growth rate is directly proportional to the square of the insulin concentration in solution for bovine insulin and to the cube of concentration for porcine. In a previous work, it was shown that the above-mentioned system could be solved for the case where the growth rate is directly proportional to the normalized supersaturation. Here a more general solution is presented, valid for cases where the growth rate is directly proportional to the normalized supersaturation raised to the power of any positive integer. The resulting expressions for the time-dependent normalized supersaturation and crystal size distribution are compared with experimental reports for insulin crystallization. An approximation for the maximum crystal size at the end of the intermediate stage is derived. The results suggest that the largest crystal size in the distribution at the end of the intermediate stage is maximized when nucleation is restricted to be only homogeneous. Further, the largest size in the final distribution depends only weakly upon the initial supersaturation.
Lin, Chi-Chi; Huang, Hsiao-Lin; Hsiao, Wen-Yuan
2016-01-01
Past studies indicated that particulates generated by waste incineration contain various hazardous compounds. The aerosol characteristics are very important for particulate hazard control and workers' protection. This study explores the detailed characteristics of emitted particulates from each important operation unit in a rotary kiln-based hazardous industrial waste incineration plant. A dust size analyzer (Grimm 1.109) and a scanning mobility particle sizer (SMPS) were used to measure the aerosol mass concentration, mass size distribution, and number size distribution at five operation units (S1-S5) during periods of normal operation, furnace shutdown, and annual maintenance. The highest measured PM10 concentration was found at the area of fly ash discharge from the air pollution control equipment (S5) during normal operation. Fine particles (PM2.5) constituted the majority of the emitted particles from the incineration plant. The mass size distributions made it clear that the aerosols responsible for the increase in particulate mass during work activities were mostly larger than 1.5 μm, whereas the number size distributions showed that the particulates driving the increase in number concentrations were mostly in the submicrometer range. The process of discharging fly ash from the air pollution control equipment can significantly increase the emission of nanoparticles. The mass concentrations and size distributions of emitted particulates differed at each operation unit. This information is valuable for managers adopting appropriate strategies to reduce particulate emission and the associated worker exposure.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for Shapiro-Wilk test and 0.18 for D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution or adjusting, the significance level of normality tests depending on sample size would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
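The simulation design described above can be reproduced in outline; the sketch below estimates the rejection rate of the Shapiro-Wilk test for lognormal samples of n = 30 (the σ of the simulated parent population is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
reps, n, alpha = 1000, 30, 0.05

# Fraction of non-Gaussian (lognormal) samples that Shapiro-Wilk correctly
# rejects at this small sample size; low values mean parametric RI methods
# would often be applied to a non-Gaussian parent population
rejected = sum(
    stats.shapiro(rng.lognormal(mean=0.0, sigma=0.5, size=n)).pvalue < alpha
    for _ in range(reps)
)
print(f"rejection rate for lognormal samples at n={n}: {rejected / reps:.2f}")
```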
NASA Astrophysics Data System (ADS)
Matsubara, Yoshitsugu; Musashi, Yasuo
2017-12-01
The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fitting of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
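Fitting the lognormal and normal reference models by maximum likelihood and estimating the relative standard error of a distribution parameter, as in the protocol above, can be sketched with SciPy; the diameters here are simulated to resemble the RM8012 statistics, not the interlaboratory TEM data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Simulated area-equivalent diameters (nm) resembling the RM8012 statistics
diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=400)

# Two-parameter lognormal (location fixed at 0) and normal reference models
shape, _, scale = stats.lognorm.fit(diam, floc=0)
mu, sigma = stats.norm.fit(diam)
print(f"lognormal: median={scale:.1f} nm, sigma={shape:.3f}")
print(f"normal: mean={mu:.1f} nm, sd={sigma:.2f} nm")

# Relative standard error of the fitted lognormal sigma (bootstrap sketch)
boots = [stats.lognorm.fit(rng.choice(diam, diam.size), floc=0)[0]
         for _ in range(200)]
print(f"RSE of fitted sigma: {np.std(boots) / shape:.2%}")
```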
Power of tests of normality for detecting contaminated normal samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thode, H.C. Jr.; Smith, L.A.; Finch, S.J.
1981-01-01
Seventeen tests of normality or goodness of fit were evaluated for power at detecting a contaminated normal sample. This study used 1000 replications each of samples of size 12, 17, 25, 33, 50, and 100 from six different contaminated normal distributions. The kurtosis test was the most powerful over all sample sizes and contaminations. The Hogg and weighted Kolmogorov-Smirnov tests were second. The Kolmogorov-Smirnov, chi-squared, Anderson-Darling, and Cramer-von-Mises tests had very low power at detecting contaminated normal random variables. Tables of the power of the tests and the power curves of certain tests are given.
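A power study of this kind is straightforward to reproduce in outline. The sketch below estimates the power of a kurtosis test against one hypothetical contamination model (10% of observations drawn from a wider normal), not the six distributions used in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
reps, n, alpha = 2000, 50, 0.05

def contaminated_normal(size):
    # 90% N(0,1) contaminated with 10% N(0,3^2): a heavy-tailed mixture
    wide = rng.random(size) < 0.10
    return np.where(wide, rng.normal(0.0, 3.0, size), rng.normal(0.0, 1.0, size))

# Estimated power of the kurtosis test at alpha = 0.05
hits = sum(stats.kurtosistest(contaminated_normal(n)).pvalue < alpha
           for _ in range(reps))
print(f"estimated power at n={n}: {hits / reps:.2f}")
```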
Karulin, Alexey Y.; Karacsony, Kinga; Zhang, Wenji; Targoni, Oleg S.; Moldovan, Ioana; Dittrich, Marcus; Sundararaman, Srividya; Lehmann, Paul V.
2015-01-01
Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots the decision on setting the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If the spot sizes follow a known statistical distribution, precise predictions on minimal and maximal spot sizes, belonging to a given T cell population, can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed Log Normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs. PMID:25612115
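Given a log-normal spot-size distribution, lower and upper counting thresholds can be derived as quantiles of the fitted distribution. A sketch with simulated spot sizes (units, parameters, and the 0.5%/99.5% gates are all hypothetical choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated spot areas for one well; log-normal per the distributional result
spots = rng.lognormal(mean=np.log(2000.0), sigma=0.6, size=500)

# Fit the log-normal on the log scale and set statistically derived gates:
# sizes below the 0.5th percentile are treated as background noise, sizes
# above the 99.5th percentile as candidate T cell clusters
mu, sigma = np.log(spots).mean(), np.log(spots).std(ddof=1)
lower = np.exp(mu + sigma * stats.norm.ppf(0.005))
upper = np.exp(mu + sigma * stats.norm.ppf(0.995))
print(f"counting window: {lower:.0f} to {upper:.0f} (same units as spots)")
```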
Wang, Honglei; Zhu, Bin; Shen, Lijuan; Kang, Hanqing
2012-01-01
To investigate the impact on urban air pollution by crop residual burning outside Nanjing, aerosol concentration, pollution gas concentration, mass concentration, and water-soluble ion size distribution were observed during one event of November 4-9, 2010. Results show that the size distribution of aerosol concentration is bimodal on pollution days and normal days, with peak values at 60-70 and 200-300 nm, respectively. Aerosol concentration is 10^4 cm^-3 nm^-1 on pollution days. The peak value of the spectrum distribution of aerosol concentration on pollution days is 1.5-3.3 times higher than that on a normal day. Crop residual burning has a great impact on the concentration of fine particles. Diurnal variation of aerosol concentration is trimodal on pollution days and normal days, with peak values at 03:00, 09:00 and 19:00 local standard time. The first peak is impacted by meteorological elements, while the second and third peaks are due to human activities, such as rush hour traffic. Crop residual burning has the greatest impact on SO2 concentration, followed by NO2, while O3 is hardly affected. The impact of crop residual burning on fine particles (<2.1 μm) is larger than on coarse particles (>2.1 μm), thus ion concentration in fine particles is higher than that in coarse particles. Crop residual burning leads to similar increases in all ion components, thus it has a small impact on the order of the water-soluble ions. Crop residual burning has a strong impact on the size distributions of K^+, Cl^-, Na^+, and F^-, and has a weak impact on the size distributions of NH4^+, Ca^2+, NO3^- and SO4^2-.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Matney, Mark J.
2008-01-01
We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power-law with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power-laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright), and thereby skew the small population upward by rising above the detection threshold, and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source.
Its prima facie genesis, derived and constructed from the current RCS-to-size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and, by example, apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.
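The central claim, that multiplying a power-law size population by log-normal albedos preserves the power-law form of the brightness distribution (the exponent is halved here only because the brightness proxy scales with the square of size), can be checked numerically. All parameters below are illustrative, not the NASA values:

```python
import numpy as np

rng = np.random.default_rng(8)
n, a = 500_000, 2.5  # cumulative power-law exponent for debris sizes (assumed)

d = rng.pareto(a, n) + 1.0               # sizes with N(>d) proportional to d**-a
p = rng.lognormal(np.log(0.13), 0.5, n)  # log-normal Bond albedos around 0.13
f = p * d**2                             # brightness proxy: albedo times area

def tail_slope(x, q=0.99):
    # Log-log slope of the cumulative (survival) tail above the q-th quantile
    xs = np.sort(x)[int(q * len(x)):]
    ranks = np.arange(len(xs), 0, -1)
    return np.polyfit(np.log(xs), np.log(ranks), 1)[0]

print(f"size tail slope: {tail_slope(d):.2f}")        # close to -a
print(f"brightness tail slope: {tail_slope(f):.2f}")  # close to -a/2: same law
```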
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Badhwar, G.
1980-01-01
The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.
Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.
Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza
2017-01-01
To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < 0.001), systolic (126.37 ± 20.25 vs. 119.21 ± 15.81 mmHg, P < 0.001) and diastolic (78.14 ± 14.21 vs. 67.54 ± 11.46 mmHg, P < 0.001) blood pressures. The distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < 0.001) as well as between patients without retinopathy and those with non-proliferative diabetic retinopathy (NPDR); with larger AOV for smaller vessels in NPDR ( P < 0.001). Controlling for the effect of confounders, patients had a smaller total AOV, larger total SD of AOV, and a more skewed distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
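The Monte Carlo design described above can be sketched for one simple case. This is an illustrative reconstruction, not the authors' code: it assumes tau-equivalent items (equal loadings, unit-variance errors), and contrasts normal with skewed (shifted-exponential) error scores; all parameter values are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Sample coefficient alpha for an (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
n, k = 5000, 10
true_score = rng.normal(0.0, 1.0, n)

# Tau-equivalent items: common true score plus mean-zero, unit-variance errors.
normal_err = rng.normal(0.0, 1.0, (n, k))
skewed_err = rng.exponential(1.0, (n, k)) - 1.0  # mean 0, var 1, skewed

alpha_normal = cronbach_alpha(true_score[:, None] + normal_err)
alpha_skewed = cronbach_alpha(true_score[:, None] + skewed_err)
# Population alpha in this setup is 10 * 0.5 / (1 + 9 * 0.5), about 0.909.
```

A full replication would vary sample size, test length, and kurtosis/skewness levels and examine the bias and precision of the sample alpha across replications, as the paper does.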
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
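The iterative scheme described above, which reduces to the classical successive-approximations (EM-type) procedure at step size 1, can be sketched for a two-component univariate normal mixture. This is a hedged reconstruction under simplified assumptions, not the authors' procedure: the relaxed update theta <- theta + omega * (EM(theta) - theta) with 0 < omega < 2 mirrors the step-size condition in the abstract, and all data parameters are illustrative.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_update(x, theta):
    pi1, mu1, s1, mu2, s2 = theta
    # E-step: posterior probability that each point belongs to component 1
    p1 = pi1 * normal_pdf(x, mu1, s1)
    p2 = (1.0 - pi1) * normal_pdf(x, mu2, s2)
    r = p1 / (p1 + p2)
    # M-step: responsibility-weighted maximum-likelihood updates
    w1, w2 = r.sum(), (1.0 - r).sum()
    mu1n = (r * x).sum() / w1
    mu2n = ((1.0 - r) * x).sum() / w2
    s1n = np.sqrt((r * (x - mu1n) ** 2).sum() / w1)
    s2n = np.sqrt(((1.0 - r) * (x - mu2n) ** 2).sum() / w2)
    return np.array([r.mean(), mu1n, s1n, mu2n, s2n])

def solve(x, theta0, omega=1.0, n_iter=300):
    # Generalized step: omega = 1 recovers the plain successive-approximations
    # scheme; the paper's convergence result covers step sizes 0 < omega < 2.
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta + omega * (em_update(x, theta) - theta)
    return theta

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
pi1, mu1, s1, mu2, s2 = solve(x, (0.5, -1.0, 1.0, 1.0, 1.0))
```

With well-separated components, as here, the iteration converges quickly; the abstract's point is that the optimal step size for large samples is governed by exactly this separation.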
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Lognormal Behavior of the Size Distributions of Animation Characters
NASA Astrophysics Data System (ADS)
Yamamoto, Ken
This study investigates the statistical properties of the sizes of characters in animations, superhero series, and video games. Using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow the lognormal distribution. As a theoretical mechanism for this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.
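The proposed mechanism can be sketched numerically. The sketch below assumes one reading of the argument: perceived magnitude is normally distributed, and by the Weber-Fechner law perception grows with the logarithm of physical size, so the implied physical weight is lognormal. The perception parameters are illustrative, not fitted to the databases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weber-Fechner: perceived magnitude p = c * ln(w) for physical weight w.
# If designers draw the perceived "impressiveness" p from a normal
# distribution, the implied weight w = exp(p / c) is lognormal.
c = 1.0
perception = rng.normal(3.0, 0.8, 100_000)   # hypothetical parameters
weight = np.exp(perception / c)

# Lognormality check: the log of the weights should be (skew-free) normal.
log_w = np.log(weight)
skew_log_w = ((log_w - log_w.mean()) ** 3).mean() / log_w.std() ** 3
```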
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Researchers should therefore exercise caution when interpreting sample point-biserial correlation coefficients based on the popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5.
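For the uniform case mentioned above, the maximum can be checked numerically. This sketch assumes the standard construction in which the maximal point-biserial at proportion p is attained by dichotomizing at the (1 - p) quantile; under that assumption the uniform-latent maximum works out to sqrt(3 p (1 - p)), a closed form derived here for illustration and not quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200_000)   # continuous variable underlying Y

p = 0.3
# Optimal split: the top p fraction of the latent variable is coded 1.
y = (x > np.quantile(x, 1.0 - p)).astype(float)

r_pb = np.corrcoef(x, y)[0, 1]       # point-biserial = Pearson r with binary Y
r_max_theory = np.sqrt(3.0 * p * (1.0 - p))   # about 0.794 for p = 0.3
```

Even at its maximum over p (p = .5, giving sqrt(0.75) = 0.866), the uniform-latent point-biserial stays well below 1, consistent with the paper's warning about over-interpreting its size.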
Effect of particle size distribution on permeability in the randomly packed porous media
NASA Astrophysics Data System (ADS)
Markicevic, Bojan
2017-11-01
How porous-medium heterogeneity influences permeability remains an open question, with both increases and decreases in the permeability value reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum-to-minimum particle size ratio ranging from three to eight across distributions. In all six cases, the average particle size is kept the same. For all media generated, stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, in which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in this domain and, after checking for pressure axial linearity, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than in the other media, which can be explained by the volumetric contribution of the larger particles and the larger passages available for fluid flow.
Mesh size selectivity of the gillnet in East China Sea
NASA Astrophysics Data System (ADS)
Li, L. Z.; Tang, J. H.; Xiong, Y.; Huang, H. L.; Wu, L.; Shi, J. J.; Gao, Y. S.; Wu, F. Q.
2017-07-01
A production test using several gillnets with various mesh sizes was carried out to determine the selectivity of gillnets in the East China Sea. The results showed that the composition of the catch species was jointly affected by panel height and mesh size. The bycatch species of the 10-m nets were more numerous than those of the 6-m nets. For target species, the effect of panel height on juvenile fish was ambiguous, but the number of juvenile fish declined quickly with increasing mesh size. According to model deviance (D) and Akaike’s information criterion, the bi-normal model provided the best fit for small yellow croaker (Larimichthys polyactis), with relative retentions of 0.2 and 1 at the two modes. For Chelidonichthys spinosus, the log-normal was the best model; the right tilt of the selectivity curve was obvious and coincided well with the original data. The contact population of small yellow croaker showed a bi-normal distribution, with body lengths ranging from 95 to 215 mm. The contact population of C. spinosus showed a normal distribution, with body lengths ranging from 95 to 205 mm. These results can provide references for coastal fishery management.
NASA Astrophysics Data System (ADS)
Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua
2017-03-01
The study of internal stress in quenched AISI 4140 medium-carbon steel is of engineering importance. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters, based on an exponent-modified (Ex-Modified) normalized function. The results indicate that FES based on the proposed Ex-Modified normalized function is more consistent with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos and Leblond, respectively, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.
ERIC Educational Resources Information Center
Bellera, Carine A.; Julien, Marilyse; Hanley, James A.
2010-01-01
The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
Dieterich, J.H.; Kilgore, B.D.
1996-01-01
A procedure has been developed to obtain microscope images of regions of contact between roughened surfaces of transparent materials while the surfaces are subjected to static loads or undergoing frictional slip. Static loading experiments with quartz, calcite, soda-lime glass and acrylic plastic at normal stresses up to 30 MPa yield power-law distributions of contact areas from the smallest contacts that can be resolved (3.5 μm²) up to a limiting size that correlates with the grain size of the abrasive grit used to roughen the surfaces. In each material, increasing normal stress results in a roughly linear increase of the real area of contact. The mechanisms of contact-area increase are growth of existing contacts, coalescence of contacts and appearance of new contacts. Mean contact stresses are consistent with the indentation strength of each material. Contact size distributions are insensitive to normal stress, indicating that the increase of contact area is approximately self-similar. The contact images and contact distributions are modeled using simulations of surfaces with random fractal topographies. The contact process for model fractal surfaces is represented by the simple expedient of removing material at regions where surface irregularities overlap. Synthetic contact images created by this approach reproduce observed characteristics of the contacts and demonstrate that the exponent in the power-law distributions depends on the scaling exponent used to generate the surface topography.
Multifrequency Retrieval of Cloud Ice Particle Size Distributions
2005-01-01
A normalized gamma distribution (Testud et al., 2001) is used to represent the PSD. The normalized gamma distribution has several advantages over a typical gamma PSD, although correlated variation among its parameters requires a priori restrictions on the variance. (Fragmentary snippet; cited reference: Testud, J., S. Oury, R. A. Black, P. Amayenc, and X. Dou, 2001: The concept of "normalized" distribution to describe raindrop spectra.)
Zhang, Xian; Zheng, Minghui; Liang, Yong; Liu, Guorui; Zhu, Qingqing; Gao, Lirong; Liu, Wenbin; Xiao, Ke; Sun, Xu
2016-12-15
Little information is available on the distributions of airborne polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) during haze days. In this study, PCDD/F concentrations, particle size distributions, and gas-particle partitioning in a Beijing suburban area during haze days and normal days were investigated. High PCDD/F concentrations, 3979-74,702 fg m⁻³ (173-3885 fg I-TEQ m⁻³), were found during haze days, and ~98% of the PCDD/Fs were associated with particles. Most PCDD/F congeners (>90%) were associated with particles. PCDD/F concentrations increased as particle sizes decreased, and 95% of the particle-bound PCDD/Fs were associated with inhalable fine particles with aerodynamic diameters < 2.5 μm. The PCDD/Fs were mainly absorbed in the particles, and the Harner-Bidleman model predicted the particulate fractions of the PCDD/F congeners in the air samples well. The PCDD/F concentrations and particle-bound distributions differed between normal days and haze days. Temporal airborne PCDD/F trends in a suburban area during haze conditions could support better understanding of the exposure risk posed by toxic PCDD/Fs associated with fine particles.
Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages
Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.
2012-01-01
Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area of the home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers N and the area A spanned by linguistic groups follow an allometric relation of the form A ∝ N^γ, with an exponent γ varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of N and A, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent grows with the typical strength of conflicts. While the shapes of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of γ could give insight into the cultural organization of those human groups over the last thousand years. PMID:22815726
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
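The conditional-power criterion discussed above can be sketched for a normal endpoint. This is one standard formulation (the "current trend" assumption, in which the drift estimated at the interim is assumed to continue); the exact criterion in the cited papers may differ, and the information fraction and alpha level below are illustrative.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def conditional_power(z_interim, t, alpha=0.025):
    """Conditional power for a normal endpoint under the current-trend
    assumption: the interim drift z_interim / sqrt(t) is assumed to hold
    for the remainder of the trial.  t is the information fraction
    (0 < t < 1) at the interim analysis."""
    z_a = nd.inv_cdf(1.0 - alpha)
    return nd.cdf((z_interim / sqrt(t) - z_a) / sqrt(1.0 - t))

# Conditional power crosses 50% exactly where the interim drift equals the
# final critical value, i.e. at z_interim = z_{1-alpha} * sqrt(t).
t = 0.5
z_half = nd.inv_cdf(1.0 - 0.025) * sqrt(t)
```

This makes the paper's "promising" region concrete: an interim z-statistic above z_half corresponds to conditional power above 50% under these assumptions.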
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmuth, R.A.
1979-03-01
Progress is reported on the energy conservation potential of Portland cement particle size distribution control. Results of preliminary concrete tests, Series IIIa and Series IIIb, effects of particle size ranges on strength and drying shrinkage, are presented. Series IV, effects of mixing and curing temperature, tests compare the properties of several good particle size controlled cements with normally ground cements at low and high temperatures. The work on the effects of high alkali and high sulfate clinker cements (Series V) has begun.
NASA Astrophysics Data System (ADS)
Stünitz, Holger; Keulen, Nynke; Hirose, Takehiro; Heilbronner, Renée
2010-01-01
Microstructures and grain size distributions from high-velocity friction experiments are compared with those of the slow deformation experiments of Keulen et al. (2007, 2008) for the same material (Verzasca granitoid). The mechanical behavior of granitoid gouge in high-velocity friction experiments at slip rates of 0.65 and 1.28 m/s and normal stresses of 0.4-0.9 MPa is characterized by slip weakening in a typical exponential friction coefficient vs. displacement relationship. The grain size distributions yield similar D-values (slope of the frequency versus grain size curve = 2.2-2.3) to those of the slow deformation experiments (D = 2.0-2.3) for grain sizes larger than 1 μm. These values are independent of the total displacement above a shear strain of about γ = 20. The D-values are also independent of the displacement rates in the range of ~1 μm/s to ~1.3 m/s and do not vary in the normal stress range between 0.5 MPa and 500 MPa. With increasing displacement, grain shapes evolve towards more rounded and less serrated grains. While the grain size distribution remains constant, the progressive grain shape evolution suggests that grain comminution takes place by attrition at clast boundaries. Attrition produces a range of very small grain sizes by crushing, with a D-value < 1. The results of the study demonstrate that most cataclastic and gouge fault zones may have resulted from seismic deformation, but the distinction between seismic and aseismic deformation cannot be made on the basis of grain size distribution alone.
NASA Astrophysics Data System (ADS)
Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel
2002-12-01
Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.
[Studies on the size distribution of airborne microbes at home in Beijing].
Fang, Zhi-Guo; Sun, Ping; Ouyang, Zhi-Yun; Liu, Peng; Sun, Li; Wang, Xiao-Yong
2013-07-01
The effect of airborne microbes on human health depends not only on their composition (genera and species), but also on their concentrations and sizes; moreover, airborne microbes of different sizes affect human health through different mechanisms. The size distributions and median diameters were investigated in detail with an imitation six-stage Andersen sampler in 31 selected family homes with children in Beijing. The results showed similar distribution characteristics of airborne microbes across home environments, seasons, children's sexes, and apartment architectures, but different distribution characteristics between airborne bacteria and fungi. In general, although both airborne bacteria and fungi followed a log-normal distribution, the particle percentage of airborne bacteria increased gradually from stage 1 (> 8.2 μm) to stage 5 (1.0-2.0 μm) and then decreased dramatically in stage 6 (< 1.0 μm), whereas the percentage of airborne fungi increased gradually from stage 1 to stage 4 (2.0-3.5 μm) and then decreased dramatically from stage 4 to stage 6. The size distributions of the dominant fungi differed between fungal genera: Cladosporium, Penicillium and Aspergillus followed a log-normal distribution, with the highest percentage detected in stage 4, while Alternaria showed a skewed distribution, with the highest percentage detected in stage 2 (5.0-10.4 μm). Finally, the median diameters of airborne bacteria were larger than those of airborne fungi, and the lowest median diameters of airborne bacteria and fungi were found in winter, while there were no significant variations in the airborne bacterial and fungal median diameters among spring, summer and autumn.
Empirical study of the tails of mutual fund size
NASA Astrophysics Data System (ADS)
Schwarzkopf, Yonathan; Farmer, J. Doyne
2010-06-01
The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.
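The tail comparison described above can be sketched on synthetic data. This is a crude illustration, not the authors' analysis: the log-normal parameters are invented, the log-normal tail fit ignores truncation at the tail cutoff, and a careful comparison would use truncated likelihoods and a Vuong-type test as in standard power-law analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "fund sizes"; the log-normal parameters are illustrative only.
sizes = rng.lognormal(mean=5.0, sigma=2.0, size=50_000)

tail = np.sort(sizes)[-5_000:]     # largest 10% of funds
xmin = tail.min()

# Power-law (Pareto) MLE on the tail: alpha = 1 + n / sum(ln(x / xmin))
alpha = 1.0 + len(tail) / np.log(tail / xmin).sum()

# Log-normal MLE on the same data (untruncated approximation)
mu, s = np.log(tail).mean(), np.log(tail).std()

# Log-likelihoods of the two candidate tail models
ll_pareto = (np.log(alpha - 1.0) - np.log(xmin)
             - alpha * np.log(tail / xmin)).sum()
ll_lognormal = (-np.log(tail * s * np.sqrt(2.0 * np.pi))
                - (np.log(tail) - mu) ** 2 / (2.0 * s * s)).sum()
```

A log-normal tail masquerades as an approximate power law over limited ranges (the local exponent drifts with size), which is why a formal likelihood comparison of the kind the paper performs is needed rather than a straight-line fit on a log-log plot.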
The mechanical behavior of metal alloys with grain size distribution in a wide range of strain rates
NASA Astrophysics Data System (ADS)
Skripnyak, V. A.; Skripnyak, V. V.; Skripnyak, E. G.
2017-12-01
The paper discusses a multiscale simulation approach for constructing grain structures of metals and alloys that provide high tensile strength combined with ductility. This work compares the mechanical behavior of light alloys and the influence of the grain size distribution over a wide range of strain rates. The influence of the grain size distribution on the inelastic deformation and fracture of aluminium and magnesium alloys is investigated by computer simulations over a wide range of strain rates. It is shown that the yield stress of light alloys with a bimodal grain distribution and a coarse-grained structure depends on the logarithm of the normalized strain rate.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw, and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
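The calculation of the normal-curve parameters from Mn and Mw can be sketched as follows. The Mn and Mw values below are hypothetical, not taken from the paper; the relations Mn = exp(mu + sigma^2/2) and Mw/Mn = exp(sigma^2) are the standard identities for a log-normal number distribution of molecular weight.

```python
import numpy as np

# Hypothetical number- and weight-average molecular weights (Da) for a
# fulvic acid; illustrative values only.
Mn, Mw = 800.0, 1600.0

# For a log-normal number distribution of M, with ln M ~ N(mu, sigma^2):
#   Mn = exp(mu + sigma^2 / 2)   and   Mw / Mn = exp(sigma^2)
sigma2 = np.log(Mw / Mn)
mu = np.log(Mn) - sigma2 / 2.0

# Express in log10 units, matching the ranges reported in the abstract
mean_log10 = mu / np.log(10.0)              # ~2.75, inside the 2.7-3 range
sd_log10 = np.sqrt(sigma2) / np.log(10.0)   # ~0.36, inside the 0.28-0.37 range
```

With a polydispersity Mw/Mn of 2, the implied log10 mean and standard deviation fall inside the ranges the abstract reports for typical aquatic fulvic acids.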
Hoffmann, Aswin L; Nahum, Alan E
2013-10-07
The simple Linear-Quadratic (LQ)-based Withers iso-effect formula (WIF) is widely used in external-beam radiotherapy to derive a new tumour dose prescription such that there is normal-tissue (NT) iso-effect when changing the fraction size and/or number. However, as conventionally applied, the WIF is invalid unless the normal-tissue response is solely determined by the tumour dose. We propose a generalized WIF (gWIF) which retains the tumour prescription dose, but replaces the intrinsic fractionation sensitivity measure (α/β) by a new concept, the normal-tissue effective fractionation sensitivity, (α/β)_eff, which takes into account both the dose heterogeneity in, and the volume effect of, the late-responding normal tissue in question. Closed-form analytical expressions for (α/β)_eff ensuring exact normal-tissue iso-effect are derived for: (i) uniform dose, and (ii) arbitrary dose distributions with volume-effect parameter n = 1, from the normal-tissue dose-volume histogram. For arbitrary dose distributions and arbitrary n, a numerical solution for (α/β)_eff exhibits a weak dependence on the number of fractions. As n is increased, (α/β)_eff increases from its intrinsic value at n = 0 (100% serial normal tissue) to values close to or even exceeding the tumour (α/β) at n = 1 (100% parallel normal tissue), with the highest values of (α/β)_eff corresponding to the most conformal dose distributions. Applications of this new concept to inverse planning and to highly conformal modalities are discussed, as is the effect of possible deviations from LQ behaviour at large fraction sizes.
A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions
Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.
2005-01-01
The size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes through a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of the 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size.
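The π/4 ratio can be verified with a short Monte Carlo sketch for the single-sphere case: for an isotropic random plane that intersects a sphere, the distance of the cut from the center is uniform, so the expected section radius is (π/4)R. (The multi-sphere case, where larger spheres are more likely to be cut, is the subtler problem the paper treats.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A sphere of radius R cut by a random plane: the cut height h is uniform
# on [-R, R], and the 2D section radius is r = sqrt(R^2 - h^2).
R = 1.0
h = rng.uniform(-R, R, 1_000_000)
r2d = np.sqrt(R * R - h * h)

ratio = r2d.mean() / R       # converges to pi/4, about 0.785
factor = 1.0 / ratio         # the recommended conversion factor, about 1.273
```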
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of the medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean, or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BCa bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution and should in principle be preferred for estimating the population mean cost. However, for some data sets we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to the choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to the choice of model should thus be an essential component of analysing cost data in practice.
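A model-based estimate of the population mean cost can be sketched for the log-normal case. The data below are synthetic (the generating parameters are invented for illustration); the point is that under a log-normal model the mean is exp(mu + sigma^2/2), estimated from the log-scale MLEs, rather than the back-transformed log-scale mean exp(mu) alone.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic right-skewed "cost" data; true population mean is exp(7.5) ~ 1808.
costs = rng.lognormal(mean=7.0, sigma=1.0, size=400)

# Log-normal model: MLEs of mu and sigma^2 on the log scale, then the
# model-based estimate of the population mean
logc = np.log(costs)
mu_hat, s2_hat = logc.mean(), logc.var()
mean_lognormal = np.exp(mu_hat + s2_hat / 2.0)

mean_sample = costs.mean()   # non-parametric estimate of the same quantity
```

Comparing such model-based estimates (and their intervals) across several fitted distributions, as the paper does, is what reveals the sensitivity of inferences to the choice of model.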
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
NASA Astrophysics Data System (ADS)
Keene, W. C.; Long, M. S.; Duplessis, P.; Kieber, D. J.; Maben, J. R.; Frossard, A. A.; Kinsey, J. D.; Beaupre, S. R.; Lu, X.; Chang, R.; Zhu, Y.; Bisgrove, J.
2017-12-01
During a September-October 2016 cruise of the R/V Endeavor in the western North Atlantic Ocean, primary marine aerosol (PMA) was produced in a high capacity generator during day and night via detrainment of bubbles from biologically productive and oligotrophic seawater. The turbulent mixing of clean air and seawater in a Venturi nozzle produced bubble plumes with tunable size distributions. Physicochemical characteristics of size-resolved PMA and seawater were measured. PMA number production efficiencies per unit air detrained (PEnum) increased with increasing detrainment rate. For given conditions, PEnum values summed over size distributions were roughly ten times greater than those for frits whereas normalized size distributions were similar. Results show that bubble size distributions significantly modulated number production fluxes but not relative shapes of corresponding size distributions. In contrast, mass production efficiencies (PEmass) decreased with increasing air detrainment and were similar to those for frits, consistent with the hypothesis that bubble rafts on the seawater surface modulate emissions of larger jet droplets that dominate PMA mass production. Production efficiencies of organic matter were about three times greater than those for frits whereas organic enrichment factors integrated over size distributions were similar.
Optical and Nanoparticle Analysis of Normal and Cancer Cells by Light Transmission Spectroscopy
NASA Astrophysics Data System (ADS)
Deatsch, Alison; Sun, Nan; Johnson, Jeffery; Stack, Sharon; Szajko, John; Sander, Christopher; Rebuyon, Roland; Easton, Judah; Tanner, Carol; Ruggiero, Steven
2015-03-01
We have investigated the optical properties of human oral and ovarian cancer and normal cells. Specifically, we have measured the absolute optical extinction for intra-cellular material (lysates) in aqueous suspension. Measurements were conducted over a wavelength range of 250 to 1000 nm with 1 nm resolution using Light Transmission Spectroscopy (LTS). This provides both the absolute extinction of materials under study and, with Mie inversion, the absolute number of particles of a given diameter as a function of diameter in the range of 1 to 3000 nm. Our preliminary studies show significant differences in both the extinction and particle size distributions associated with cancer versus normal cells, which appear to be correlated with differences in the particle size distribution in the range of approximately 50 to 250 nm. Especially significant is a clearly higher density of particles at about 100 nm and smaller for normal cells. Department of Physics, Harper Cancer Research Institute, and the Office of Research at the University of Notre Dame.
Effects of normalization on quantitative traits in association test
2009-01-01
Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
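A minimal sketch of the rank-based normalization evaluated above, using the Blom variant of the rank-based inverse-normal transform on a hypothetical skewed trait (the abstract does not specify which rank-based variant was used):

```python
import numpy as np
from statistics import NormalDist

def rank_inverse_normal(x, c=3 / 8):
    """Rank-based inverse-normal (Blom) transform: map each rank r to
    the standard-normal quantile of (r - c) / (n - 2c + 1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    ranks = x.argsort().argsort() + 1          # 1-based ranks (ties ignored here)
    p = (ranks - c) / (n - 2 * c + 1)
    return np.array([NormalDist().inv_cdf(pi) for pi in p])

rng = np.random.default_rng(2)
trait = rng.exponential(scale=2.0, size=300)   # skewed quantitative trait
z = rank_inverse_normal(trait)
print(z.mean(), z.std())
```

The transform preserves the ordering of trait values exactly while forcing the marginal distribution to be approximately standard normal, which is why it behaves consistently across trait types in the simulations.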
Kassemi, Mohammad; Thompson, David
2016-09-01
An analytical Population Balance Equation model is developed and used to assess the risk of critical renal stone formation for astronauts during future space missions. The model uses the renal biochemical profile of the subject as input and predicts the steady-state size distribution of the nucleating, growing, and agglomerating calcium oxalate crystals during their transit through the kidney. The model is verified through comparison with published results of several crystallization experiments. Numerical results indicate that the model is successful in clearly distinguishing between 1-G normal and 1-G recurrent stone-former subjects based solely on their published 24-h urine biochemical profiles. Numerical case studies further show that the predicted renal calculi size distribution for a microgravity astronaut is closer to that of a recurrent stone former on Earth rather than to a normal subject in 1 G. This interestingly implies that the increase in renal stone risk level in microgravity is relatively more significant for a normal person than a stone former. However, numerical predictions still underscore that the stone-former subject carries by far the highest absolute risk of critical stone formation during space travel. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Jerousek, Richard Gregory; Colwell, Josh; Hedman, Matthew M.; French, Richard G.; Marouf, Essam A.; Esposito, Larry; Nicholson, Philip D.
2017-10-01
The Cassini Ultraviolet Imaging Spectrograph (UVIS) and Visual and Infrared Mapping Spectrometer (VIMS) have measured ring optical depths over a wide range of viewing geometries at effective wavelengths of 0.15 μm and 2.9 μm respectively. Using Voyager S and X band radio occultations and the direct inversion of the forward scattered S band signal, Marouf et al. (1982), (1983), and Zebker et al. (1985) determined the power-law size distribution parameters assuming a minimum particle radius of 1 mm. Many further studies have also constrained aspects of the particle size distribution throughout the main rings. Marouf et al. (2008a) determined the smallest ring particles to have radii of 4-5 mm using Cassini RSS data. Harbison et al. (2013) used VIMS solar occultations and also found minimum particle sizes of 4-5 mm in the C ring with q ~ 3.1, where n(a)da=Ca^(-q)da is the assumed differential power-law size distribution for particles of radius a. Recent studies of excess variance in stellar signal by Colwell et al. (2017, submitted) constrain the cross-section-weighted effective particle radius to 1 m to several meters. Using the wide range of viewing geometries available to VIMS and UVIS stellar occultations we find that normal optical depth does not strongly depend on viewing geometry at 10 km resolution (which would be the case if self-gravity wakes were present). Throughout the C ring, we fit power-law derived optical depths to those measured by UVIS, VIMS, and by the Cassini Radio Science Subsystem (RSS) at 0.94 and 3.6 cm wavelengths to constrain the four parameters of the size distribution at 10 km radial resolution. We find significant amounts of particle size sorting throughout the region, with a positive correlation between maximum particle size (amax) and normal optical depth and a mean value of amax ~ 3 m in the background C ring. This correlation is negative in the C ring plateaus. We find an inverse correlation between minimum particle radius and normal optical depth, with a mean value of amin ~ 4 mm in the background C ring and slightly larger smallest particles in the C ring plateaus.
Money-center structures in dynamic banking systems
NASA Astrophysics Data System (ADS)
Li, Shouwei; Zhang, Minghui
2016-10-01
In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.
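A Gibrat-style multiplicative growth process is the classic generator of log-normal size distributions such as the contract sizes above; the following is a minimal illustrative sketch of that mechanism, not the authors' balance-sheet model:

```python
import numpy as np

rng = np.random.default_rng(6)
# Gibrat's law of proportional effect: each period a firm's size is
# multiplied by an i.i.d. positive shock, so log-size is a sum of
# i.i.d. increments and tends to a normal by the CLT.
n_firms, n_steps = 5000, 200
sizes = np.ones(n_firms)
for _ in range(n_steps):
    sizes *= rng.lognormal(mean=0.0, sigma=0.05, size=n_firms)

log_sizes = np.log(sizes)
z = (log_sizes - log_sizes.mean()) / log_sizes.std()
skew = (z ** 3).mean()          # ~0 for a log-normal ensemble
print(skew)
```

Power-law asset distributions, by contrast, typically require an extra ingredient beyond pure proportional growth, such as entry/exit, a reflecting lower barrier, or preferential attachment in the interbank network.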
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
The magnetized sheath of a dusty plasma with grains size distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ou, Jing, E-mail: ouj@ipp.ac.cn; Gan, Chunyun; Lin, Binbin
2015-05-15
The structure of a plasma sheath in the presence of a dust grain size distribution (DGSD) is investigated in the multi-fluid framework. It is shown that the effect of dust grains with different sizes on the sheath structure is a collective behavior. The spatial distributions of the electric potential, the electron and ion densities and velocities, and the dust grain surface potential are strongly affected by the DGSD. The dynamics of dust grains with different sizes in the sheath depend not only on the DGSD but also on their radius. By comparison of the sheath structure, it is found that for the same expected value of the DGSD, the sheath length is longer in the case of a lognormal distribution than in the case of a uniform distribution. For normal and lognormal distributions, the sheath lengths are almost equal when the variance of the DGSD is small, and the difference in sheath length then increases gradually with increasing variance.
Interpretations of family size distributions: The Datura example
NASA Astrophysics Data System (ADS)
Henych, Tomáš; Holsapple, Keith A.
2018-04-01
Young asteroid families are unique sources of information about fragmentation physics and the structure of their parent bodies, since their physical properties have not changed much since their birth. Families have different properties such as age, size, taxonomy, collision severity and others, and understanding the effect of those properties on our observations of the size-frequency distribution (SFD) of family fragments can give us important insights into hypervelocity collision processes at scales we cannot achieve in our laboratories. Here we take as an example the very young Datura family, with a small 8-km parent body, and compare its size distribution to other families, with both large and small parent bodies, created by both catastrophic and cratering formation events. We conclude that the most likely explanation for its shallower size distribution compared to larger families is a more pronounced observational bias due to its small size; its size distribution is perfectly normal when its parent body size is taken into account. We also discuss some other possibilities. In addition, we study another common feature: an offset or "bump" in the distribution occurring for a few of the larger members. We hypothesize that it can be explained by a newly described regime of cratering, "spall cratering", which controls the majority of impact craters on the surfaces of small asteroids like Datura.
Extreme Mean and Its Applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.
1979-01-01
Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate even for very large samples is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data are included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
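For the standard normal case, the extreme mean defined above has a closed form: the mean of the upper-p-tail of a N(mu, sigma^2) variable is mu + sigma * phi(z_p) / p, where z_p is the (1 - p) quantile and phi is the standard normal density. A minimal sketch:

```python
import math
from statistics import NormalDist

def extreme_mean(p, mu=0.0, sigma=1.0):
    """Mean of the p-th-probability upper-tail truncated normal:
    E[X | X > x_p] = mu + sigma * phi(z_p) / p."""
    z = NormalDist().inv_cdf(1 - p)
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return mu + sigma * phi / p

print(extreme_mean(0.05))   # mean of the top 5% of a standard normal
```

This is the population quantity; the abstract's point is that the sampling distribution of its unbiased estimator remains non-normal even for large samples, so the tabulated distribution functions are needed for confidence intervals.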
Structural changes of casein micelles in a calcium gradient film.
Gebhardt, Ronald; Burghammer, Manfred; Riekel, Christian; Roth, Stephan Volkher; Müller-Buschbaum, Peter
2008-04-09
Calcium gradients are prepared by sequentially filling a micropipette with casein solutions of varying calcium concentration and spreading them on glass slides. The casein film is formed by a solution casting process, which results in a macroscopically rough surface. Microbeam grazing incidence small-angle X-ray scattering (microGISAXS) is used to investigate the lateral size distribution of three main components in casein films: casein micelles, casein mini-micelles, and micellar calcium phosphate. At length scales within the beam size the film surface is flat and detection of size distribution in a macroscopic casein gradient becomes accessible. The model used to analyze the data is based on a set of three log-normal distributed particle sizes. Increasing calcium concentration causes a decrease in casein micelle diameter while the size of casein mini-micelles increases and micellar calcium phosphate particles remain unchanged.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH 4 ) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH 4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
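The headline statistic above (the largest 5% of leaks contributing over 50% of total volume) is easy to reproduce on synthetic heavy-tailed data; the Pareto draw here is an illustrative stand-in, not the extreme-value model fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical leak sizes: a classical Pareto (shape 1.2, minimum 1)
# stands in for measured emission rates.
leaks = rng.pareto(a=1.2, size=10_000) + 1.0

sorted_leaks = np.sort(leaks)[::-1]
k = int(0.05 * leaks.size)                 # the largest 5% of leaks
top_share = sorted_leaks[:k].sum() / leaks.sum()
print(top_share)
```

For a thin-tailed (e.g. normal) population the same calculation gives a top-5% share barely above 5%, which is why the tail model, not the body of the distribution, dominates emission totals and their uncertainty.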
Nordhei, Camilla; Ramstad, Astrid Lund; Nicholson, David G
2008-02-21
Nanophase cobalt, nickel and zinc ferrites, in which the crystallites are in the size range 4-25 nm, were synthesised by coprecipitation and subsequent annealing. X-Ray absorption spectroscopy using synchrotron radiation (supported by X-ray powder diffraction) was used to study the effects of particle size on the distributions of the metal atoms over the tetrahedral and octahedral sites of the spinel structure. Deviations from the bulk structure were found which are attributed to the significant influence of the surface on very small particles. Like the bulk material, nickel ferrite is an inverse spinel in the nanoregime, although the population of metals on the octahedral sites increases with decreasing particle size. Cobalt ferrite and zinc ferrite take the inverse and normal forms of the spinel structure respectively, but within the nanoregime both systems show similar trends in being partially inverted. Further, in zinc ferrite, unlike the normal bulk structure, the nanophase system involves mixed coordinations of zinc(ii) and iron(iii) consistent with increasing partial inversion with size.
Potential source identification for aerosol concentrations over a site in Northwestern India
NASA Astrophysics Data System (ADS)
Payra, Swagata; Kumar, Pramod; Verma, Sunita; Prakash, Divya; Soni, Manish
2016-03-01
The collocated measurements of aerosol size distribution (ASD) and aerosol optical thickness (AOT) are analyzed simultaneously using a Grimm aerosol spectrometer and a MICROTOP II Sunphotometer over Jaipur, the capital of Rajasthan in India. The contrasting temperature characteristics of the winter and summer seasons of 2011 are investigated in the present study. The total aerosol number concentration (TANC, 0.3-20 μm) during the winter season was higher than in summer and was dominated by the fine aerosol number concentration (FANC, < 2 μm). Particles smaller than 0.8 μm (aerodynamic size) constitute ~ 99% of all particles in winter and ~ 90% in summer. However, particles greater than 2 μm contribute ~ 3% and ~ 0.2% in the summer and winter seasons, respectively. The AOT values are nearly similar during summer and winter, but the corresponding Angstrom Exponent (AE) values are lower in summer than in winter. In this work, Potential Source Contribution Function (PSCF) analysis is applied to identify the locations of sources that influenced aerosol concentrations over the study area in the two seasons. PSCF analysis shows that dust particles from the Thar Desert contribute significantly to the coarse aerosol number concentration (CANC). Higher PSCF values north of Jaipur indicate the industrial areas of northern India as the likely sources of fine particles. The variation in the aerosol size distribution between the two seasons is clearly reflected in the log-normal size distribution curves, which reveal that particles smaller than 0.8 μm are the key contributors to the higher ANC in winter.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types that differ in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
NASA Astrophysics Data System (ADS)
Zuliani, Jocelyn E.; Tong, Shitang; Kirk, Donald W.; Jia, Charles Q.
2015-12-01
Electrochemical double-layer capacitors (EDLCs) use physical ion adsorption in the capacitive electrical double layer of high specific surface area (SSA) materials to store electrical energy. Previous work shows that the SSA-normalized capacitance increases when pore diameters are less than 1 nm. However, there still remains uncertainty about the charge storage mechanism since the enhanced SSA-normalized capacitance is not observed in all microporous materials. In previous studies, the total specific surface area and the chemical composition of the electrode materials were not controlled. The current work is the first reported study that systematically compares the performance of activated carbon prepared from the same raw material, with similar chemical composition and specific surface area, but different pore size distributions. Preparing samples with similar SSAs but different pore sizes is not straightforward, since increasing the pore diameter decreases the SSA. This study observes that the microporous activated carbon has a higher SSA-normalized capacitance, 14.1 μF cm-2, compared to the mesoporous material, 12.4 μF cm-2. However, this enhanced SSA-normalized capacitance is only observed above a threshold operating voltage. Therefore, it can be concluded that a minimum applied voltage is required to induce ion adsorption in these sub-nanometer micropores, which increases the capacitance.
Investigation of the Specht density estimator
NASA Technical Reports Server (NTRS)
Speed, F. M.; Rydl, L. M.
1971-01-01
The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
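Specht's estimator is closely related to Parzen-window density estimation: an equal-weight Gaussian kernel centered at each sample, controlled by a smoothing parameter. The sketch below is that general form (an assumption about the estimator's structure, not a reconstruction of the original IBM 360/44 program), showing the two factors the study varies, smoothing parameter and sample size:

```python
import numpy as np

def specht_density(x_eval, samples, sigma):
    """Parzen/Specht-style density estimate: average of Gaussian
    kernels of width sigma centered at the training samples."""
    x_eval = np.atleast_1d(x_eval)[:, None]
    k = np.exp(-0.5 * ((x_eval - samples[None, :]) / sigma) ** 2)
    return k.sum(axis=1) / (samples.size * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
samples = rng.normal(0.0, 1.0, size=2000)
grid = np.linspace(-4, 4, 81)
est = specht_density(grid, samples, sigma=0.3)
print(est[40])   # estimate near x = 0; true N(0,1) density there is ~0.399
```

Storage and speed scale with the product of evaluation points and sample size, which is why those factors mattered on 1970s hardware.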
Elemental composition and size distribution of particulates in Cleveland, Ohio
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.
1975-01-01
Measurements were made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) were used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
NASA Astrophysics Data System (ADS)
Baitimirova, M.; Osite, A.; Katkevics, J.; Viksna, A.
2012-08-01
Burning of candles generates particulate matter of fine dimensions that degrades indoor air quality and may therefore harm human health. In this study, solid aerosol particles from the burning of candles of different composition and from kerosene combustion were collected in a closed laboratory system. The present work describes particulate matter collection for structure analysis and the relationship between the source and the size distribution of the particulate matter. The formation mechanism of the particulate matter and its tendency to agglomerate are also described. Particles obtained from kerosene combustion have a normal size distribution, whereas particles generated from the burning of stearin candles have a distribution shifted towards the finer particle size range. If stearin is added to a paraffin candle, the particle size distribution likewise shifts towards finer particles. Particles obtained from kerosene combustion show a tendency to form agglomerates within a short time, while particles obtained from the burning of candles of different composition do not. Particles from candles and kerosene combustion are Aitken- and accumulation-mode particles.
Size distribution of radon daughter particles in uranium mine atmospheres.
George, A C; Hinchliffe, L; Sladowski, R
1975-06-01
The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level and particle concentration. The size distributions found for radon daughters were log normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines.
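Given the reported mean activity median diameter (AMD) and geometric standard deviation (GSD), the log-normal size distribution can be reconstructed directly, since the log-normal's median and GSD map to the mean and standard deviation on the log scale:

```python
import numpy as np

# Reconstruct the log-normal particle-size distribution from the
# reported mean activity median diameter and geometric standard
# deviation (values quoted in the abstract, diameters in micrometers).
amd, gsd = 0.17, 2.7
mu, sigma = np.log(amd), np.log(gsd)

rng = np.random.default_rng(5)
d = rng.lognormal(mu, sigma, size=100_000)

median = np.median(d)                 # recovers the AMD
gsd_hat = np.exp(np.log(d).std())     # recovers the GSD
print(median, gsd_hat)
```

The round trip (parameters to samples and back) is a quick sanity check that the (AMD, GSD) parametrization and the log-scale (mu, sigma) parametrization describe the same distribution.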
High Temperature Silicon Carbide (SiC) Traction Motor Drive
2011-08-09
UNCLASSIFIED. Distribution Statement A. Approved for public release; distribution is unlimited. ...be modular and conveniently distributed. Small component size and operation with high-temperature liquid coolant are essential factors in the... these densities, power modules capable of high-temperature operation were developed using SiC normally-off JFETs. This paper will discuss the unique
NASA Technical Reports Server (NTRS)
Craven, P. D.; Gary, G. A.
1972-01-01
The Mie theory of light scattering by spheres was used to calculate the scattered intensity functions resulting from single scattering in a polydispersed collection of spheres. The distribution used behaves according to the inverse fourth power law; graphs and tables for the angular dependence of the intensity and polarization for this law are given. The effects of the particle size range and the integration increment are investigated.
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
NASA Astrophysics Data System (ADS)
Pu, Yang; Chen, Jun; Wang, Wubao
2014-02-01
The scattering coefficient, μs, the anisotropy factor, g, the scattering phase function, p(θ), and the angular dependence of the scattering intensity distributions of human cancerous and normal prostate tissues were systematically investigated as a function of wavelength, scattering angle and scattering particle size using Mie theory and experimental parameters. Matlab-based codes implementing Mie theory for both spherical and cylindrical models were developed and applied to study light propagation and the key scattering properties of prostate tissues. The optical and structural parameters of tissue, such as the refractive index of cytoplasm, the size of nuclei, and the diameter of nucleoli, for cancerous and normal human prostate tissues, obtained from previous biological, biomedical and bio-optic studies, were used for the Mie theory simulation and calculation. The wavelength dependence of the scattering coefficient and anisotropy factor was investigated over the wide spectral range from 300 nm to 1200 nm. The scattering particle size dependence of μs, g, and the scattering angular distributions was studied for cancerous and normal prostate tissues. The results show that cancerous prostate tissue, which contains larger scattering particles, contributes more to forward scattering than normal prostate tissue. In addition to the conventional simulation model that approximates the scattering particle as a sphere, the cylinder model, which is more suitable for fiber-like tissue frame components such as collagen and elastin, was used to develop a computation code to study the angular dependence of scattering in prostate tissues. To the best of our knowledge, this is the first study to deal with both spherical and cylindrical scattering particles in prostate tissues.
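The paper's Mie computations are not reproduced here, but the anisotropy factor g it discusses is often summarized with the Henyey-Greenstein phase function, a standard one-parameter stand-in for tissue scattering; the g value below is an illustrative assumption, not a value from the study:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function p(theta), normalized so that its
    integral over all solid angles equals 1; g is the mean cosine."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

g = 0.94  # strongly forward-peaked, a typical order of magnitude for soft tissue
theta = np.linspace(0.0, np.pi, 200001)
mu = np.cos(theta)
p = henyey_greenstein(mu, g)

# Integrate over solid angle, dOmega = 2*pi*sin(theta) dtheta (the integrand
# vanishes at both endpoints, so a plain Riemann sum is accurate here).
dtheta = theta[1] - theta[0]
weight = 2.0 * np.pi * np.sin(theta) * dtheta
total = np.sum(p * weight)          # should be ~1 (normalization)
mean_cos = np.sum(mu * p * weight)  # should recover g (anisotropy factor)
print(round(total, 4), round(mean_cos, 4))
```

Increasing g pushes the phase function toward θ = 0, which is the forward-scattering enhancement the abstract attributes to the larger scatterers in cancerous tissue.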
Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?
Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.
2013-01-01
Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140
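The discretised-normal fit that the study found best for most species can be sketched by maximum likelihood; the litter-size counts and starting values below are hypothetical illustrations, not the paper's data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical counts: how many litters of each size (1..8) were observed
sizes = np.arange(1, 9)
counts = np.array([2, 8, 25, 40, 31, 12, 3, 1])

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Discretised normal: the mass at integer k is the normal probability of
    # (k-0.5, k+0.5], renormalized over the observed support 1..8.
    p = norm.cdf(sizes + 0.5, mu, sigma) - norm.cdf(sizes - 0.5, mu, sigma)
    p /= p.sum()
    return -np.sum(counts * np.log(p))

res = minimize(neg_log_lik, x0=[4.0, 1.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(round(mu_hat, 2), round(sigma_hat, 2))
```

Small fitted sigma relative to mu is exactly the "variation among litter sizes was often small" situation where the discretised normal tends to win.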
Wan, Gwo-Hwa; Wu, Chieh-Liang; Chen, Yi-Fang; Huang, Sheng-Hsiu; Wang, Yu-Ling; Chen, Chun-Wan
2014-01-01
Humans produce exhaled breath particles (EBPs) during various breathing activities, such as normal breathing, coughing, talking, and sneezing. An airborne transmission risk exists when EBPs carry attached pathogens. Until recently, few investigations had evaluated the size and concentration distributions of EBPs from mechanically ventilated patients with different ventilation mode settings. This study thus broke new ground by not only evaluating the size and concentration distributions of EBPs in mechanically ventilated patients, but also investigating the relationship between EBP level and positive end-expiratory pressure (PEEP), tidal volume, and pneumonia. This investigation recruited mechanically ventilated patients, with and without pneumonia, aged 20 years and above, from the respiratory intensive care unit of a medical center. Concentration distributions of EBPs from mechanically ventilated patients were analyzed with an optical particle analyzer. This study finds that EBP concentrations from mechanically ventilated patients during normal breathing were in the range 0.47-2,554.04 particles/breath (0.001-4.644 particles/mL). EBP concentrations did not differ significantly between the volume control and pressure control modes of the ventilation settings in the mechanically ventilated patients. The patient EBPs were all below 5 µm in size, and 80% of them ranged from 0.3 to 1.0 µm. EBP concentrations in patients with high PEEP (> 5 cmH₂O) clearly exceeded those in patients with low PEEP (≤ 5 cmH₂O). Additionally, a significant negative association existed between pneumonia duration and EBP concentration. However, tidal volume was not related to EBP concentration.
Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales
NASA Astrophysics Data System (ADS)
Laherrère, J.; Sornette, D.
1998-04-01
To account quantitatively for the many reported "natural" fat tail distributions in Nature and economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among them economy: only two adjustable parameters, each with a clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the South Pole) temperature variations over the last 400,000 years, of the Raup-Sepkoski kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
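The rank-ordering fit of a stretched exponential survival function S(x) = exp[-(x/x0)^c] can be sketched as follows; the sample and the parameter values (c = 0.7, x0 = 2) are synthetic, not from the datasets analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
c_true, x0_true = 0.7, 2.0
n = 20000
# Inverse-transform sampling: X = x0 * (-ln U)^(1/c) has survival exp(-(x/x0)^c)
x = x0_true * (-np.log(rng.uniform(size=n))) ** (1.0 / c_true)

# Rank-ordering (Zipf-plot style) estimate of the empirical survival function
xs = np.sort(x)
surv = 1.0 - np.arange(1, n + 1) / (n + 1.0)

# The two parameters follow from a straight-line fit:
# ln(-ln S) = c * ln x - c * ln x0
mask = (surv > 0.01) & (surv < 0.99)   # drop the noisy extreme ranks
slope, intercept = np.polyfit(np.log(xs[mask]), np.log(-np.log(surv[mask])), 1)
c_hat = slope
x0_hat = np.exp(-intercept / slope)
print(round(c_hat, 2), round(x0_hat, 2))
```

On such a plot a pure exponential gives slope c = 1 and a fat-tailed stretched exponential gives c < 1, which is how the two adjustable parameters acquire their interpretation.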
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu; Donnelly, Eric D.; Strauss, Jonathan B.
Purpose: To evaluate high-dose-rate (HDR) vaginal cuff brachytherapy (VCBT) in the treatment of endometrial cancer in a cylindrical target volume with either a varied or a constant cancer cell distribution, using the linear quadratic (LQ) model. Methods: A Monte Carlo (MC) technique was used to calculate the 3D dose distribution of HDR VCBT over a variety of cylinder diameters and treatment lengths. A treatment planning system (TPS) was used to make plans for the various cylinder diameters, treatment lengths, and prescriptions using the clinical protocol. The dwell times obtained from the TPS were fed into MC. The LQ model was used to evaluate the therapeutic outcome of two brachytherapy regimens prescribed either at 0.5 cm depth (5.5 Gy × 4 fractions) or at the vaginal mucosal surface (8.8 Gy × 4 fractions) for the treatment of endometrial cancer. An experimentally determined endometrial cancer cell distribution, which varied with depth and resembled a half-Gaussian distribution, was used in the radiobiology modeling. The equivalent uniform dose (EUD) to cancer cells was calculated for each treatment scenario. The therapeutic ratio (TR) was defined by comparing VCBT with a uniform-dose radiotherapy plan in terms of normal cell survival at the same level of cancer cell killing. Calculations of clinical impact were run twice, assuming two different types of cancer cell density distributions in the cylindrical target volume: (1) a half-Gaussian or (2) a uniform distribution. Results: EUDs were weakly dependent on cylinder size, treatment length, and prescription depth, but strongly dependent on the cancer cell distribution. TRs were strongly dependent on the cylinder size, treatment length, type of cancer cell distribution, and the sensitivity of normal tissue.
With a half-Gaussian distribution of cancer cells, which peaks at the vaginal mucosa, the EUDs were between 6.9 Gy × 4 and 7.8 Gy × 4, and the TRs ranged from (5.0)⁴ to (13.4)⁴ for radiosensitive normal tissue, depending on the cylinder size, treatment length, prescription depth, and dose. However, for a uniform cancer cell distribution, the EUDs were between 6.3 Gy × 4 and 7.1 Gy × 4, and the TRs were between (1.4)⁴ and (1.7)⁴. For uniformly interspersed cancer and radio-resistant normal cells, the TRs were less than 1. The two VCBT prescription regimens were found to be equivalent in terms of EUDs and TRs. Conclusions: HDR VCBT strongly favors a cylindrical target volume with a cancer cell distribution following its dosimetric trend. Assuming a half-Gaussian distribution of cancer cells, HDR VCBT provides a considerable radiobiological advantage over external beam radiotherapy (EBRT) in terms of sparing more normal tissue while maintaining the same level of cancer cell killing. But for a uniform cancer cell distribution and radio-resistant normal tissue, the radiobiological outcome of HDR VCBT does not show an advantage over EBRT. This study strongly suggests that radiation therapy design should consider the cancer cell distribution inside the target volume in addition to the shape of the target.
Grain-size variations on a longitudinal dune and a barchan dune
NASA Astrophysics Data System (ADS)
Watson, Andrew
1986-01-01
The grain-size characteristics of the sand upon two dunes—a 40 m high longitudinal dune in the central Namib Desert and a 6.0 m high barchan in the Jafurah sand sea of Saudi Arabia—vary with position on the dunes. On the longitudinal dune, median grain size decreases, sorting improves and the grain-size distributions are less skewed and closer to normal toward the crest. Though sand at the windward toe is distinct, elsewhere on the dune the changes in grain-size characteristics are gradual. An abrupt change in grain size and sorting near the crest—as described by Bagnold (1941, pp. 226-229)—is not well represented on this dune. Coarse grains remain as a lag on concave slope units and small particles are winnowed from the sand on the steepest windward slopes near the crest. Avalanching down slipfaces at the crest acts only as a supplementary grading mechanism. On the barchan dune median grain size also decreases near the crest, but sorting becomes poorer, though the grain-size distributions are more symmetric and closer to normal. The dune profile is a Gaussian curve with a broad convex zone at the apex upon which topset beds had accreted prior to sampling. Grain size increases and sorting improves down the dune's slipface. However, this grading mechanism does not influence sand on the whole dune because variations in wind regime bring about different modes of dune accretion. On both dunes, height and morphology appear to influence significantly the grain-size characteristics.
Influence of vascular normalization on interstitial flow and delivery of liposomes in tumors
NASA Astrophysics Data System (ADS)
Ozturk, Deniz; Yonucu, Sirin; Yilmaz, Defne; Burcin Unlu, Mehmet
2015-02-01
Elevated interstitial fluid pressure is one of the barriers of drug delivery in solid tumors. Recent studies have shown that normalization of tumor vasculature by anti-angiogenic factors may improve the delivery of conventional cytotoxic drugs, possibly by increasing blood flow, decreasing interstitial fluid pressure, and enhancing the convective transvascular transport of drug molecules. Delivery of large therapeutic agents such as nanoparticles and liposomes might also benefit from normalization therapy since their transport depends primarily on convection. In this study, a mathematical model is presented to provide supporting evidence that normalization therapy may improve the delivery of 100 nm liposomes into solid tumors, both by increasing the total drug extravasation and by providing a more homogeneous drug distribution within the tumor. However, these beneficial effects largely depend on tumor size and are stronger for tumors within a certain size range. It is shown that this size effect may persist under different microenvironmental conditions and for tumors with irregular margins or heterogeneous blood supply.
Casein micelles: size distribution in milks from individual cows.
de Kruif, C G Kees; Huppertz, Thom
2012-05-09
The size distribution and protein composition of casein micelles in the milk of Holstein-Friesian cows was determined as a function of stage and number of lactations. Protein composition did not vary significantly between the milks of different cows or as a function of lactation stage. Differences in the size and polydispersity of the casein micelles were observed between the milks of different cows, but not as a function of stage of milking or stage of lactation and not even over successive lactations periods. Modal radii varied from 55 to 70 nm, whereas hydrodynamic radii at a scattering angle of 73° (Q² = 350 μm⁻²) varied from 77 to 115 nm and polydispersity varied from 0.27 to 0.41, in a log-normal distribution. Casein micelle size in the milks of individual cows was not correlated with age, milk production, or lactation stage of the cows or fat or protein content of the milk.
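A sketch of fitting a two-parameter log-normal to particle radii, with synthetic data standing in for the light-scattering measurements (the shape and scale values are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)
# Hypothetical hydrodynamic radii (nm), log-normally distributed around ~90 nm
sigma_true, scale_true = 0.3, 90.0
radii = lognorm.rvs(sigma_true, scale=scale_true, size=5000, random_state=rng)

# Fit with the location pinned at zero, i.e. a pure two-parameter log-normal
shape, loc, scale = lognorm.fit(radii, floc=0)

# For a log-normal the modal radius sits below the scale (median):
mode = scale * np.exp(-shape**2)
print(round(shape, 2), round(scale, 1), round(mode, 1))
```

The gap between mode and median illustrated here is why the abstract can report modal radii (55-70 nm) well below the hydrodynamic radii while still describing a single log-normal population.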
Effects of composition of grains of debris flow on its impact force
NASA Astrophysics Data System (ADS)
Tang, jinbo; Hu, Kaiheng; Cui, Peng
2017-04-01
Debris flows are composed of solid material with a broad size distribution, from fine sand to boulders. The impact force imposed by debris flows is a very important issue for the design of protection engineering and is strongly influenced by their grain composition. However, this issue has not been studied in depth, and the effects of grain composition have not been considered in calculations of the impact force. In the present study, small-scale flume experiments with five grain compositions were carried out to study the effect of the grain composition of a debris flow on its impact force. The results show that the impact force of a debris flow increases with grain size, and that the hydrodynamic pressure of the debris flow can be calibrated with the normalization parameter dmax/d50, in which dmax is the maximum grain size and d50 is the median grain size. Furthermore, a log-logistic statistical distribution can describe the distribution of the magnitude of the impact force, with both the mean and the variance of the distribution increasing with grain size. The distribution proposed in the present study could be used for the reliability analysis of structures impacted by debris flows.
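The log-logistic fit mentioned above can be sketched with SciPy, where the distribution is implemented under the name `fisk`; the force values below are synthetic stand-ins, not the flume data:

```python
import numpy as np
from scipy.stats import fisk  # SciPy's name for the log-logistic distribution

rng = np.random.default_rng(2)
# Hypothetical peak impact-force readings (kPa) from repeated surges
c_true, scale_true = 4.0, 30.0
forces = fisk.rvs(c_true, scale=scale_true, size=3000, random_state=rng)

# Fit with location fixed at zero (forces are strictly positive)
c_hat, loc, scale_hat = fisk.fit(forces, floc=0)

# For a log-logistic, the median equals the scale parameter
median = fisk.median(c_hat, loc=0, scale=scale_hat)
print(round(c_hat, 1), round(scale_hat, 1), round(median, 1))
```

A coarser synthetic mixture would be modeled here by raising the scale (shifting the whole force distribution up) and lowering c (fattening the tail), matching the reported growth of mean and variance with grain size.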
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but share the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
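A small numerical illustration of the theorem's setting: independent normal observations that share a mean but are not identically distributed (here, sample blocks with different known variances, an assumption made for simplicity). The MLE of the common mean is the inverse-variance weighted average, with standard error given by the total Fisher information:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_true = 5.0
# Non-identically distributed blocks: same mean, different known variances
variances = np.array([0.5, 1.0, 4.0])
blocks = [rng.normal(mu_true, np.sqrt(v), size=2000) for v in variances]

# MLE of the common mean: weight each block mean by n_block / variance
w = np.array([len(b) / v for b, v in zip(blocks, variances)])
mu_hat = np.sum([wi * b.mean() for wi, b in zip(w, blocks)]) / w.sum()

# Asymptotic standard error: 1 / sqrt(total Fisher information)
se = np.sqrt(1.0 / w.sum())
print(round(mu_hat, 3), round(se, 4))
```

The estimate lands within a few standard errors of the true mean, consistent with the asymptotic normality and efficiency the theorem guarantees.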
Spatial organization of surface nanobubbles and its implications in their formation process.
Lhuissier, Henri; Lohse, Detlef; Zhang, Xuehua
2014-02-21
We study the size and spatial distribution of surface nanobubbles formed by the solvent exchange method to gain insight into the mechanism of their formation. The analysis of Atomic Force Microscopy (AFM) images of nanobubbles formed on a hydrophobic surface reveals that the nanobubbles are not randomly located, which we attribute to the role of the history of nucleation during the formation. Moreover, the size of each nanobubble is found to be strongly correlated with the area of the bubble-depleted zone around it. The precise correlation suggests that the nanobubbles grow by diffusion of the gas from the bulk rather than by diffusion of the gas adsorbed on the surface. Lastly, the size distribution of the nanobubbles is found to be well described by a log-normal distribution.
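The departure from spatial randomness can be quantified with the standard Clark-Evans ratio (a generic point-pattern statistic, not a method named in the record): the observed mean nearest-neighbour distance divided by the value 1/(2√λ) expected for a Poisson pattern of the same intensity λ:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)

def clark_evans(points, area):
    """Ratio of observed mean nearest-neighbour distance to the complete
    spatial randomness expectation; ~1 random, <1 clustered, >1 ordered."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # k=2: nearest neighbour other than self
    lam = len(points) / area
    return d[:, 1].mean() / (0.5 / np.sqrt(lam))

# A random pattern on the unit square should score close to 1
random_pts = rng.uniform(size=(500, 2))
r_random = clark_evans(random_pts, 1.0)
print(round(r_random, 2))
```

Bubble-depleted zones of the kind described in the abstract would push nanobubble centres apart, driving this ratio above 1; edge effects bias it slightly upward too, so production analyses usually apply a boundary correction.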
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
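A toy version of such a stochastic inversion, using SciPy's differential evolution in place of the paper's HEOA and a simple smooth kernel in place of the Mie/Fraunhofer forward model; all functional forms and numerical values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0.01, 0.5, 40)       # detection angles (rad)
alpha = np.linspace(46.0, 150.0, 300)    # size-parameter grid (paper's range)

def forward(mu, sigma):
    """Toy forward model: angular intensity as a kernel-weighted integral of a
    normal size distribution (a smooth stand-in for the scattering kernel)."""
    n = np.exp(-0.5 * ((alpha - mu) / sigma) ** 2)
    n /= n.sum()
    kernel = 1.0 / (1.0 + np.outer(theta, alpha) ** 2) ** 2
    return kernel @ n

rng = np.random.default_rng(5)
mu_true, sigma_true = 90.0, 12.0
measured = forward(mu_true, sigma_true) * (1.0 + 0.01 * rng.normal(size=theta.size))

def cost(p):
    # Least squares on log-intensity, so all angles carry comparable weight
    return np.sum((np.log(forward(p[0], p[1])) - np.log(measured)) ** 2)

res = differential_evolution(cost, bounds=[(46.0, 150.0), (2.0, 40.0)], seed=0)
print(np.round(res.x, 1))
```

The stochastic optimizer recovers the distribution parameters from the noisy intensity pattern without needing a good starting guess, which is the practical appeal of evolutionary schemes for this ill-conditioned inverse problem.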
Šmarda, Petr; Bureš, Petr; Horová, Lucie
2007-01-01
Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is taken as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m²), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fit by the extreme-value (Gumbel) distribution, with rare occurrences of extremely large genomes (positively skewed), similar to the log-normal distribution of angiosperms as a whole. Even plants from the same plot frequently varied considerably in genome size, and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968
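Choosing a Gumbel over a competing model can be done with an information criterion; below is a sketch comparing Gumbel and normal fits by AIC on synthetic, positively skewed "genome size" data (the sample and its parameters are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic relative genome sizes: positively skewed, Gumbel-distributed
data = stats.gumbel_r.rvs(loc=1.0, scale=0.02, size=500, random_state=rng)

def aic(dist, sample):
    """Akaike information criterion for a fitted scipy.stats distribution."""
    params = dist.fit(sample)
    loglik = np.sum(dist.logpdf(sample, *params))
    return 2 * len(params) - 2 * loglik

aic_gumbel = aic(stats.gumbel_r, data)
aic_norm = aic(stats.norm, data)
print(aic_gumbel < aic_norm)
```

The Gumbel's built-in right skew captures the rare, extremely large genomes that a symmetric normal cannot, which is why it wins the comparison on skewed data like these.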
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Nanoparticle Distributions in Cancer and other Cells from Light Transmission Spectroscopy
NASA Astrophysics Data System (ADS)
Deatsch, Alison; Sun, Nan; Johnson, Jeffery; Stack, Sharon; Tanner, Carol; Ruggiero, Steven
We have measured the optical properties of whole cells and lysates using light transmission spectroscopy (LTS). LTS provides both the optical extinction coefficient in the wavelength range from 220 to 1100 nm and (by spectral inversion using a Mie model) the particle distribution density in the size range from 1 to 3000 nm. Our current work involves whole cells and lysates of cultured human oral cells and other plant and animal cells. We have found systematic differences in the optical extinction between cancer and normal whole cells and lysates, which translate to different particle size distributions (PSDs) for these materials. We have also found specific power-law dependences of particle density with particle diameter for cell lysates. This suggests a universality of the packing distribution in cells that can be compared to ideal Apollonian packing, with the cell modeled as a fractal body comprised of spheres on all size scales.
Scaling of size distributions of C60 and C70 fullerene surface islands
NASA Astrophysics Data System (ADS)
Dubrovskii, V. G.; Berdnikov, Y.; Olyanich, D. A.; Mararov, V. V.; Utas, T. V.; Zotov, A. V.; Saranin, A. A.
2017-06-01
We present experimental data and a theoretical analysis for the size distributions of C60 and C70 surface islands deposited onto In-modified Si(111)√3 × √3-Au surface under different conditions. We show that both fullerene islands feature an analytic Vicsek-Family scaling shape where the scaled size distributions are given by a power law times an incomplete beta-function with the required normalization. The power exponent in this distribution corresponds to the fractal shape of two-dimensional islands, confirmed by the experimentally observed morphologies. Quite interestingly, we do not see any significant difference between C60 and C70 fullerenes in terms of either scaling parameters or temperature dependence of the diffusion constants. In particular, we deduce the activation energy for surface diffusion of ED = 140 ± 10 meV for both types of fullerenes.
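The scaling idea, that islands grown under different conditions share a single distribution of the rescaled size s/⟨s⟩, can be illustrated with a collapse test on synthetic samples (a gamma shape stands in here for the actual power-law-times-incomplete-beta form):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two synthetic island-size samples: same scaled shape, different mean sizes,
# mimicking growth at two deposition conditions.
s1 = stats.gamma.rvs(a=2.5, scale=10.0, size=4000, random_state=rng)
s2 = stats.gamma.rvs(a=2.5, scale=35.0, size=4000, random_state=rng)

# Scaling ansatz: the distribution depends on s only through x = s / <s>
x1, x2 = s1 / s1.mean(), s2 / s2.mean()

# A two-sample Kolmogorov-Smirnov statistic near zero indicates the two
# rescaled samples collapse onto one common distribution.
ks = stats.ks_2samp(x1, x2)
print(round(ks.statistic, 3))
```

A failed collapse (large KS statistic after rescaling) would instead signal that the two growth conditions produce genuinely different scaled shapes, e.g. a change of island morphology.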
Enhanced centrifuge-based approach to powder characterization
NASA Astrophysics Data System (ADS)
Thomas, Myles Calvin
Many types of manufacturing processes involve powders and are affected by powder behavior. It is highly desirable to implement tools that allow the behavior of bulk powder to be predicted based on the behavior of only small quantities of powder. Such descriptions can enable engineers to significantly improve the performance of powder processing and formulation steps. In this work, an enhancement of the centrifuge technique is proposed as a means of powder characterization. This enhanced method uses specially designed substrates with hemispherical indentations within the centrifuge. The method was tested using simulations of the momentum balance at the substrate surface. Initial simulations were performed with an ideal powder containing smooth, spherical particles distributed on substrates designed with indentations. The van der Waals adhesion between the powder, whose size distribution was based on an experimentally-determined distribution from a commercial silica powder, and the indentations was calculated and compared to the removal force created in the centrifuge. This provided a way to relate the powder size distribution to the rotational speed required for particle removal for various indentation sizes. Due to the distinct form of the data from these simulations, the cumulative size distribution of the powder and the Hamaker constant for the system could be extracted. After establishing adhesion force characterization for an ideal powder, the same proof-of-concept procedure was followed for a more realistic system with a simulated rough powder modeled as spheres with sinusoidal protrusions and intrusions around the surface. From these simulations, it was discovered that an equivalent powder of smooth spherical particles could be used to describe the adhesion behavior of the rough spherical powder by establishing a size-dependent 'effective' Hamaker constant distribution. 
This development made it possible to describe the surface roughness effects of the entire powder through one adjustable parameter that was linked to the size distribution. It is important to note that when the engineered substrates (hemispherical indentations) were applied, it was possible to extract both powder size distribution and effective Hamaker constant information from the simulated centrifuge adhesion experiments. Experimental validation of the simulated technique was performed with a silica powder dispersed onto a stainless steel substrate with no engineered surface features. Though the proof-of-concept work was accomplished for indented substrates, non-ideal, relatively flat (non-indented) substrates were used experimentally to demonstrate that the technique can be extended to this case. The experimental data were then used within the newly developed simulation procedure to show its application to real systems. In the absence of engineered features on the substrates, it was necessary to specify the size distribution of the powder as an input to the simulator. With this information, it was possible to extract an effective Hamaker constant distribution, and when the effective Hamaker constant distribution was applied in conjunction with the size distribution, the observed adhesion force distribution was described precisely. An equation relating the normalized effective Hamaker constants (normalized by the particle diameter) to the particle diameter was formulated from the effective Hamaker constant distribution. It was shown, by application of the equation, that the adhesion behavior of an ideal (smooth, spherical) powder with an experimentally-validated, effective Hamaker constant distribution could be used to effectively represent that of a realistic powder. 
Thus, the roughness effects and size variations of a real powder are captured in this one distributed parameter (effective Hamaker constant distribution) which provides a substantial improvement to the existing technique. This can lead to better optimization of powder processing by enhancing powder behavior models.
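The force balance at the heart of the centrifuge technique can be sketched directly: a spherical particle detaches when the centrifugal force exceeds the sphere-plate van der Waals adhesion F = AR/(6 z0²). All numerical values below (Hamaker constant, separation, density, rotor arm) are illustrative assumptions:

```python
import numpy as np

# Illustrative material and geometry assumptions
A = 6.5e-20    # Hamaker constant (J), silica-like order of magnitude
z0 = 4e-10     # minimum particle-surface separation (m)
rho = 2000.0   # particle density (kg/m^3)
r_c = 0.10     # distance from the rotation axis (m)

def critical_rpm(radius):
    """Rotational speed at which the centrifugal force on a smooth sphere
    equals its sphere-plate van der Waals adhesion F = A*R/(6*z0^2)."""
    f_adh = A * radius / (6.0 * z0 ** 2)
    mass = (4.0 / 3.0) * np.pi * radius ** 3 * rho
    omega = np.sqrt(f_adh / (mass * r_c))   # rad/s
    return omega * 60.0 / (2.0 * np.pi)     # convert to rpm

for r in [0.5e-6, 2e-6, 10e-6]:
    print(f"R = {r * 1e6:4.1f} um -> {critical_rpm(r):10.0f} rpm")
```

Because adhesion grows linearly with R while mass grows as R³, the critical speed scales as 1/R: the smallest particles are the hardest to spin off, which is why plotting removal fraction against speed recovers the cumulative size distribution.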
NASA Astrophysics Data System (ADS)
Jumelet, Julien; David, Christine; Bekki, Slimane; Keckhut, Philippe
2009-01-01
The determination of stratospheric particle microphysical properties from multiwavelength lidar, including Rayleigh and/or Raman detection, has been widely investigated. However, most lidar systems are uniwavelength, operating at 532 nm. Although the information content of such lidar data is too limited to allow the retrieval of the full size distribution, the coupling of two or more uniwavelength lidar measurements probing the same moving air parcel may provide some meaningful size information. Within the ORACLE-O3 IPY project, the coordination of several ground-based lidars and the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) space-borne lidar is planned during measurement campaigns called MATCH-PSC (Polar Stratospheric Clouds). While probing the same moving air masses, the evolution of the measured backscatter coefficient (BC) should reflect the variation of particle microphysical properties. A sensitivity study of 532 nm lidar particle backscatter to variations of particle size distribution parameters is carried out. For simplicity, the particles are assumed to be spherical (liquid) particles and the size distribution is represented with a unimodal log-normal distribution. Each of the four microphysical parameters (i.e. the log-normal size distribution parameters and the refractive index) is analysed separately, while the other three are held at constant reference values. Overall, the BC behaviour is not affected by the initial values taken as references. The total concentration (N0) is the parameter to which BC is least sensitive, whereas it is most sensitive to the refractive index (m). A 2% variation of m induces a 15% variation of the lidar BC, while the uncertainty on the BC retrieval can also reach 15%. This result underlines the importance of having both an accurate lidar inversion method and a good knowledge of the temperature for size distribution retrieval techniques. 
The standard deviation (σ) is the second parameter to which BC is most sensitive. Yet, the impact of m and σ on BC variations is limited by the realistic range of their variations. The mean radius (rm) of the size distribution is thus the key parameter for BC, as it can vary several-fold. BC is most sensitive to the presence of large particles. The sensitivity of BC to rm and σ variations increases when the initial size distributions are characterized by low rm and large σ. This makes lidar more suitable for detecting particles growing on background aerosols than on volcanic aerosols.
1980-02-01
[Fragmentary entry; only figure-caption residue survives. Recoverable content: aerosol size distributions and particle concentrations measured during a winter period at Mitzpe Ramon in the Negev desert and at Tel Aviv, Israel (Fig. 5, Fig. 12); the Tel Aviv samples show much higher imaginary refractive indices than those from the Negev desert or two American desert localities (Lindberg et al., 1976).]
A comparative review of methods for comparing means using partially paired data.
Guo, Beibei; Yuan, Ying
2017-06-01
In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. Partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data are normally distributed, and the optimal pooled t-test performs best when the data are not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is recommended when both variables have missing data, and the paired t-test is recommended when only one variable has missing data.
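Two of the baseline methods reviewed above can be sketched in a few lines. This is not the optimal pooled t-test or the modified-MLE test, only the naive paired-only and ignore-pairing baselines on simulated partially paired data; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated partially paired data: n1 complete pairs plus n2 subjects
# observed only on x and n3 only on y (all values illustrative).
n1, n2, n3, rho, effect = 30, 15, 15, 0.6, 0.8
pairs = rng.multivariate_normal([0.0, effect],
                                [[1.0, rho], [rho, 1.0]], size=n1)
x_only = rng.normal(0.0, 1.0, n2)
y_only = rng.normal(effect, 1.0, n3)

# Baseline 1: paired t statistic on complete pairs (discards unpaired data).
diff = pairs[:, 1] - pairs[:, 0]
t_paired = diff.mean() / (diff.std(ddof=1) / np.sqrt(n1))

# Baseline 2: Welch two-sample t statistic on all data (ignores pairing).
ya, xa = np.r_[pairs[:, 1], y_only], np.r_[pairs[:, 0], x_only]
t_welch = (ya.mean() - xa.mean()) / np.sqrt(ya.var(ddof=1) / ya.size +
                                            xa.var(ddof=1) / xa.size)

print(f"paired-only t = {t_paired:.2f}, all-data Welch t = {t_welch:.2f}")
```

With positively correlated pairs, the paired statistic benefits from the reduced variance of the differences, while the Welch statistic benefits from the larger sample; the methods in the abstract aim to combine both sources of information.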
Impact of aerosol size representation on modeling aerosol-cloud interactions
Zhang, Y.; Easter, R. C.; Ghan, S. J.; ...
2002-11-07
In this study, we use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass, or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of reference values from a high-resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol concentrations (i.e., less complete activation) and with higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).
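The cost of coarse sectional resolution can be illustrated with a toy discretization check: bin a log-normal number distribution into log-spaced sections, represent each section by its midpoint diameter, and compare the resulting mass (d^3) moment with the analytic value. This is only a sketch of the resolution issue, not the model's actual sectional scheme, and the distribution parameters are illustrative.

```python
import numpy as np
from math import erf, sqrt, log, exp

def sectional_mass_error(n_sections, dg=0.1, sg=1.8):
    """Relative error in the d^3 (mass) moment when a log-normal number
    distribution (median dg, geometric std sg) is discretized into
    log-spaced sections, each represented by its midpoint diameter.
    Analytic moment: E[d^3] = dg^3 * exp(4.5 * ln(sg)^2)."""
    half = 4.0 * log(sg)                                # cover +/-4 sigma in ln d
    edges = np.linspace(log(dg) - half, log(dg) + half, n_sections + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    z = (edges - log(dg)) / log(sg)
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    number_frac = np.diff(cdf)                          # number in each section
    mass = np.sum(number_frac * np.exp(3.0 * mids))     # midpoint d^3 per section
    exact = dg**3 * exp(4.5 * log(sg) ** 2)
    return abs(mass - exact) / exact

for k in (12, 36, 108):
    print(f"{k:3d} sections: mass-moment error = {100 * sectional_mass_error(k):.2f}%")
```

The error shrinks as the section count grows, which is one reason the 108-section run serves as the reference in the study.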
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
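A minimal special case of the maximum-entropy construction, with a single mean constraint instead of the skewness and kurtosis constraints used in the paper, can be sketched as follows; the ordinal support and target mean are illustrative.

```python
import numpy as np

def maxent_ordinal(support, target_mean, iters=200):
    """Maximum-entropy pmf on an ordinal support subject to a mean
    constraint.  The solution is exponential-family, p_i ~ exp(lam*x_i);
    lam is found by bisection, since the mean is monotone in lam."""
    x = np.asarray(support, float)

    def pmf(lam):
        w = np.exp(lam * (x - x.mean()))        # centred for stability
        return w / w.sum()

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pmf(mid) @ x < target_mean:
            lo = mid
        else:
            hi = mid
    return pmf(0.5 * (lo + hi))

p = maxent_ordinal([1, 2, 3, 4, 5], target_mean=3.8)
print(np.round(p, 3), "mean =", round(float(p @ np.arange(1, 6)), 3))
```

Adding skewness and kurtosis constraints, as in the paper, extends the same exponential-family form with additional Lagrange multipliers on higher moments.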
[Airborne Fungal Aerosol Concentration and Distribution Characteristics in Air- Conditioned Wards].
Zhang, Hua-ling; Feng, He-hua; Fang, Zi-liang; Wang, Ben-dong; Li, Dan
2015-04-01
The effects of airborne fungi on human health in the hospital environment are related not only to their genera and concentrations, but also to their particle sizes and distribution characteristics. Moreover, the mechanisms by which aerosols of different particle sizes affect human health are different. Fungal samples were obtained in medicine wards of Chongqing using a six-stage sampler. The airborne fungal concentrations, genera and size distributions of all the sampling wards were investigated and identified in detail. Results showed that airborne fungal concentrations were not correlated with the diseases or personnel density, but were related to seasons, temperature, and relative humidity. The size distribution followed roughly the same pattern in the test wards in winter and summer. The size distributions were not related to diseases or seasons: the percentage of airborne fungal concentration increased gradually from stage I to stage III, and then decreased dramatically from stage V to stage VI; in general, the size of airborne fungi followed a normal distribution. There was no marked difference in the median diameter of airborne fungi, which was less than 3.19 μm in all wards. There were similar dominant genera in all wards: Aspergillus spp., Penicillium spp. and Alternaria spp. Therefore, attention should be paid to improving the filtration efficiency for particle sizes of 1.1-4.7 μm in the air conditioning systems of wards, and appropriate antibacterial methods and equipment should be chosen for daily hygiene and air conditioning system operation and management.
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events is large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies thirteen clusters, twelve of which coincide with the twelve significant clusters detected by the compound Poisson approach. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of population sizes and case and event thresholds.
A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
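The recursion-versus-normal comparison can be sketched for a single area. The code below implements the Panjer recursion for a compound Poisson total and compares its tail probability with a continuity-corrected normal approximation; the case rate and events-per-case pmf are hypothetical, not taken from the injury data.

```python
import math
import numpy as np

def compound_poisson_tail(lam, f, c):
    """P(S >= c) for S = total events, with N ~ Poisson(lam) cases and
    iid events-per-case pmf f on {1, 2, ...}, via the Panjer recursion:
        g(0) = exp(-lam),  g(s) = (lam/s) * sum_j j*f(j)*g(s-j)."""
    m = max(f)
    fx = np.zeros(m + 1)
    for k, p in f.items():
        fx[k] = p
    g = np.zeros(c)
    g[0] = math.exp(-lam)
    for s in range(1, c):
        j = np.arange(1, min(s, m) + 1)
        g[s] = lam / s * np.sum(j * fx[j] * g[s - j])
    return 1.0 - g.sum()

# Hypothetical area: 20 expected cases, 1-3 events per case.
lam, f, c = 20.0, {1: 0.6, 2: 0.3, 3: 0.1}, 40
mean = lam * sum(k * p for k, p in f.items())
var = lam * sum(k * k * p for k, p in f.items())
exact = compound_poisson_tail(lam, f, c)
# Normal approximation with a continuity correction.
approx = 0.5 * math.erfc((c - 0.5 - mean) / math.sqrt(2.0 * var))
print(f"recursion P(S>=40) = {exact:.4f}, normal approx = {approx:.4f}")
```

The two tail probabilities agree closely here, while the recursion's cost grows with the threshold c, which is exactly the trade-off the abstract describes.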
Controls of earthquake faulting style on near field landslide triggering: The role of coseismic slip
NASA Astrophysics Data System (ADS)
Tatard, L.; Grasso, J. R.
2013-06-01
We compare the spatial distributions of seven databases of landslides triggered by Mw=5.6-7.9 earthquakes, using distances normalized by the earthquake fault length. We show that the normalized landslide distance distributions collapse, i.e., the normalized distance distributions overlap whatever the size of the earthquake, separately for the events associated with dip-slip, buried-faulting earthquakes, and surface-faulting earthquakes. The dip-slip earthquakes triggered landslides at larger normalized distances than the oblique-slip event of Loma Prieta. We further identify that the surface-faulting earthquakes of Wenchuan, Chi-Chi, and Kashmir triggered landslides at normalized distances smaller than the ones expected from their Mw ≥ 7.6 magnitudes. These results support a control of the seismic slip (through amplitude, rake, and surface versus buried slip) on the distances at which landslides are triggered. In terms of coseismic landslide management in mountainous areas, our results allow us to propose distances at which 95 and 75% of landslides will be triggered as a function of the earthquake focal mechanism.
The missing impact craters on Venus
NASA Technical Reports Server (NTRS)
Speidel, D. H.
1993-01-01
The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log-normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one: it is not craters that are missing but the impactors. What you see is what you got. The implication that Venus-crossing impactors would have the same type of log-normal distribution is consistent with recently described distributions for terrestrial craters and Earth-crossing asteroids.
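As a quick numerical companion to the log-normal hypothesis, the sketch below fits a log-normal to a synthetic sample of 842 "diameters" (the log-normal MLE is simply a normal fit in log space). The parameters are chosen to give a roughly 14.5 km mean but are otherwise illustrative, not the Magellan data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 842 crater diameters: log-normal chosen so the
# mean diameter is ~14.5 km (illustrative parameters, not Magellan data).
sigma_true = 0.8
mu_true = np.log(14.5) - sigma_true**2 / 2.0     # E[D] = exp(mu + sigma^2/2)
d = rng.lognormal(mu_true, sigma_true, size=842)

# Log-normal MLE is just a normal fit in log space.
mu_hat = np.log(d).mean()
sigma_hat = np.log(d).std(ddof=1)
mean_hat = np.exp(mu_hat + sigma_hat**2 / 2.0)
print(f"fitted mu={mu_hat:.3f}, sigma={sigma_hat:.3f}, "
      f"implied mean diameter = {mean_hat:.1f} km")
```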
NASA Astrophysics Data System (ADS)
Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.
2017-06-01
Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still imposes unresolved challenges with respect to normalization, visualization and modeling the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell's volume during the cell cycle. cgCorrect can be used for both data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on simulated data, on single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells, and on quantitative single-cell RNA-sequencing data from mouse embryonic stem cells. We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
Simulation of the Focal Spot of the Accelerator Bremsstrahlung Radiation
NASA Astrophysics Data System (ADS)
Sorokin, V.; Bespalov, V.
2016-06-01
Testing of thick-walled objects by bremsstrahlung radiation (BR) is primarily performed via high-energy quanta. The testing parameters are specified by the focal spot size of the high-energy bremsstrahlung radiation. In determining the focal spot size, the high-energy portion of the BR cannot be experimentally separated from the low-energy portion so that only high-energy quanta are used. The patterns of BR focal spot formation have been investigated via statistical modeling of the radiation transfer in the target material. The distributions of BR quanta emitted by the target for different energies and emission angles, under a normal distribution of the accelerated electrons bombarding the target, have been obtained, and the ratio of the distribution parameters has been determined.
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
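The exponential decline is easy to reproduce for a one-sided z-test, where the Type II error has the closed form beta(n) = Phi(z_alpha - delta*sqrt(n)); the effect size and significance level below are illustrative.

```python
import math

# Type II error of a one-sided z-test for effect size delta (in sd units)
# at significance level alpha: beta(n) = Phi(z_alpha - delta * sqrt(n)).
def phi(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

alpha, delta = 0.05, 0.5
z_alpha = 1.6449                  # Phi^{-1}(1 - alpha) for alpha = 0.05
for n in (10, 20, 40, 80):
    beta = phi(z_alpha - delta * math.sqrt(n))
    print(f"n = {n:3d}  Type II error = {beta:.2e}")
```

For large n the error decays roughly like exp(-delta^2 n / 2), the exponential decline the note emphasizes.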
ERIC Educational Resources Information Center
Neel, John H.; Stallings, William M.
An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated Alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
ERIC Educational Resources Information Center
Vasu, Ellen S.; Elmore, Patricia B.
The effects of violating the assumption of normality, coupled with the condition of multicollinearity, on the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A Monte Carlo approach was utilized in which three different distributions were sampled for two sample sizes over…
Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun
2017-01-01
The normal-distribution-based likelihood ratio statistic T[subscript ml] = nF[subscript ml] is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T[subscript ml] follows a central chi-square distribution under H[subscript 0] and a noncentral chi-square…
Particle Size Reduction in Geophysical Granular Flows: The Role of Rock Fragmentation
NASA Astrophysics Data System (ADS)
Bianchi, G.; Sklar, L. S.
2016-12-01
Particle size reduction in geophysical granular flows is caused by abrasion and fragmentation, and can affect transport dynamics by altering the particle size distribution. While the Sternberg equation is commonly used to predict the mean abrasion rate in the fluvial environment, and can also be applied to geophysical granular flows, predicting the evolution of the particle size distribution requires a better understanding of the controls on the rate of fragmentation and the size distribution of the resulting particle fragments. To address this knowledge gap, we are using single-particle free-fall experiments to test for the influence of particle size, impact velocity, and rock properties on fragmentation and abrasion rates. Rock types tested include granodiorite, basalt, and serpentinite. Initial particle masses and drop heights range from 20 to 1000 grams and 0.1 to 3.0 meters respectively. Preliminary results of free-fall experiments suggest that the probability of fragmentation varies as a power function of kinetic energy on impact. The resulting size distributions of rock fragments can be collapsed by normalizing by initial particle mass, and can be fit with a generalized Pareto distribution. We apply the free-fall results to understand the evolution of granodiorite particle-size distributions in granular flow experiments using rotating drums ranging in diameter from 0.2 to 4.0 meters. In the drums, we find that the rates of silt production by abrasion and gravel production by fragmentation scale with drum size. To compare these rates with free-fall results we estimate the particle impact frequency and velocity. We then use population balance equations to model the evolution of particle size distributions due to the combined effects of abrasion and fragmentation. Finally, we use the free-fall and drum experimental results to model particle size evolution in Inyo Creek, a steep, debris-flow dominated catchment, and compare model results to field measurements.
Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix
NASA Astrophysics Data System (ADS)
Saxena, Nupur; Kumar, Pragati; Gupta, Vinay
2015-05-01
The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si-NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si-NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si-NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and that the particles grow to an almost uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si-NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. A discussion of the effect of the particle size distribution on the PL spectrum is presented, based on the tight-binding approximation (TBA) method using Gaussian and log-normal distributions of particles. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution and can be controlled by the annealing parameters.
The Impact of Heterogeneous Thresholds on Social Contagion with Multiple Initiators
Karampourniotis, Panagiotis D.; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy
2015-01-01
The threshold model is a simple but classic model of contagion spreading in complex social systems. To capture the complex nature of social influencing, we investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution. We accomplish this by employing a truncated normal distribution of the nodes' thresholds and observe a non-monotonic change in the cascade size as we vary the standard deviation. Further, for a sufficiently large spread in the threshold distribution, the tipping-point behavior of the social influencing process disappears and is replaced by a smooth crossover governed by the size of the initiator set. We demonstrate that for a given size of the initiator set, there is a specific variance of the threshold distribution for which an opinion spreads optimally. Furthermore, in the case of synthetic graphs we show that the spread asymptotically becomes independent of the system size, and that global cascades can arise just by the addition of a single node to the initiator set. PMID:26571486
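A minimal numpy version of such a threshold cascade can be sketched as follows; the graph model (Erdős–Rényi), mean threshold, and initiator count are illustrative choices, not the paper's settings.

```python
import numpy as np

def cascade_fraction(std, n=2000, k=10, mean=0.25, n_init=20, seed=0):
    """Final active fraction of a threshold cascade on an Erdos-Renyi
    graph; node thresholds ~ normal(mean, std) truncated to [0, 1] by
    rejection sampling.  A node activates once the fraction of its
    active neighbours reaches its threshold."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((n, n)) < k / (n - 1), 1).astype(float)
    adj = adj + adj.T                              # symmetric, no self-loops
    deg = np.maximum(adj.sum(1), 1.0)
    theta = rng.normal(mean, std, n)
    while ((theta < 0) | (theta > 1)).any():       # truncate by resampling
        bad = (theta < 0) | (theta > 1)
        theta[bad] = rng.normal(mean, std, bad.sum())
    active = np.zeros(n, bool)
    active[rng.choice(n, n_init, replace=False)] = True
    while True:
        new = active | ((adj @ active.astype(float)) / deg >= theta)
        if new.sum() == active.sum():
            return new.mean()
        active = new

for std in (0.02, 0.1, 0.2, 0.35):
    print(f"threshold std = {std:.2f} -> final cascade fraction "
          f"{cascade_fraction(std):.2f}")
```

Scanning the standard deviation in this way is the kind of experiment in which the paper's non-monotonic cascade size appears.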
Azéma, Emilien; Linero, Sandra; Estrada, Nicolas; Lizcano, Arcesio
2017-08-01
By means of extensive contact dynamics simulations, we analyzed the effect of particle size distribution (PSD) on the strength and microstructure of sheared granular materials composed of frictional disks. The PSDs are built by means of a normalized β function, which allows the systematic investigation of the effects of both the size span (from almost monodisperse to highly polydisperse) and the shape of the PSD (from linear to pronouncedly curved). We show that the shear strength is independent of the size span, which substantiates previous results obtained for distributions uniform by packing fraction. Notably, the shear strength is also independent of the shape of the PSD, as shown previously for systems composed of frictionless disks. In contrast, the packing fraction increases with the size span, but decreases with more pronounced PSD curvature. At the microscale, we analyzed the connectivity and anisotropies of the contact and force networks. We show that the invariance of the shear strength with the PSD is due to a compensation mechanism involving both geometrical sources of anisotropy. In particular, contact orientation anisotropy decreases with the size span and increases with PSD curvature, while the branch length anisotropy behaves inversely.
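One way to generate PSDs spanning the two control parameters described above can be sketched with a beta distribution; this mapping of size span and shape onto beta(a, b) is an illustrative reading of the abstract, not the authors' exact parameterization.

```python
import numpy as np

def beta_psd(size_span, a, b, d_ref=1.0, n=100000, seed=0):
    """Particle diameters drawn from a beta(a, b) distribution mapped
    onto [d_min, d_max], with size_span = (d_max - d_min)/(d_max + d_min).
    a = b = 1 gives a flat (linear cumulative) PSD; unequal a, b curve
    it.  All parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    d_min = d_ref * (1.0 - size_span)
    d_max = d_ref * (1.0 + size_span)
    return d_min + (d_max - d_min) * rng.beta(a, b, n)

for span, (a, b) in [(0.2, (1.0, 1.0)), (0.9, (1.0, 1.0)), (0.9, (0.5, 2.0))]:
    d = beta_psd(span, a, b)
    print(f"span={span:.1f}, a={a}, b={b}: d in [{d.min():.2f}, {d.max():.2f}], "
          f"mean={d.mean():.2f}")
```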
Analytical YORP torques model with an improved temperature distribution function
NASA Astrophysics Data System (ADS)
Breiter, S.; Vokrouhlický, D.; Nesvorný, D.
2010-01-01
Previous models of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect relied either on the zero thermal conductivity assumption, or on the solutions of the heat conduction equations assuming an infinite body size. We present the first YORP solution accounting for a finite size and non-radial direction of the surface normal vectors in the temperature distribution. The new thermal model implies the dependence of the YORP effect in rotation rate on asteroids conductivity. It is shown that the effect on small objects does not scale as the inverse square of diameter, but rather as the first power of the inverse.
Seet, Katrina Y T; Nieminen, Timo A; Zvyagin, Andrei V
2009-01-01
The cell nucleus is the dominant optical scatterer in the cell. Neoplastic cells are characterized by cell nucleus polymorphism and polychromism; i.e., the nuclei exhibit an increase in the distribution of both size and refractive index. The relative size parameter is proportional to the product of the nucleus size and its relative refractive index, and its distribution is a useful discriminant between normal and abnormal (cancerous) cells. We demonstrate a recently introduced holographic technique, digital Fourier microscopy (DFM), to provide a sensitive measure of this relative size parameter. Fourier holograms were recorded and the optical scatter of individual scatterers was extracted and modeled with Mie theory to determine the relative size parameter. The relative size parameter of individual melanocyte cell nuclei was found to be 16.5+/-0.2, which gives a cell nucleus refractive index of 1.38+/-0.01 and is in good agreement with previously reported data. The relative size parameters of individual malignant melanocyte cell nuclei are expected to be greater than 16.5.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers.
Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of the probability distribution if needed.
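The large-sample effect described above is easy to reproduce: data from a slight mixture of subpopulations typically passes a normality check at small n but fails it once n is large. The sketch below uses a crude large-sample kurtosis z-test rather than the full graphical toolkit recommended in the abstract, and the mixture is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def near_normal_sample(n, frac=0.05, wide_sd=1.5):
    """Nearly normal data: a (1 - frac) N(0,1) bulk plus a small
    wider-tailed subpopulation, mimicking mixed geochemical populations."""
    k = rng.binomial(n, frac)
    return np.r_[rng.normal(0.0, 1.0, n - k), rng.normal(0.0, wide_sd, k)]

def kurtosis_z(x):
    """Large-sample z statistic for excess kurtosis (zero in expectation
    under normality, with standard error ~ sqrt(24/n))."""
    z = (x - x.mean()) / x.std()
    g2 = np.mean(z**4) - 3.0
    return g2 / np.sqrt(24.0 / x.size)

for n in (100, 1000, 200000):
    zk = kurtosis_z(near_normal_sample(n))
    verdict = "reject" if abs(zk) > 1.96 else "fail to reject"
    print(f"n = {n:6d}: kurtosis z = {zk:6.2f} -> {verdict} normality")
```

The fixed departure from normality stays the same as n grows, but the test's standard error shrinks, so rejection becomes inevitable, which is the behaviour the authors report for the declustered NGS data.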
Simulation of the influence of aerosol particles on Stokes parameters of polarized skylight
NASA Astrophysics Data System (ADS)
Li, L.; Li, Z. Q.; Wendisch, M.
2014-03-01
Microphysical properties and chemical compositions of aerosol particles determine the polarized radiance distribution in the atmosphere. In this paper, the influences of different aerosol properties (particle size, shape, and the real and imaginary parts of the refractive index) on the Stokes parameters of polarized skylight in the solar principal and almucantar planes are studied using vector radiative transfer simulations. The results show high sensitivity of the normalized Stokes parameters to fine-mode particle size, shape, and the real part of the refractive index of aerosols. It is possible to utilize the strength variations at the peak positions of the normalized Stokes parameters in the principal and almucantar planes to identify aerosol types.
Besford, Quinn Alexander; Zeng, Xiao-Yi; Ye, Ji-Ming; Gray-Weale, Angus
2016-02-01
Glycogen is a vital highly branched polymer of glucose that is essential for blood glucose homeostasis. In this article, the structure of liver glycogen from mice is investigated with respect to size distributions, degradation kinetics, and branching structure, complemented by a comparison of normal and diabetic liver glycogen. This is done to screen for differences that may result from disease. Glycogen α-particle (diameter ∼ 150 nm) and β-particle (diameter ∼ 25 nm) size distributions are reported, along with in vitro γ-amylase degradation experiments and a small-angle X-ray scattering analysis of mouse β-particles. Upon extraction, type 2 diabetic liver glycogen was found to be present as large, loosely bound aggregates not present in normal livers. Liver glycogen was found to aggregate in vitro over a period of 20 h, and particle size is shown to be related to the rate of glucose release, allowing a structure-function relationship to be inferred for the tissue-specific distribution of particle types. Application of branching theories to small-angle X-ray scattering data for mouse β-particles revealed these particles to be randomly branched polymers, not fractal polymers. Together, this article shows that type 2 diabetic liver glycogen is present as large aggregates in mice, which may contribute to the inflexibility of interconversion between glucose and glycogen in type 2 diabetes, and further that glycogen particles are randomly branched with a size that is related to the rate of glucose release.
Modeling pore corrosion in normally open gold- plated copper connectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien
2008-09-01
The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product, which blooms through defects in the gold layer, and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
Size segregation in a granular bore
NASA Astrophysics Data System (ADS)
Edwards, A. N.; Vriend, N. M.
2016-10-01
We investigate the effect of particle-size segregation in an upslope propagating granular bore. A bidisperse mixture of particles, initially normally graded, flows down an inclined chute and impacts with a closed end. This impact causes a shock in flow thickness, known as a granular bore, to travel upslope, leaving behind a thick deposit. This deposit imprints the local segregated state, featuring both pure and mixed regions of particles as a function of downstream position. The particle-size distribution through the depth is characterized by a thin purely small-particle layer at the base, a significant linear transition region, and a thick constant mixed-particle layer below the surface, in contrast to previously observed S-shaped steady-state concentration profiles. The experimental observations agree with recent findings that the upward segregation of large particles and the downward segregation of small particles are asymmetric. We incorporate the experimentally observed three-layer size-distribution profile into a depth-averaged segregation model. Numerical solutions of this model are able to match our experimental results and therefore motivate the use of a more general particle-size distribution profile.
Robustness of the far-field response of nonlocal plasmonic ensembles.
Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger
2016-06-22
Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed. Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.
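The inhomogeneous broadening from ensemble averaging described above can be illustrated with a toy calculation. This is a minimal sketch, not the hydrodynamic Drude model itself: the 1/R blueshift and Kreibig-like 1/R damping coefficients below are illustrative numbers chosen for demonstration, and each particle is represented by a simple Lorentzian line shape averaged over a normal size distribution.

```python
import numpy as np

def ensemble_spectrum(energies, r_mean, r_sigma, n_particles=4000, seed=0):
    """Average Lorentzian spectra of particles whose resonance blueshifts
    as 1/R and whose linewidth grows as 1/R (both illustrative choices)."""
    rng = np.random.default_rng(seed)
    radii = rng.normal(r_mean, r_sigma, n_particles)
    radii = radii[radii > 0.5]                    # drop unphysical sizes
    omega0 = 3.5 + 2.0 / radii                    # eV: bulk LSP + schematic 1/R blueshift
    gamma = 0.05 + 0.5 / radii                    # eV: size-dependent damping
    lor = (gamma[:, None] / 2) / ((energies[None, :] - omega0[:, None]) ** 2
                                  + (gamma[:, None] / 2) ** 2)
    return lor.mean(axis=0)                       # ensemble average

energies = np.linspace(3.5, 4.5, 800)
narrow = ensemble_spectrum(energies, r_mean=5.0, r_sigma=0.05)
broad = ensemble_spectrum(energies, r_mean=5.0, r_sigma=1.0)
```

The wider size distribution lowers and broadens the ensemble peak relative to the nearly monodisperse case, which is the averaging effect the abstract discusses.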
Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity.
Boulle, A; Conchon, F; Guinebretière, R
2006-01-01
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral and the integrand has a simple analytical expression, i.e. there is no significant increase in computing times by taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, ...) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire.
Fernando, M Rohan; Jiang, Chao; Krzyzanowski, Gary D; Ryan, Wayne L
2018-04-12
Plasma cell-free DNA (cfDNA) fragment size distribution provides important information required for diagnostic assay development. We have developed and optimized droplet digital PCR (ddPCR) assays that quantify short and long DNA fragments. These assays were used to analyze plasma cfDNA fragment size distribution in human blood. Assays were designed to amplify 76, 135, 490 and 905 base pair fragments of the human β-actin gene. These assays were used for fragment size analysis of plasma cell-free, exosome and apoptotic body DNA obtained from normal and pregnant donors. The relative percentages for 76, 135, 490 and 905 bp fragments from non-pregnant plasma and exosome DNA were 100%, 39%, 18%, 5.6% and 100%, 40%, 18%, 3.3%, respectively. The relative percentages for pregnant plasma and exosome DNA were 100%, 34%, 14%, 23%, and 100%, 30%, 12%, 18%, respectively. The relative percentages for the non-pregnant plasma pellet (obtained after the second centrifugation step) were 100%, 100%, 87% and 83%, respectively. Non-pregnant plasma cell-free and exosome DNA share a unique fragment distribution pattern which is different from pregnant donor plasma and exosome DNA fragment distribution, indicating the effect of physiological status on cfDNA fragment size distribution. The fragment distribution pattern for the plasma pellet, which includes apoptotic bodies and nuclear DNA, was greatly different from that of plasma cell-free and exosome DNA. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
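The fragment-size readout above rests on a simple normalization: a longer amplicon can only be amplified from cfDNA fragments at least that long, so each amplicon's count is expressed as a percentage of the shortest amplicon's count. A minimal sketch with hypothetical droplet counts (not the paper's raw data):

```python
def relative_percentages(counts_by_length):
    """Express each amplicon's count as a percentage of the shortest
    amplicon's count; the shortest amplicon is 100% by construction."""
    lengths = sorted(counts_by_length)
    base = counts_by_length[lengths[0]]
    return {n: round(100.0 * counts_by_length[n] / base, 1) for n in lengths}

# Hypothetical copies/uL for the 76, 135, 490 and 905 bp beta-actin assays
plasma = {76: 1210.0, 135: 472.0, 490: 218.0, 905: 68.0}
profile = relative_percentages(plasma)
# profile -> {76: 100.0, 135: 39.0, 490: 18.0, 905: 5.6}
```

With these invented counts the profile reproduces the ~100/39/18/5.6 pattern reported for non-pregnant plasma, showing how the published percentages follow from raw concentrations.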
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. While the power laws of critical phenomena are derived asymptotically under the assumption of infinite observations, real-world observations are finite: finite-size effects set in and force a power-law distribution into an exponential decay, which manifests as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's-law-like can be conceptualized as the imperfect real-world rendition of the Pareto power-law distribution subjected to finite-size effects; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates, undermining statistical analysis. 
Surprisingly, a straightforward power-law correction that restores the distorted distribution to a single exponent value can dramatically reduce data heteroskedasticity, increasing the signal-to-noise ratio by 50% and the statistical/detection sensitivity by as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (a fourfold difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth versus 13% for the lower) and demonstrates that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line against AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
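The finite-size deviation from a single power-law slope is easy to reproduce numerically: draw a finite multinomial sample from an ideal Zipf (alpha = 1) abundance vector and compare the fitted log-log slope of the well-sampled head against the undersampled tail. This is a schematic illustration of the curvature, not the authors' correction procedure; all parameters are invented.

```python
import numpy as np

def fitted_exponent(counts, first_rank=1):
    """Least-squares slope of log(count) vs log(rank)."""
    ranks = np.arange(first_rank, first_rank + len(counts))
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
n_transcripts = 5000
p = 1.0 / np.arange(1, n_transcripts + 1)        # ideal Zipf abundances, alpha = 1
p /= p.sum()
counts = rng.multinomial(200_000, p)             # finite sequencing depth
counts = np.sort(counts)[::-1]
counts = counts[counts > 0]                      # only observed transcripts remain
head = fitted_exponent(counts[:100])             # abundant, well-sampled transcripts
tail = fitted_exponent(counts[2000:], first_rank=2001)
```

The head slope stays near the true exponent of -1, while the undersampled tail decays visibly faster: the varying-exponent curvature the abstract attributes to finite-size effects.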
Dimensions of stabident intraosseous perforators and needles.
Ramlee, R A; Whitworth, J
2001-09-01
Problems can be encountered when inserting intraosseous injection needles through perforation sites. This in vitro study examined the variability and size compatibility of Stabident intraosseous injection components. The diameters of 40 needles and perforators from a single Stabident kit were measured in triplicate with a toolmaker's microscope. One-way ANOVA revealed that the mean needle diameter (0.411 mm) was significantly narrower than the mean perforator diameter (0.427 mm) (p < 0.001). A frequency distribution plot revealed that needle diameter followed a normal distribution, indicating tight quality control during manufacture. The diameter of perforators was haphazardly distributed, with a clustering of 15% at the lower limit of the size range. However, on no occasion was the diameter of a perforator smaller than that of an injection needle. We conclude that components of the Stabident intraosseous anaesthetic system are size-compatible, but there is greater and more haphazard variability in the diameter of perforators than injection needles.
Influence of ambient air pressure on effervescent atomization
NASA Technical Reports Server (NTRS)
Chen, S. K.; Lefebvre, A. H.; Rollbuhler, J.
1993-01-01
The influence of ambient air pressure on the drop-size distributions produced in effervescent atomization is examined in this article. Also investigated are the effects on spray characteristics of variations in air/liquid mass ratio, liquid-injection pressure, and atomizer discharge-orifice diameter at different levels of ambient air pressure. It is found that continuous increase in air pressure above the normal atmospheric value causes the mean drop-size to first increase up to a maximum value and then decline. An explanation for this characteristic is provided in terms of the various contributing factors to the overall atomization process. It is also observed that changes in atomizer geometry and operating conditions have little effect on the distribution of drop-sizes in the spray.
Log-normal spray drop distribution...analyzed by two new computer programs
Gerald S. Walton
1968-01-01
Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...
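The log-normal analysis those Fortran programs performed is straightforward to reproduce today: fit the geometric mean diameter and geometric standard deviation from log-diameters, and estimate the volume (mass) fraction carried by drops above a cutoff, which is what matters for insecticide delivery. A sketch with simulated drop data, not Forest Service measurements; the 200-micron cutoff is an arbitrary example.

```python
import numpy as np

def lognormal_fit(diameters):
    """Geometric mean diameter and geometric standard deviation."""
    logs = np.log(diameters)
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

def mass_fraction_above(diameters, cutoff):
    """Fraction of total spray volume in drops larger than cutoff
    (volume scales as d**3 for spherical drops)."""
    d = np.asarray(diameters)
    return (d[d > cutoff] ** 3).sum() / (d ** 3).sum()

rng = np.random.default_rng(42)
drops = rng.lognormal(np.log(100.0), 0.4, 5000)   # simulated diameters, microns
dg, sg = lognormal_fit(drops)                     # ~100 um, ~1.5
big_share = mass_fraction_above(drops, 200.0)     # volume share of >200 um drops
```

Note how a small number fraction of large drops (a few percent here) carries a disproportionate share of the spray volume, which is why drop-size distribution properties matter for delivery efficiency.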
Planar Laser Imaging of Sprays for Liquid Rocket Studies
NASA Technical Reports Server (NTRS)
Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.
1990-01-01
A planar laser imaging technique incorporating an optical polarization ratio method for droplet size measurement was studied. A series of pressure-atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.
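The sensitivity to the assumed standard deviation noted above follows from log-normal moment algebra: the Sauter mean diameter D32 exceeds the geometric mean Dg by the factor exp(2.5*sigma^2), so an error in sigma propagates strongly into D32 but not into Dg. A minimal numerical check with simulated drops (the Dg and sigma values are illustrative, not the study's):

```python
import numpy as np

def sauter_mean_diameter(diameters):
    """D32 = sum(d^3) / sum(d^2), the volume-to-surface mean diameter."""
    d = np.asarray(diameters)
    return (d ** 3).sum() / (d ** 2).sum()

rng = np.random.default_rng(7)
dg, sigma = 20.0, 0.5                       # geometric mean (um), ln-space width
drops = rng.lognormal(np.log(dg), sigma, 100_000)
d32 = sauter_mean_diameter(drops)
d32_exact = dg * np.exp(2.5 * sigma ** 2)   # closed form for a log-normal
# Relative sensitivity: d(ln D32)/d(sigma) = 5*sigma, so a 10% error in
# sigma (0.05 here) shifts D32 by ~13% while Dg is unchanged.
```

This is why the instruments agreed on the geometric mean diameter but diverged on the Sauter mean diameter.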
Phenomenological characteristic of the electron component in gamma-quanta initiated showers
NASA Technical Reports Server (NTRS)
Nikolsky, S. I.; Stamenov, J. N.; Ushev, S. Z.
1985-01-01
The phenomenological characteristics of the electron component in showers initiated by primary gamma-quanta were analyzed on the basis of the Tien Shan experimental data. It is shown that the lateral distribution of the electrons in gamma-quanta initiated showers can be described by the NKG function with age parameter S̄ = 0.76 ± 0.02, different from the same parameter for normal showers of the same size, S̄ = 0.85 ± 0.01. The lateral distribution of the corresponding electron energy flux in gamma-quanta initiated showers is steeper than in normal cosmic ray showers.
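The NKG function referenced above has a standard closed form. A small sketch (assuming the usual normalization and a Molière radius of 125 m, a value not given in the abstract) verifies that integrating the lateral density over the plane recovers the shower size for both quoted age parameters:

```python
import numpy as np
from math import gamma, pi

def nkg_density(r, n_e, s, r_m=125.0):
    """NKG lateral electron density at core distance r (same length unit
    as r_m) for shower size n_e and age parameter s."""
    c = gamma(4.5 - s) / (2 * pi * gamma(s) * gamma(4.5 - 2 * s))
    x = r / r_m
    return (n_e / r_m ** 2) * c * x ** (s - 2) * (1 + x) ** (s - 4.5)

r = np.logspace(-2, 4, 20_000)              # 0.01 m to 10 km
sizes = {}
for s in (0.76, 0.85):                      # gamma-initiated vs normal showers
    rho = nkg_density(r, n_e=1e6, s=s)
    f = 2 * pi * r * rho                    # integrand of the areal integral
    sizes[s] = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
```

Both integrals should return the assumed shower size N_e = 1e6; the smaller age parameter of the gamma-initiated showers corresponds to the steeper lateral distribution noted in the abstract.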
NASA Astrophysics Data System (ADS)
Betha, Raghu; Zhang, Zhe; Balasubramanian, Rajasekhar
2014-08-01
Submicron particle number concentration (PNC) and particle size distribution (PSD) in the size range of 5.6-560 nm were investigated in Singapore from 27 June 2009 through 6 September 2009. Slightly hazy conditions lasted in Singapore from 6 to 10 August. Backward air trajectories indicated that the haze was due to the transport of biomass burning impacted air masses originating from wild forest and peat fires in Sumatra, Indonesia. Three distinct peaks in the morning (08:00-10:00), afternoon (13:00-15:00) and evening (16:00-20:00) were observed on a typical normal day. However, during the haze period no distinct morning and afternoon peaks were observed, and the PNC (39,775 ± 3741 cm-3) increased by 1.5 times when compared to that during non-haze periods (26,462 ± 6017 cm-3). The morning and evening peaks on a normal day were associated with local rush-hour traffic, while the afternoon peak was induced by new particle formation (NPF). Diurnal profiles of PNCs and PSDs showed that primary particle peak diameters were larger during the haze period (60 nm) than during the non-haze period (45.3 nm). NPF events observed in the afternoon on normal days were suppressed during the haze periods due to heavy particle loading in the atmosphere caused by biomass burning impacted air masses.
Yang, Show-Yi; Lin, Jia-Ming; Young, Li-Hao; Chang, Ching-Wen
2018-04-07
We investigate exposure to welding fume metals in pipeline construction, which are responsible for severe respiratory problems. We analyzed air samples obtained using size-fractioning cascade impactors that were attached to the welders performing shielded metal and gas tungsten arc welding outdoors. Iron, aluminum, zinc, chromium, manganese, copper, nickel, and lead concentrations in the water-soluble (WS) and water-insoluble (WI) portions were determined separately, using inductively coupled plasma mass spectrometry. The mass-size distribution of welding fume matches a log-normal distribution with two modes. The metal concentrations in the welding fume were ranked as follows: Fe > Al > Zn > Cr > Mn > Ni > Cu > Pb. In the WS portion, the capacities of metals dissolving in water are correlated with the metal species but not with particle size. In particular, Zn, Mn, and Pb exhibit relatively higher capacities than Cu, Cr, Al, Fe, and Ni. Exposure of the gas-exchange region of the lungs to WS metals was in the range of 4.9% to 34.6% of the corresponding metals in air, considering the particle-size selection in lungs, metal composition by particle size, and the capacity of each metal to dissolve in water.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
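The core regression-calibration idea, replacing the error-prone covariate by its conditional expectation estimated from an internal sub-study where the truth is observed, can be sketched for a single covariate with classical measurement error. The paper's NBRC/LRC interaction estimators are more involved; all numbers below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_sub = 5000, 400
x = rng.normal(0, 1, n)                  # true covariate
w = x + rng.normal(0, 0.8, n)            # error-prone measurement (classical ME)
y = 2.0 * x + rng.normal(0, 1, n)        # outcome model with true beta = 2

# Naive regression of y on w attenuates the slope toward zero
beta_naive = np.polyfit(w, y, 1)[0]

# Regression calibration: estimate E[x | w] from an internal sub-study
# where the true x is observed, then regress y on the calibrated value
idx = rng.choice(n, n_sub, replace=False)
lam, mu = np.polyfit(w[idx], x[idx], 1)  # calibration slope and intercept
x_hat = lam * w + mu
beta_rc = np.polyfit(x_hat, y, 1)[0]
```

Here the naive slope is attenuated to roughly beta/(1 + 0.8^2) ≈ 1.2, while the calibrated regression recovers an approximately unbiased estimate of 2.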
2009-09-01
A Manpower Comparison of Three U.S. Navies: The Current Fleet, a Projected 313 Ship Fleet, and a More Distributed Bimodal Alternative, by Juan L. Carrasco. Naval Postgraduate School, Monterey, California. Thesis; approved for public release; distribution is unlimited. ("Large, Medium-speed, Roll-on/Roll-off Ships T-AKR," 2009) The ships can support humanitarian missions as well. LMSRs normally have a crew size of 26...
Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.
Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S
2004-01-01
StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
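An operating characteristic (OC) curve of the kind described above gives the probability of accepting a lot as a function of its true concentration, given a test result modeled as normally distributed with a variance that grows with concentration. The sketch below is illustrative only: the acceptance limit, coefficient of variation, and the assumption that the standard deviation is proportional to concentration stand in for the study's fitted variance-concentration regressions.

```python
import math

def oc_accept_prob(mu, accept_limit, cv=0.25, n_samples=1):
    """P(accept lot) when the test result is Normal(mu, sd), with sd
    proportional to the true concentration and shrinking as 1/sqrt(n)."""
    sd = cv * mu / math.sqrt(n_samples)
    z = (accept_limit - mu) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Buyer's risk: accepting a lot truly above the limit (a false negative)
limit = 10.0                                  # hypothetical limit, illustrative units
accept_1 = oc_accept_prob(12.0, limit, n_samples=1)
accept_4 = oc_accept_prob(12.0, limit, n_samples=4)
```

Increasing the number of samples steepens the OC curve, reducing the buyer's risk for lots above the limit (and, symmetrically, the seller's risk below it), which is the trade-off the abstract's sampling plans quantify.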
Codron, Daryl; Carbone, Chris; Clauss, Marcus
2013-01-01
Because egg-laying meant that even the largest dinosaurs gave birth to very small offspring, they had to pass through multiple ontogenetic life stages to adulthood. Dinosaurs’ successors as the dominant terrestrial vertebrate life form, the mammals, give birth to live young, and have much larger offspring and less complex ontogenetic histories. The larger number of juveniles in dinosaur as compared to mammal ecosystems represents both a greater diversity of food available to predators, and competitors for similar-sized individuals of sympatric species. Models of population abundances across different-sized species of dinosaurs and mammals, based on simulated ecological life tables, are employed to investigate how differences in predation and competition pressure influenced dinosaur communities. Higher small- to medium-sized prey availability leads to a normal body mass-species richness (M-S) distribution of carnivorous dinosaurs (as found in the theropod fossil record), in contrast to the right-skewed M-S distribution of carnivorous mammals (as found living members of the order Carnivora). Higher levels of interspecific competition leads to a left-skewed M-S distribution in herbivorous dinosaurs (as found in sauropods and ornithopods), in contrast to the normal M-S distribution of large herbivorous mammals. Thus, our models suggest that differences in reproductive strategy, and consequently ontogeny, explain observed differences in community structure between dinosaur and mammal faunas. Models also show that the largest dinosaurian predators could have subsisted on similar-sized prey by including younger life stages of the largest herbivore species, but that large predators likely avoided prey much smaller than themselves because, despite predicted higher abundances of smaller than larger-bodied prey, contributions of small prey to biomass intake would be insufficient to satisfy meat requirements. 
A lack of large carnivores feeding on small prey exists in mammals larger than 21.5 kg, and it seems a similar minimum prey-size threshold could have affected dinosaurs as well. PMID:24204749
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), standard error of TDI estimates (compared with their true simulated standard errors), and test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution on the difference is not normal, especially when it has a heavy tail.
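The two TDI estimators being compared can be sketched directly: the nonparametric TDI is an empirical quantile of the absolute paired differences, while the normality-based value is the p-quantile of a folded normal distribution. The simulation below mirrors the comparison in spirit, not the paper's exact implementation; the difference parameters are invented.

```python
import math
import numpy as np

def tdi_nonparametric(diffs, p=0.9):
    """Empirical p-quantile of the absolute paired differences."""
    return np.quantile(np.abs(diffs), p)

def tdi_normal(mu, sigma, p=0.9):
    """p-quantile of |D| for D ~ N(mu, sigma^2), by bisection on the
    folded-normal CDF: P(|D| <= t) = Phi((t-mu)/sigma) - Phi((-t-mu)/sigma)."""
    cdf = lambda t: 0.5 * (math.erf((t - mu) / (sigma * math.sqrt(2)))
                           - math.erf((-t - mu) / (sigma * math.sqrt(2))))
    lo, hi = 0.0, abs(mu) + 10 * sigma
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(11)
d = rng.normal(0.2, 1.0, 10_000)        # simulated paired differences
est_np = tdi_nonparametric(d, 0.9)
est_normal = tdi_normal(0.2, 1.0, 0.9)  # benchmark at the true parameters
```

With normally distributed differences the two estimates nearly coincide; the nonparametric version remains valid when the difference distribution is non-normal or heavy-tailed, which is the case the abstract recommends it for.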
NASA Astrophysics Data System (ADS)
Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba
2010-09-01
Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.
Computational studies of photoluminescence from disordered nanocrystalline systems
NASA Astrophysics Data System (ADS)
John, George
2000-03-01
The size (d) dependence of emission energies from semiconductor nanocrystallites has been shown to follow an effective exponent (d^-β) determined by the disorder in the system (V. Ranjan, V. A. Singh and G. C. John, Phys. Rev. B 58, 1158 (1998)). Our earlier calculation was based on a simple quantum confinement model assuming a normal distribution of crystallite sizes. This model is now extended to study realistic systems with a lognormal distribution in particle size, accounting for carrier hopping and nonradiative transitions. Computer simulations of this model, performed using the Microcal Origin software, can explain several conflicting experimental results reported in the literature.
Statistical properties of the normalized ice particle size distribution
NASA Astrophysics Data System (ADS)
Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.
2005-05-01
Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation implies that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to further reduce the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean-maximum-dimension-T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000) parameterization. These new parameterizations are believed to better represent particle size at global scale, owing to a better representativity of the in situ microphysical database used to derive them. We then evaluated the potential of a direct N*0-Dm relationship. While the model parameterized by temperature produces large errors in the cloud parameters, the N*0-Dm model parameterized by radar reflectivity produces accurate cloud parameters (less than 3% bias and 16% standard deviation). This result implies that the cloud parameters can be estimated from the estimate of only one parameter of the normalized PSD (N*0 or Dm) and a radar reflectivity measurement.
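The normalization itself is compact: Dm is the ratio of the fourth to the third moment of the PSD, and N*0 = (4^4/Γ(4)) M3^5/M4^4, defined so that N*0 equals the intercept N0 for an exponential PSD. A numerical check of that property on a synthetic exponential PSD (arbitrary N0 and slope, not data from the paper's database):

```python
import numpy as np
from math import gamma

def normalized_psd_params(d, n_d):
    """Dm = M4/M3 and N0* = (4^4 / Gamma(4)) * M3^5 / M4^4 from a binned PSD."""
    dd = np.gradient(d)                       # bin widths
    m3 = np.sum(n_d * d ** 3 * dd)
    m4 = np.sum(n_d * d ** 4 * dd)
    return m4 / m3, (4.0 ** 4 / gamma(4)) * m3 ** 5 / m4 ** 4

d = np.linspace(1e-5, 2e-2, 4000)             # diameters, m
n0, lam = 1e7, 1000.0
dm, n0_star = normalized_psd_params(d, n0 * np.exp(-lam * d))
# For N(D) = N0 exp(-lam D): Dm = 4/lam and N0* = N0
```

Recovering Dm = 4/lam and N0* = N0 confirms the scaling is consistent; for real ice-cloud PSDs the same two numbers anchor the universal normalized shape discussed above.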
Supernova Driving. III. Synthetic Molecular Cloud Observations
NASA Astrophysics Data System (ADS)
Padoan, Paolo; Juvela, Mika; Pan, Liubin; Haugbølle, Troels; Nordlund, Åke
2016-08-01
We present a comparison of molecular clouds (MCs) from a simulation of supernova (SN) driven interstellar medium (ISM) turbulence with real MCs from the Outer Galaxy Survey. The radiative transfer calculations to compute synthetic CO spectra are carried out assuming that the CO relative abundance depends only on gas density, according to four different models. Synthetic MCs are selected above a threshold brightness temperature value, T B,min = 1.4 K, of the J = 1 - 0 12CO line, generating 16 synthetic catalogs (four different spatial resolutions and four CO abundance models), each containing up to several thousand MCs. The comparison with the observations focuses on the mass and size distributions and on the velocity-size and mass-size Larson relations. The mass and size distributions are found to be consistent with the observations, with no significant variations with spatial resolution or chemical model, except in the case of the unrealistic model with constant CO abundance. The velocity-size relation is slightly too steep for some of the models, while the mass-size relation is a bit too shallow for all models only at a spatial resolution dx ≈ 1 pc. The normalizations of the Larson relations show a clear dependence on spatial resolution, for both the synthetic and the real MCs. The comparison of the velocity-size normalization suggests that the SN rate in the Perseus arm is approximately 70% or less of the rate adopted in the simulation. Overall, the realistic properties of the synthetic clouds confirm that SN-driven turbulence can explain the origin and dynamics of MCs.
Suppression of nucleation mode particles by biomass burning in an urban environment: a case study.
Agus, Emily L; Lingard, Justin J N; Tomlin, Alison S
2008-08-01
Measurements of the concentrations and size distributions of particles from 4.7 to 160 nm were taken using an SMPS during the bonfire and firework celebrations on Bonfire Night in Leeds, UK, in 2006. These celebrations provided an opportunity to study size distributions in a unique atmospheric pollution situation during and following a significant emission event due to open biomass burning. A log-normal fitting program was used to determine the characteristics of the modal groups present within hourly averaged size distributions. Results from the modal fitting showed that on Bonfire Night the smallest nucleation mode, which was present before and after the bonfire event and on comparison weekends, was not detected within the size distribution. In addition, there was a significant shift in the modal diameters of the remaining modes during the peak of the pollution event. Estimates based on the concept of a coagulation sink showed that the atmospheric lifetimes of smaller particles were significantly reduced during the pollution event, which explains the disappearance of the smallest nucleation mode as well as the changes in particle count mean diameters. The significance for particle mixing state is discussed.
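The modal-fitting step described above can be sketched as a least-squares fit of a sum of log-normal modes to a measured dN/dlogD spectrum. The following is a minimal illustration on synthetic data; the two-mode structure and all parameter values are assumptions for the example, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, n_total, d_g, sigma_g):
    """One log-normal mode of an aerosol size distribution, dN/dlogD,
    with geometric mean diameter d_g and geometric std dev sigma_g."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d / d_g))**2 / (2 * np.log10(sigma_g)**2)))

def two_modes(d, n1, dg1, sg1, n2, dg2, sg2):
    return lognormal_mode(d, n1, dg1, sg1) + lognormal_mode(d, n2, dg2, sg2)

# synthetic hourly averaged spectrum: a nucleation mode plus an Aitken mode
d = np.logspace(np.log10(4.7), np.log10(160), 60)   # diameters, nm
true_params = (5e3, 12.0, 1.5, 8e3, 60.0, 1.8)
rng = np.random.default_rng(0)
y_noisy = two_modes(d, *true_params) * (1 + 0.03 * rng.standard_normal(d.size))

# fit starting from a rough initial guess, as a modal-fitting program would
popt, _ = curve_fit(two_modes, d, y_noisy, p0=(4e3, 10, 1.4, 9e3, 50, 1.7))
```

The fitted modal diameters (`popt[1]`, `popt[4]`) recover the assumed 12 nm and 60 nm modes; on real SMPS data the number of modes and the initial guesses would come from inspection of the averaged spectra.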
Particle size-dependent organ distribution of gold nanoparticles after intravenous administration.
De Jong, Wim H; Hagens, Werner I; Krystek, Petra; Burger, Marina C; Sips, Adriënne J A M; Geertsma, Robert E
2008-04-01
A kinetic study was performed to determine the influence of particle size on the in vivo tissue distribution of spherical gold nanoparticles in the rat. Gold nanoparticles were chosen as model substances as they are used in several medical applications. In addition, the detection of the presence of gold is feasible with no background levels in the body under normal circumstances. Rats were intravenously injected in the tail vein with gold nanoparticles with diameters of 10, 50, 100 and 250 nm. After 24 h, the rats were sacrificed and blood and various organs were collected for gold determination. The presence of gold was measured quantitatively with inductively coupled plasma mass spectrometry (ICP-MS). For all gold nanoparticle sizes, the majority of the gold was demonstrated to be present in the liver and spleen. A clear difference was observed between the distribution of the 10 nm particles and the larger particles. The 10 nm particles were present in various organ systems including blood, liver, spleen, kidney, testis, thymus, heart, lung and brain, whereas the larger particles were only detected in blood, liver and spleen. The results demonstrate that the tissue distribution of gold nanoparticles is size-dependent, with the smallest 10 nm nanoparticles showing the most widespread organ distribution.
A study on the trinucleotide repeat associated with Huntington's disease in the Chinese
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bing-wen Soong; Jih-tsuu Wang
1994-09-01
Analysis of the polymorphic (CAG)n repeat in the huntingtin gene in the Chinese confirmed the presence of an expanded repeat on all Huntington's disease chromosomes. Measurement of the specific CAG repeat sequence in 34 HD chromosomes from 15 unrelated families and 190 control chromosomes from the Chinese population showed a range from 9 to 29 repeats in normal subjects and 40 to 58 in affected subjects. The size distributions of normal and affected alleles did not overlap. A clear correlation between early onset of symptoms and very high repeat number was seen, but the spread of the age-at-onset in the major repeat range producing characteristic HD is too wide to be of diagnostic value. There was also variability in the transmitted repeat size for both sexes in the HD size range. Maternal HD alleles showed a moderate instability with a preponderance of size decrease, while paternal HD alleles had a tendency to increase in repeat size on transmission, the degree of which appeared proportional to the initial size.
Chetviverikova, E P; Iashina, S G; Shabaeva, E V; Egorova, E F; Iashina, A V
2005-01-01
The effect of deep freezing of seeds at -196 degrees C (-320.8 degrees Fahrenheit) and of inbreeding on the morphological characteristics of the biennial evening primrose (Oenothera biennis L.), such as the size of plant parts and the number of fruits, cauline nodes, and generative and vegetative shoots, was investigated. The variation coefficients for these characteristics after treatment with low temperatures and inbreeding were calculated. It was shown that the characteristics of plant size show a low to middle level of variability in the control group. The variation curves for these characteristics are similar to normal distribution curves; after the stress treatments they change slightly or remain invariant. Large adventive shoots show a high level of variability, and the distribution of the results in this case differs significantly from normal. The branching of plants changes after both stress factors: the number of all kinds of shoots decreases by half or more.
Triboelectric charging of volcanic ash from the 2011 Grímsvötn eruption.
Houghton, Isobel M P; Aplin, Karen L; Nicoll, Keri A
2013-09-13
The plume from the 2011 eruption of Grímsvötn was highly electrically charged, as shown by the considerable lightning activity measured by the United Kingdom Met Office's low-frequency lightning detection network. Previous measurements of volcanic plumes have shown that ash particles are electrically charged up to hundreds of kilometers away from the vent, which indicates that the ash continues to charge in the plume [R. G. Harrison, K. A. Nicoll, Z. Ulanowski, and T. A. Mather, Environ. Res. Lett. 5, 024004 (2010); H. Hatakeyama J. Meteorol. Soc. Jpn. 27, 372 (1949)]. In this Letter, we study triboelectric charging of different size fractions of a sample of volcanic ash experimentally. Consistently with previous work, we find that the particle size distribution is a determining factor in the charging. Specifically, our laboratory experiments demonstrate that the normalized span of the particle size distribution plays an important role in the magnitude of charging generated. The influence of the normalized span on plume charging suggests that all ash plumes are likely to be charged, with implications for remote sensing and plume lifetime through scavenging effects.
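The "normalized span" of a particle size distribution is commonly defined from percentile diameters as (d90 - d10)/d50; a minimal sketch of computing it on a synthetic log-normal ash size fraction (the distribution parameters below are illustrative assumptions, not the measured Grímsvötn values):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic ash size fraction: log-normal diameters in micrometres
diam = rng.lognormal(mean=np.log(40.0), sigma=0.6, size=100_000)

# normalized span ("relative span") of the particle size distribution
d10, d50, d90 = np.percentile(diam, [10, 50, 90])
span = (d90 - d10) / d50
```

A broader distribution (larger sigma) gives a larger span; the laboratory result above suggests that this single dimensionless number is a useful predictor of triboelectric charging magnitude.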
Time-evolution of grain size distributions in random nucleation and growth crystallization processes
NASA Astrophysics Data System (ADS)
Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.
2010-02-01
We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders
2013-09-07
The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by (125)I and (192)Ir brachytherapy radionuclides, and for a (60)Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single track frequency distributions confirm that at doses exceeding 0.08 Gy for (125)I, 0.1 Gy for (192)Ir, and 0.2 Gy for (60)Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented as a convenient form for the use in response modelling or similar contexts.
On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Rincon, Rafael; Liao, Liang
2003-01-01
Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° × 5° × 1 month space-time boxes.
To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
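The stated advantage — that the logarithm of any moment of a log-normal DSD is a linear combination of the individual parameters — follows from E[D^n] = exp(n·mu + n²·sigma²/2), so log(N_T·E[D^n]) = log N_T + n·mu + n²·sigma²/2. A quick numerical check of this identity (the parameter values are illustrative, not values from the paper):

```python
import numpy as np

mu, sigma, n_t = np.log(1.2), 0.4, 1000.0   # illustrative DSD parameters

def analytic_log_moment(n):
    # log of the n-th moment of a log-normal DSD: a linear combination
    # of log(N_T), mu and sigma^2
    return np.log(n_t) + n * mu + 0.5 * n**2 * sigma**2

# Monte Carlo log-moments for n = 3 (mass-like) and n = 6 (reflectivity-like)
rng = np.random.default_rng(2)
d = rng.lognormal(mu, sigma, size=2_000_000)
mc3 = np.log(n_t * np.mean(d**3))
mc6 = np.log(n_t * np.mean(d**6))
```

Because every radar/rainfall variable is approximately a moment, its logarithm inherits this linear (and hence jointly normal) structure in the DSD parameters.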
Jian, Yutao; He, Zi-Hua; Dao, Li; Swain, Michael V; Zhang, Xin-Ping; Zhao, Ke
2017-04-01
To investigate and characterize the distribution of fabrication defects in bilayered lithium disilicate glass-ceramic (LDG) crowns using micro-CT and 3D reconstruction. Ten standardized molar crowns (IPS e.max Press; Ivoclar Vivadent) were fabricated by heat-pressing on a core and subsequent manual veneering. All crowns were scanned by micro-CT and 3D reconstructed. The volume, position and sphericity of each defect were measured in every crown. Each crown was divided into four regions: central fossa (CF), occlusal fossa (OF), cusp (C) and axial wall (AW). The porosity and defect number density of each region were calculated. Statistical analyses were performed using the Welch two-sample t-test, the Friedman one-way rank sum test and the Nemenyi post-hoc test. The defect volume distribution type was determined based on the Akaike information criterion (AIC). The core ceramic contained fewer defects (p<0.001) than the veneer layer. The size of the smaller defects, which constituted 95% of the total, obeyed a logarithmic normal distribution. Region CF showed higher porosity (p<0.001) than the other regions. The defect number density of region CF was higher than that of region C (p<0.001) and region AW (p=0.029), but no difference was found between regions CF and OF (p>0.05). Four of the ten specimens contained their largest pores in region CF, while for the remaining six specimens the largest pore was in region OF. The LDG core ceramic contained fewer defects than the veneer ceramic. LDG strength estimated from pore size was comparable to literature values. Large defects were more likely to appear at the core-veneer interface of the occlusal fossa, while small defects were also distributed in every region of the crowns but tended to aggregate in the central fossa region. The size distribution of small defects in the veneer obeyed a logarithmic normal distribution. Copyright © 2017. Published by Elsevier Ltd.
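Selecting a defect-size distribution type by AIC, as described above, can be sketched with maximum-likelihood fits of candidate distributions; below, log-normal versus gamma on synthetic defect volumes (the data and parameters are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical defect volumes, drawn log-normal for the illustration
vol = rng.lognormal(mean=2.0, sigma=1.0, size=500)

def aic(dist, data):
    """AIC of a maximum-likelihood fit (location fixed at zero)."""
    params = dist.fit(data, floc=0)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

aic_lognorm = aic(stats.lognorm, vol)
aic_gamma = aic(stats.gamma, vol)
# the candidate with the lower AIC is preferred
```

Since the synthetic data are truly log-normal, the log-normal fit yields the lower AIC; on real data one would compare all plausible candidates in the same way.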
Characterising fabric, force distributions and porosity evolution in sheared granular media
NASA Astrophysics Data System (ADS)
Mair, Karen; Abe, Steffen; Jettestuen, Espen
2014-05-01
Active faults, landslides, subglacial tills and poorly consolidated or unconsolidated sands essentially contain accumulations of granular debris that evolve under load. Both the macroscopic motions and the bulk fluid flow characteristics that result are determined by the particular grain scale processes operating in this deformed or transformed granular material. A relevant question is how the local behavior at the individual granular contacts actually sums up, and in particular how the load-bearing skeleton (an important expression of connected load) and the spatial distribution of pore space (and hence fluid pathways) are linked. Here we investigate the spatial distribution of porosity along with granular rearrangements (specifically contact force network characteristics) produced in 3D discrete element models of granular layers under shear. We use percolation measures to identify, characterize, compare and track the evolution of strongly connected contact force networks. We show that specific topological measures used in describing the networks, such as the number of contacts and the coordination number, are sensitive to the grain size distribution of the material as well as to the loading conditions. In addition we probe the 3D spatial distribution of porosity as a function of increasing strain. Two cases are considered. The first is a non-fracture regime, in which configurational changes occur during shear but the grain size distribution remains constant; this would be expected for a soil or granular material under relatively low normal loading. The second is a fragmentation regime, in which the grain size distribution of the granular material evolves with accumulated strain. This mirrors the scenario for faults or the basal shear zones of slides under higher normal stress, where comminution is typically a mark of increasing maturity and plays a major role in the poro-perm evolution of the system.
We will present the correlated and anti-correlated features appearing in our simulations as well as discussing the triggers and relative persistence of fluid pathway creation versus destruction mechanisms. We will also demonstrate how the individual grain interactions are manifested in the macroscopic sliding behavior we observe.
Stress distribution in two-dimensional silos
NASA Astrophysics Data System (ADS)
Blanco-Rodríguez, Rodolfo; Pérez-Ángel, Gabriel
2018-01-01
Simulations of a polydispersed two-dimensional silo were performed using molecular dynamics, with different numbers of grains reaching up to 64 000, verifying numerically the model derived by Janssen and also the main assumption that the walls carry part of the weight due to the static friction of grains with themselves and with the silo's walls. We vary the friction coefficient, the radii dispersity, the silo width, and the size of grains. We find that Janssen's model becomes less relevant as the silo width increases, since the behavior of the stresses becomes more hydrostatic. Likewise, we obtain the normal and tangential stress distributions on the walls, evidencing the existence of points of maximum stress. We also obtained the stress matrix, with which we observe zones of load concentration, located always at a height of around two thirds of the granular column. Finally, we observe that the size of the grains affects the distribution of stresses, increasing the weight on the bottom and reducing the normal stress on the walls as the grains are made smaller (for the same total mass of the granulate), giving again a more hydrostatic and therefore less Janssen-type behavior for the weight of the column.
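The Janssen model verified in these simulations predicts that the vertical stress saturates with depth instead of growing hydrostatically, because wall friction screens the weight. A minimal sketch of the standard Janssen formula (the parameter values are illustrative, and the form below is the textbook cylindrical/slab version, which may differ in detail from the paper's 2D geometry):

```python
import numpy as np

def janssen_stress(z, rho, g, r_eff, mu, k):
    """Janssen vertical stress at depth z: rho is the bulk density,
    r_eff an effective silo radius (or half-width), mu the wall
    friction coefficient, k the horizontal-to-vertical stress ratio."""
    sat = rho * g * r_eff / (2 * mu * k)          # saturation stress
    return sat * (1.0 - np.exp(-2 * mu * k * z / r_eff))

z = np.linspace(0.0, 5.0, 200)                    # depth, m
stress = janssen_stress(z, rho=1500.0, g=9.81, r_eff=0.5, mu=0.5, k=0.8)
```

Near the free surface the stress is hydrostatic (approximately rho*g*z); at depth it saturates at rho*g*r_eff/(2*mu*k), which is the "walls carry part of the weight" effect the simulations test.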
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
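The truncation idea can be sketched as follows: if only studies whose effect estimates exceed some cutoff get published, the published effects follow a truncated normal, and the underlying mean can be recovered by maximum likelihood rather than by the (upward-biased) naive mean. A sketch on synthetic data, with an assumed known cutoff (the cutoff and parameter values are illustrative, not the paper's estimators):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
mu_true, sigma_true, cut = 0.3, 0.5, 0.2
effects = rng.normal(mu_true, sigma_true, 20_000)
published = effects[effects > cut]      # truncation: small effects unpublished

def nll(theta):
    """Negative log-likelihood of a left-truncated normal."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    a = (cut - mu) / sigma              # truncation point in standard units
    return -np.sum(stats.truncnorm.logpdf(published, a, np.inf,
                                          loc=mu, scale=sigma))

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

The naive mean of the published effects is noticeably larger than mu_hat, which illustrates why correction is needed; the paper's actual estimators also treat the truncation proportion as unknown.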
NASA Astrophysics Data System (ADS)
Sun, Ning-Chen; de Grijs, Richard; Cioni, Maria-Rosa L.; Rubele, Stefano; Subramanian, Smitha; van Loon, Jacco Th.; Bekki, Kenji; Bell, Cameron P. M.; Ivanov, Valentin D.; Marconi, Marcella; Muraveva, Tatiana; Oliveira, Joana M.; Ripepi, Vincenzo
2018-05-01
In this paper we report a clustering analysis of upper main-sequence stars in the Small Magellanic Cloud, using data from the VMC survey (the VISTA near-infrared YJK s survey of the Magellanic system). Young stellar structures are identified as surface overdensities on a range of significance levels. They are found to be organized in a hierarchical pattern, such that larger structures at lower significance levels contain smaller ones at higher significance levels. They have very irregular morphologies, with a perimeter–area dimension of 1.44 ± 0.02 for their projected boundaries. They have a power-law mass–size relation, power-law size/mass distributions, and a log-normal surface density distribution. We derive a projected fractal dimension of 1.48 ± 0.03 from the mass–size relation, or of 1.4 ± 0.1 from the size distribution, reflecting significant lumpiness of the young stellar structures. These properties are remarkably similar to those of a turbulent interstellar medium, supporting a scenario of hierarchical star formation regulated by supersonic turbulence.
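A projected fractal dimension derived from a power-law mass-size relation M ∝ R^D, as above, is simply the slope of a log-log fit; a minimal sketch on synthetic data (the slope and noise level are illustrative assumptions, and the fitted value here is not the paper's measurement):

```python
import numpy as np

rng = np.random.default_rng(9)
# hypothetical structures obeying a power-law mass-size relation M ~ R^D
d_true = 1.48
r = np.logspace(0, 2, 40)                      # sizes (arbitrary units)
m = 10.0 * r**d_true * np.exp(0.05 * rng.standard_normal(r.size))

# the fractal dimension is the slope in log-log space
d_fit, log_prefactor = np.polyfit(np.log10(r), np.log10(m), 1)
```

In practice one would fit the observed structure masses and sizes the same way, and compare the slope with the value obtained independently from the size distribution.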
Angular Momentum Transfer and Fractional Moment of Inertia in Pulsar Glitches
NASA Astrophysics Data System (ADS)
Eya, I. O.; Urama, J. O.; Chukwude, A. E.
2017-05-01
We use the Jodrell Bank Observatory glitch database, containing 472 glitches from 165 pulsars, to investigate the angular momentum transfer during rotational glitches in pulsars. Our emphasis is on pulsars with at least five glitches, of which there are 26 that exhibit 261 glitches in total. This paper identifies four pulsars in which the angular momentum transfer, after many glitches, is almost linear with time. The Lilliefors test on the cumulative distribution of glitch spin-up sizes in these glitching pulsars shows that glitch sizes in 12 pulsars are normally distributed, suggesting that their glitches originate from the same momentum reservoir. In addition, the distribution of the fractional moment of inertia (i.e., the ratio of the moments of inertia of the neutron star components that are involved in the glitch process) has a single mode, unlike the distribution of fractional glitch size (Δν/ν), which is usually bimodal. The mean fractional moment of inertia in the glitching pulsars we sampled has a very weak correlation with the pulsar spin properties, thereby supporting a neutron star interior mechanism for the glitch phenomenon.
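The Lilliefors test used above is a Kolmogorov-Smirnov test against a normal distribution whose parameters are estimated from the sample itself, which invalidates the standard KS critical values. A sketch that obtains the null distribution by parametric bootstrap instead of the tabulated Lilliefors values (an equivalent idea, not the exact tabulated procedure):

```python
import numpy as np
from scipy import stats

def lilliefors_normal(x, n_boot=2000, seed=0):
    """KS statistic against a normal with estimated parameters, with a
    parametric-bootstrap p-value (a sketch of the Lilliefors idea)."""
    rng = np.random.default_rng(seed)
    mu, s = x.mean(), x.std(ddof=1)
    d_obs = stats.kstest(x, 'norm', args=(mu, s)).statistic
    d_boot = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.normal(mu, s, size=x.size)
        d_boot[i] = stats.kstest(xb, 'norm',
                                 args=(xb.mean(), xb.std(ddof=1))).statistic
    return d_obs, float(np.mean(d_boot >= d_obs))

rng = np.random.default_rng(5)
_, p_norm = lilliefors_normal(rng.normal(0.0, 1.0, 80))       # normal sample
_, p_exp = lilliefors_normal(rng.exponential(1.0, 80))        # skewed sample
```

Applied to glitch spin-up sizes, a large p-value (as for the normal sample) is consistent with the single-reservoir interpretation, while a skewed sample is clearly rejected.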
Empirical Reference Distributions for Networks of Different Size
Smith, Anna; Calder, Catherine A.; Browning, Christopher R.
2016-01-01
Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
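The idea of normalizing a network statistic against a simple Bernoulli reference distribution can be sketched as follows: simulate G(n, p) graphs matched to the observed size and density, and express the observed statistic as a z-score against them. The statistic used here (transitivity) and the observed value are made-up illustrations, and the paper's actual method uses a mixture of such models:

```python
import numpy as np

def transitivity(adj):
    """Global clustering coefficient: 3 * triangles / connected triples."""
    a = adj.astype(float)
    triangles = np.trace(a @ a @ a) / 6.0
    deg = a.sum(axis=1)
    triples = np.sum(deg * (deg - 1)) / 2.0
    return 0.0 if triples == 0 else 3.0 * triangles / triples

def bernoulli_reference(n, p, stat, n_sim=300, seed=0):
    """Reference distribution of `stat` under a G(n, p) Bernoulli graph
    matched to the observed network's size and density."""
    rng = np.random.default_rng(seed)
    vals = np.empty(n_sim)
    for i in range(n_sim):
        upper = np.triu(rng.random((n, n)) < p, 1)     # random upper triangle
        adj = (upper | upper.T).astype(int)            # symmetric, no loops
        vals[i] = stat(adj)
    return vals

n, p = 60, 0.1                       # observed size and density (hypothetical)
ref = bernoulli_reference(n, p, transitivity)
z = (0.25 - ref.mean()) / ref.std()  # adjusted statistic for an observed 0.25
```

The z-score is comparable across networks of different sizes in a way the raw statistic is not, which is the motivation for reference-distribution adjustment.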
Substitution of stable isotopes in Chlorella
NASA Technical Reports Server (NTRS)
Flaumenhaft, E.; Katz, J. J.; Uphaus, R. A.
1969-01-01
Replacement of biologically important isotopes in the alga Chlorella by corresponding heavier stable isotopes produces increasingly greater deviations from the normal cell size and changes the quality and distribution of certain cellular components. The usefulness of isotopically altered organisms increases interest in the study of such permuted organisms.
NASA Astrophysics Data System (ADS)
Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.
2017-04-01
Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.
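Fitting the Pareto power-law tail of a wealth distribution, as in the upper class above, is commonly done with the Hill/maximum-likelihood estimator for the tail exponent, alpha_hat = n / sum(log(x_i / x_min)). A sketch on synthetic data (the exponent and threshold are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha_true, xmin = 1.5, 10.0
# numpy's pareto draws Lomax variates; xmin*(1 + X) is Pareto(alpha, xmin)
tail = xmin * (1.0 + rng.pareto(alpha_true, size=5000))

# Hill / maximum-likelihood estimate of the Pareto tail exponent
alpha_hat = tail.size / np.sum(np.log(tail / xmin))
```

In practice x_min (the boundary between the quasi-exponential body and the Pareto tail) must itself be chosen or estimated before applying the estimator.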
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
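The kind of simulation comparison described above can be sketched by drawing groups from a distinctly non-normal population and recording the rejection rates of ANOVA and Kruskal-Wallis, both under the null and under a location shift. The sample sizes, distribution and shift below are illustrative choices, not the study's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def rejection_rates(samplers, n_rep=1500, alpha=0.05):
    """Fraction of replications in which one-way ANOVA and the
    Kruskal-Wallis test reject at level alpha."""
    rej_f = rej_kw = 0
    for _ in range(n_rep):
        groups = [s() for s in samplers]
        rej_f += stats.f_oneway(*groups).pvalue < alpha
        rej_kw += stats.kruskal(*groups).pvalue < alpha
    return rej_f / n_rep, rej_kw / n_rep

draw = lambda: rng.lognormal(0.0, 1.0, 15)            # skewed population
f_null, kw_null = rejection_rates([draw, draw, draw]) # identical groups
shifted = lambda: rng.lognormal(0.0, 1.0, 15) + 1.0   # location shift
f_alt, kw_alt = rejection_rates([draw, draw, shifted])
```

Repeating this over distributions, sample sizes and effect sizes, and additionally ranking the simulated samples by their degree of normality, gives the kind of evidence on which the paper's conclusion about the Kruskal-Wallis test rests.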
Human preocular mucins reflect changes in surface physiology.
Berry, M; Ellingham, R B; Corfield, A P
2004-03-01
Mucin function is associated with both peptide core and glycosylation characteristics. The authors assessed whether structural alterations occurring during mucin residence in the tear film reflect changes in ocular surface physiology. Ocular surface mucus was collected from normal volunteers as N-acetyl cysteine (NAcCys) washes or directly from the speculum after cataract surgery. To assess the influence of surface health on mucins, NAcCys washings were also obtained from patients with symptoms, but no clinical signs of dry eye (symptomatics). Mucins were extracted in guanidine hydrochloride (GuHCl) with protease inhibitors. Buoyant density of mucin species, a correlate of glycosylation density, was followed by reactivity with anti-peptide core antibodies. Mucin hydrodynamic volume was assessed by gel filtration on Sepharose CL2B. Surface fluid and mucus contained soluble forms of MUC1, MUC2, MUC4, and MUC5AC and also the same species requiring DTT solubilisation. Reactivity with antibodies to MUC2 and MUC5AC peaked at 1.3-1.5 g/ml in normals, while dominated by underglycosylated forms in symptomatics. Surface mucins were predominantly smaller than intracellular species. MUC2 size distributions were different in symptomatics and normals, while those of MUC5AC were similar in these two groups. A reduction in surface mucin size indicates post-secretory cleavage. Dissimilarities in surface mucin glycosylation and individual MUC size distributions in symptomatics suggest changes in preocular mucin that might precede dry eye signs.
An asymptotic analysis of the logrank test.
Strawderman, R L
1997-01-01
Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
Design and characterization of a cough simulator.
Zhang, Bo; Zhu, Chao; Ji, Zhiming; Lin, Chao-Hsin
2017-02-23
Expiratory droplets from human coughing have always been considered as potential carriers of pathogens, responsible for respiratory infectious disease transmission. To study the transmission of disease by human coughing, a transient repeatable cough simulator has been designed and built. Cough droplets are generated by different mechanisms, such as the breaking of mucus, condensation and high-speed atomization from different depths of the respiratory tract. These mechanisms in coughing produce droplets of different sizes, represented by a bimodal distribution of 'fine' and 'coarse' droplets. A cough simulator is hence designed to generate transient sprays with such bimodal characteristics. It consists of a pressurized gas tank, a nebulizer and an ejector, connected in series, which are controlled by computerized solenoid valves. The bimodal droplet size distribution is characterized for the coarse droplets and fine droplets, by fibrous collection and laser diffraction, respectively. The measured size distributions of coarse and fine droplets are reasonably represented by the Rosin-Rammler and log-normal distributions in probability density function, which leads to a bimodal distribution. To assess the hydrodynamic consequences of coughing including droplet vaporization and polydispersion, a Lagrangian model of droplet trajectories is established, with its ambient flow field predetermined from a computational fluid dynamics simulation.
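A bimodal droplet sample of the kind described, a log-normal fine mode plus a Rosin-Rammler (Weibull-form) coarse mode, can be sketched as follows; all parameter values here are illustrative assumptions, not the paper's measured fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed mode parameters (microns), for illustration only:
n_fine, n_coarse = 8000, 2000
fine = rng.lognormal(mean=np.log(8.0), sigma=0.5, size=n_fine)   # fine mode, ~8 um median
coarse = 80.0 * rng.weibull(a=2.5, size=n_coarse)                # Rosin-Rammler, 80 um scale

# Pooling the two modes yields a bimodal size distribution
droplets = np.concatenate([fine, coarse])
print(f"fine median  ~{np.median(fine):.1f} um")
print(f"coarse median ~{np.median(coarse):.1f} um")
```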
Atomisation and droplet formation mechanisms in a model two-phase mixing layer
NASA Astrophysics Data System (ADS)
Zaleski, Stephane; Ling, Yue; Fuster, Daniel; Tryggvason, Gretar
2017-11-01
We study atomization in a turbulent two-phase mixing layer inspired by the Grenoble air-water experiments. A planar gas jet of large velocity is emitted on top of a planar liquid jet of smaller velocity. The density and momentum ratios are both set at 20 in the numerical simulation in order to ease the computation. We use a Volume-Of-Fluid method with good parallelisation properties, implemented in our code http://parissimulator.sf.net. Our simulations show two distinct droplet formation mechanisms, one in which thin liquid sheets are punctured to form rapidly expanding holes and the other in which ligaments of irregular shape form and break up in a manner similar but not identical to jets in Rayleigh-Plateau-Savart instabilities. Observed distributions of particle sizes are extracted for a sequence of ever more refined grids, the largest grid containing approximately eight billion points. Although their accuracy is limited at small sizes by the grid resolution and at large sizes by statistical effects, the distributions overlap in the central region. The observed distributions are much closer to log-normal distributions than to gamma distributions, as is also the case in experiments.
Photoballistics of volcanic jet activity at Stromboli, Italy
NASA Technical Reports Server (NTRS)
Chouet, B.; Hamisevicz, N.; Mcgetchin, T. R.
1974-01-01
Two night eruptions of the volcano Stromboli were studied through 70-mm photography. Single-camera techniques were used. Particle sphericity, constant velocity in the frame, and radial symmetry were assumed. Properties of the particulate phase found through analysis include: particle size, velocity, total number of particles ejected, angular dispersion and distribution in the jet, time variation of particle size and apparent velocity distribution, averaged volume flux, and kinetic energy carried by the condensed phase. The frequency distributions of particle size and apparent velocities are found to be approximately log normal. The properties of the gas phase were inferred from the fact that it was the transporting medium for the condensed phase. Gas velocity and time variation, volume flux of gas, dynamic pressure, mass erupted, and density were estimated. A CO2-H2O mixture is possible for the observed eruptions. The flow was subsonic. Velocity variations may be explained by an organ pipe resonance. Particle collimation may be produced by a Magnus effect.
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution.
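The failure of normal-theory intervals under heavy right skew is easy to demonstrate. The sketch below uses a log-normal population as a stand-in (the paper's own absolute-value Cauchy example has no finite mean, so coverage is not even defined there); sample size and parameters are assumed for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Coverage of the usual t-interval for the mean of a right-skewed population
true_mean = np.exp(0.5)          # mean of lognormal(0, 1)
n, reps, covered = 30, 4000, 0
for _ in range(reps):
    x = rng.lognormal(0.0, 1.0, n)
    half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    if abs(x.mean() - true_mean) <= half:
        covered += 1
print(f"empirical coverage: {covered / reps:.3f}  (nominal 0.95)")
```

The empirical coverage falls noticeably short of the nominal 95%, because the under-sampled right tail pulls the interval left of the true mean.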
Backscatter and extinction measurements in cloud and drizzle at CO2 laser wavelengths
NASA Technical Reports Server (NTRS)
Jennings, S. G.
1986-01-01
The backscatter and extinction of laboratory-generated cloud and drizzle-sized water drops were measured at carbon dioxide laser wavelengths (predominantly at lambda = 10.591 micrometers). Two distinctly different drop size regimes were studied: one which covers the range normally encompassed by natural cloud droplets and the other representative of mist or drizzle-sized drops. The derivation and verification of the relation between extinction and backscatter at carbon dioxide laser wavelengths should allow the determination of large cloud drop and drizzle extinction coefficients solely from a lidar return signal without requiring knowledge of the drop size distribution. This result will also apply to precipitation-sized drops so long as they are spherical.
NASA Astrophysics Data System (ADS)
Usselman, Robert J.; Russek, Stephen E.; Klem, Michael T.; Allen, Mark A.; Douglas, Trevor; Young, Mark; Idzerda, Yves U.; Singel, David J.
2012-10-01
Electron magnetic resonance (EMR) spectroscopy was used to determine the magnetic properties of maghemite (γ-Fe2O3) nanoparticles formed within size-constraining Listeria innocua (LDps)-(DNA-binding protein from starved cells) protein cages that have an inner diameter of 5 nm. Variable-temperature X-band EMR spectra exhibited broad asymmetric resonances with a superimposed narrow peak at a gyromagnetic factor of g ≈ 2. The resonance structure, which depends on both superparamagnetic fluctuations and inhomogeneous broadening, changes dramatically as a function of temperature, and the overall linewidth becomes narrower with increasing temperature. Here, we compare two different models to simulate temperature-dependent lineshape trends. The temperature dependence for both models is derived from a Langevin behavior of the linewidth resulting from "anisotropy melting." The first uses either a truncated log-normal distribution of particle sizes or a bi-modal distribution and then a Landau-Lifshitz lineshape to describe the nanoparticle resonances. The essential feature of this model is that small particles have narrow linewidths and account for the g ≈ 2 feature with a constant resonance field, whereas larger particles have broad linewidths and undergo a shift in resonance field. The second model assumes uniform particles with a diameter around 4 nm and a random distribution of uniaxial anisotropy axes. This model uses a more precise calculation of the linewidth due to superparamagnetic fluctuations and a random distribution of anisotropies. Sharp features in the spectrum near g ≈ 2 are qualitatively predicted at high temperatures. Both models can account for many features of the observed spectra, although each has deficiencies. The first model leads to a nonphysical increase in magnetic moment as the temperature is increased if a log-normal distribution of particle sizes is used.
Introducing a bi-modal distribution of particle sizes resolves the unphysical increase in moment with temperature. The second model predicts low-temperature spectra that differ significantly from the observed spectra. The anisotropy energy density K1, determined by fitting the temperature-dependent linewidths, was ˜50 kJ/m3, which is considerably larger than that of bulk maghemite. The work presented here indicates that the magnetic properties of these size-constrained nanoparticles and more generally metal oxide nanoparticles with diameters d < 5 nm are complex and that currently existing models are not sufficient for determining their magnetic resonance signatures.
Guzmán-de la Garza, Francisco J; González Ayala, Alejandra E; Gómez Nava, Marisol; Martínez Monsiváis, Leislie I; Salinas Martínez, Ana M; Ramírez López, Erik; Mathiew Quirós, Alvaro; Garcia Quintanilla, Francisco
2017-09-10
The main aim of this study was to test the hypothesis that body frame size is related to the amount of fat in different adipose tissue depots and to fat distribution in schoolchildren. Children aged between 5 and 10 years were included in this cross-sectional study (n = 565). Body frame size, adiposity markers (anthropometric, skinfolds thickness, and ultrasound measures), and fat distribution indices were analyzed. Correlation coefficients adjusted by reliability were estimated and analyzed by sex; the significance of the difference between two correlation coefficients was assessed using the Fisher z-transformation. The sample included primarily urban children; 58.6% were normal weight, 16.1% overweight, 19.6% obese, and the rest were underweight. Markers of subcutaneous adiposity, fat mass and fat-free mass, and preperitoneal adiposity showed higher and significant correlations with the sum of the biacromial + bitrochanteric diameter than with the elbow diameter, regardless of sex. The fat distribution conicity index presented significant but weak correlations; and visceral adipose tissue, hepatic steatosis, and the waist-for-hip ratio were not significantly correlated with body frame size measures. Body frame size in school children was related to the amount of adipose tissue in different depots, but not adipose distribution. More studies are needed to confirm this relationship and its importance to predict changes in visceral fat deposition during growth. © 2017 Wiley Periodicals, Inc.
2015-09-01
Extremely Lightweight Intrusion Detection (ELIDe) algorithm on an Android-based mobile device. Our results show that the hashing and inner product...approximately 2.5 megabits per second (assuming a normal distribution of packet sizes) with no significant packet loss. 15. SUBJECT TERMS ELIDe, Android, pcap...system (OS). To run ELIDe, the current version was ported for use on Android.4 2.1 Mobile Device After ELIDe was ported to the Android mobile
Numerical modeling of nanodrug distribution in tumors with heterogeneous vasculature.
Chou, Cheng-Ying; Chang, Wan-I; Horng, Tzyy-Leng; Lin, Win-Li
2017-01-01
The distribution and accumulation of nanoparticle dosage in a tumor are important in evaluating the effectiveness of cancer treatment. The cell survival rate can quantify the therapeutic effect, and the survival rates after multiple treatments are helpful to evaluate the efficacy of a chemotherapy plan. We developed a mathematical tumor model based on the governing equations describing the fluid flow and particle transport to investigate the drug transportation in a tumor and computed the resulting cumulative concentrations. The cell survival rate was calculated based on the cumulative concentration. The model was applied to a subcutaneous tumor with heterogeneous vascular distributions. Various sized dextrans and doxorubicin were respectively chosen as the nanodrug carrier and the traditional chemotherapeutic agent for comparison. The results showed that: 1) the largest nanoparticle drug in the current simulations yielded the highest cumulative concentration in the well-vascularized region, but the second lowest in the surrounding normal tissues, which implies the best therapeutic effect on the tumor with little harm to normal tissue; 2) on the contrary, the molecular chemotherapeutic agent produced the second lowest cumulative concentration in the well-vascularized tumor region, but the highest in the surrounding normal tissue; 3) all drugs had very small cumulative concentrations in the tumor necrotic region, where drug transport is solely through diffusion, which implies that tumor stem cells hiding there are hard to kill. The current model indicated that the effectiveness of the anti-tumor drug delivery was determined by the interplay of the vascular density and nanoparticle size, which governs the drug transport properties. The use of nanoparticles as anti-tumor drug carriers is generally a better choice than a molecular chemotherapeutic agent because of its high treatment efficiency on tumor cells and lesser damage to normal tissues.
Creating a Bimodal Drop-Size Distribution in the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
King-Steen, Laura E.; Ide, Robert F.
2017-01-01
The Icing Research Tunnel at NASA Glenn has demonstrated that they can create a drop-size distribution that matches the FAA Part 25 Appendix O FZDZ, MVD <40 microns normalized cumulative volume within 10%. This is done by simultaneously spraying the Standard and Mod1 nozzles at the same nozzle air pressure and different nozzle water pressures. It was also found through these tests that the distributions that are measured when the two nozzle sets are sprayed simultaneously closely matched what was found by combining the two individual distributions analytically. Additionally, distributions were compared between spraying all spraybars and also by spraying only every-other spraybar, and were found to match within 4%. The cloud liquid water content uniformity for this condition has been found to be excellent. It should be noted, however, that the liquid water content for this condition in the IRT is much higher than the requirement specified in Part 25 Appendix O.
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for both the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
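The one-sided case described above has a closed form in terms of a noncentral t quantile: k = t'_{γ}(n−1, δ)/√n with noncentrality δ = z_p √n, where p is the covered proportion and γ the confidence level. A minimal sketch (not the paper's own code, which predates SciPy):

```python
import numpy as np
from scipy import stats

def one_sided_k(n, p, gamma):
    """One-sided normal tolerance factor k: with confidence gamma,
    xbar + k*s bounds at least proportion p of a normal population.
    Exact, via the noncentral t-distribution."""
    delta = stats.norm.ppf(p) * np.sqrt(n)            # noncentrality z_p * sqrt(n)
    return stats.nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

# Matches the standard tabulated value k = 2.911 for n=10, p=0.95, gamma=0.95
print(f"k(n=10, p=0.95, gamma=0.95) = {one_sided_k(10, 0.95, 0.95):.3f}")
```

The two-sided factor has no comparably simple closed form, which is where the paper's numerical integration and root-solving come in.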
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Bayesian Estimation Supersedes the "t" Test
ERIC Educational Resources Information Center
Kruschke, John K.
2013-01-01
Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…
Introduction to Permutation and Resampling-Based Hypothesis Tests
ERIC Educational Resources Information Center
LaFleur, Bonnie J.; Greevy, Robert A.
2009-01-01
A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presences of…
A Noncentral "t" Regression Model for Meta-Analysis
ERIC Educational Resources Information Center
Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi
2010-01-01
In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…
The Distribution of Obesity Phenotypes in HIV-Infected African Population
Nguyen, Kim Anh; Peer, Nasheeta; de Villiers, Anniza; Mukasa, Barbara; Matsha, Tandi E.; Mills, Edward J.; Kengne, Andre Pascal
2016-01-01
The distribution of body size phenotypes in people with human immunodeficiency virus (HIV) infection has yet to be characterized. We assessed the distribution of body size phenotypes overall, and according to antiretroviral therapy (ART), diagnosed duration of the infection and CD4 count in a sample of HIV infected people recruited across primary care facilities in the Western Cape Province, South Africa. Adults aged ≥ 18 years were consecutively recruited using random sampling procedures, and their cardio-metabolic profile were assessed during March 2014 and February 2015. They were classified across body mass index (BMI) categories as normal-weight (BMI < 25 kg/m2), overweight (25 ≤ BMI < 30 kg/m2), and obese (BMI ≥ 30 kg/m2), and further classified according to their metabolic status as “metabolically healthy” vs. “metabolically abnormal” if they had less than two vs. two or more of the following abnormalities: high blood glucose, raised blood pressure, raised triglycerides, and low HDL-cholesterol. Their cross-classification gave the following six phenotypes: normal-weight metabolically healthy (NWMH), normal-weight metabolically abnormal (NWMA), overweight metabolically healthy (OvMH), overweight metabolically abnormal (OvMA), obese metabolically healthy (OMH), and obese metabolically abnormal (OMA). Among the 748 participants included (median age 38 years (25th–75th percentiles: 32–44)), 79% were women. The median diagnosed duration of HIV was five years; the median CD4 count was 392 cells/mm3 and most participants were on ART. The overall distribution of body size phenotypes was the following: 31.7% (NWMH), 11.7% (NWMA), 13.4% (OvMH), 9.5% (OvMA), 18.6% (OMH), and 15.1% (OMA). The distribution of metabolic phenotypes across BMI levels did not differ significantly in men vs. women (p = 0.062), in participants below vs. those at or above median diagnosed duration of HIV infection (p = 0.897), in participants below vs. 
those at or above median CD4 count (p = 0.447), and by ART regimens (p = 0.205). In this relatively young sample of HIV-infected individuals, metabolically abnormal phenotypes are frequent across BMI categories. This highlights the importance of general measures targeting an overall improvement in cardiometabolic risk profile across the spectrum of BMI distribution in all adults with HIV. PMID:27271659
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.; Pimpinelli, Alberto; González, Diego Luis; Morales-Cifuentes, Josue R.
2015-09-01
In studies of epitaxial growth, analysis of the distribution of the areas of capture zones (i.e. proximity polygons or Voronoi tessellations with respect to island centers) is often the best way to extract the critical nucleus size i. For non-random nucleation the normalized areas s of these Voronoi cells are well described by the generalized Wigner distribution (GWD) Pβ(s) = asβ exp(-bs2), particularly in the central region 0.5 < s < 2 where data are least noisy. Extensive Monte Carlo simulations reveal inadequacies of our earlier mean field analysis, suggesting β = i + 2 for diffusion-limited aggregation (DLA). Since simulations generate orders of magnitude more data than experiments, they permit close examination of the tails of the distribution, which differ from the simple GWD form. One refinement is based on a fragmentation model. We also compare analysis by island-size distribution and by scaling of island density with flux. Modifications appear for attach-limited aggregation (ALA). We focus on the experimental system para-hexaphenyl on amorphous mica, comparing the results of the three analysis techniques and reconciling their results via a novel model of hot precursors based on rate equations, pointing out the existence of intermediate scaling regimes between DLA and ALA.
NASA Astrophysics Data System (ADS)
Lane, Rebecca E.; Korbie, Darren; Anderson, Will; Vaidyanathan, Ramanathan; Trau, Matt
2015-01-01
Exosomes are vesicles which have garnered interest due to their diagnostic and therapeutic potential. Isolation of pure yields of exosomes from complex biological fluids whilst preserving their physical characteristics is critical for downstream applications. In this study, we use 100 nm-liposomes from 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and cholesterol as a model system to assess the effect of exosome isolation protocols on vesicle recovery and size distribution using a single-particle analysis method. We demonstrate that liposome size distribution and ζ-potential are comparable to extracted exosomes, making them an ideal model for comparison studies. Four different purification protocols were evaluated, with liposomes robustly isolated by three of them. Recovered yields varied and liposome size distribution was unaltered during processing, suggesting that these protocols do not induce particle aggregation. This leads us to conclude that the size distribution profile and characteristics of vesicles are stably maintained during processing and purification, suggesting that reports detailing how exosomes derived from tumour cells differ in size from those of normal cells are reporting a real phenomenon. However, we hypothesize that larger particles present in most purified exosome samples represent co-purified contaminating non-exosome debris. These isolation techniques are therefore likely nonspecific and may co-isolate non-exosome material of similar physical properties.
Kornilov, Oleg; Toennies, J Peter
2008-05-21
Clusters consisting of normal H2 molecules, produced in a free jet expansion, are size selected by diffraction from a transmission nanograting prior to electron impact ionization. For each neutral cluster (H2)(N) (N=2-40), the relative intensities of the ion fragments Hn+ are measured with a mass spectrometer. H3+ is found to be the most abundant fragment up to N=17. With a further increase in N, the abundances of H3+, H5+, H7+, and H9+ first increase and, after passing through a maximum, approach each other. At N=40, they are about the same and more than a factor of 2 and 3 larger than for H11+ and H13+, respectively. For a given neutral cluster size, the intensities of the ion fragments follow a Poisson distribution. The fragmentation probabilities are used to determine the neutral cluster size distribution produced in the expansion at a source temperature of 30.1 K and a source pressure of 1.50 bar. The distribution shows no clear evidence of a magic number N=13 as predicted by theory and found in experiments with pure para-H2 clusters. The ion fragment distributions are also used to extract information on the internal energy distribution of the H3+ ions produced in the reaction H2+ + H2-->H3+ +H, which is initiated upon ionization of the cluster. The internal energy is assumed to be rapidly equilibrated and to determine the number of molecules subsequently evaporated. The internal energy distribution found in this way is in good agreement with data obtained in an earlier independent merged beam scattering experiment.
NASA Astrophysics Data System (ADS)
Liu, Yanxiao; Xiang, Yongyuan; Erdélyi, Robertus; Liu, Zhong; Li, Dong; Ning, Zongjun; Bi, Yi; Wu, Ning; Lin, Jun
2018-03-01
Properties of photospheric bright points (BPs) near an active region have been studied in TiO λ 7058 Å images observed by the New Vacuum Solar Telescope of the Yunnan Observatories. We developed a novel recognition method that was used to identify and track 2010 BPs. The observed evolving BPs are classified into isolated (individual) and non-isolated (where multiple BPs are observed to display splitting and merging behaviors) sets. About 35.1% of BPs are non-isolated. For both isolated and non-isolated BPs, the brightness varies from 0.8 to 1.3 times the average background intensity and follows a Gaussian distribution. The lifetimes of BPs follow a log-normal distribution, with characteristic lifetimes of (267 ± 140) s and (421 ± 255) s, respectively. Their sizes also follow a log-normal distribution, with an average size of about (2.15 ± 0.74) × 104 km2 and (3.00 ± 1.31) × 104 km2 for area, and (163 ± 27) km and (191 ± 40) km for diameter, respectively. Our results indicate that regions with strong background magnetic field have higher BP number density and higher BP area coverage than regions with weak background field. Apparently, the brightness/size of BPs does not depend on the background field. Lifetimes in regions with strong background magnetic field are shorter than those in regions with weak background field, on average.
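Fitting a log-normal to lifetime data of this kind is a one-liner with SciPy; the sketch below generates synthetic lifetimes around the reported isolated-BP mean of ~267 s (the spread parameter is an illustrative assumption, not taken from the paper) and recovers the parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic BP lifetimes (seconds); sigma = 0.5 is assumed, mu is chosen
# so that the log-normal mean exp(mu + sigma^2/2) equals ~267 s.
sigma = 0.5
mu = np.log(267) - sigma**2 / 2
lifetimes = rng.lognormal(mu, sigma, size=2000)

# Fit with the location pinned at zero, as is usual for strictly
# positive lifetimes; scale = exp(mu_hat) is the fitted median.
shape, loc, scale = stats.lognorm.fit(lifetimes, floc=0)
print(f"fitted sigma = {shape:.2f}, fitted median = {scale:.0f} s")
```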
Grain coarsening in two-dimensional phase-field models with an orientation field
NASA Astrophysics Data System (ADS)
Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László
2017-05-01
In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works including theoretical studies based on conventional phase-field simulations that rely on coarse grained fields, like the multi-phase-field (MPF) and orientation field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem, and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small angle grain boundaries.
Neuropsychological constraints to human data production on a global scale
NASA Astrophysics Data System (ADS)
Gros, C.; Kaczor, G.; Marković, D.
2012-01-01
What are the factors underlying human information production on a global level? In order to gain an insight into this question we study a corpus of 252-633 million publicly available data files on the Internet corresponding to an overall storage volume of 284-675 Terabytes. Analyzing the file size distribution for several distinct data types we find indications that the neuropsychological capacity of the human brain to process and record information may constitute the dominant limiting factor for the overall growth of globally stored information, with real-world economic constraints having only a negligible influence. This supposition draws support from the observation that the file size distributions follow a power law for data without a time component, like images, and a log-normal distribution for multimedia files, for which time is a defining qualia.
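Distinguishing a power law from a log-normal in empirical size data is a standard model-selection exercise. A minimal sketch on synthetic data (the sample below is generated log-normal, so the comparison should favor that law; all parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic "file sizes" in bytes, drawn log-normally
sizes = rng.lognormal(mean=12.0, sigma=2.0, size=5000)

# Fit both candidate laws with location pinned at zero, then compare
# total log-likelihoods (a likelihood-ratio style comparison).
ln_params = stats.lognorm.fit(sizes, floc=0)
pa_params = stats.pareto.fit(sizes, floc=0)   # Pareto = power-law tail
ll_ln = stats.lognorm.logpdf(sizes, *ln_params).sum()
ll_pa = stats.pareto.logpdf(sizes, *pa_params).sum()
print("log-normal preferred" if ll_ln > ll_pa else "power law preferred")
```

On real corpora the two laws can be hard to tell apart in the tail alone, which is why the whole-distribution fit matters.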
Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions
NASA Astrophysics Data System (ADS)
Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.
2014-12-01
The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean ostracod size down, rather than a neutral model. Our model matched the overall magnitude of the size decrease, but produced a constant linear decrease, while the observed data show a much more rapid initial decline followed by a roughly constant size. The nuance of the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
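A toy version of such a stochastic branching model can be sketched in a few lines. All rates and step sizes below are illustrative placeholders (not the Treatise-derived rates), and sizes are treated on a log scale so that normally distributed differentials add:

```python
import numpy as np

rng = np.random.default_rng(42)

def branch_sizes(n_steps=150, p_speciate=0.1, p_extinct=0.08,
                 step_mean=0.0, step_sd=0.03, size0=0.0):
    """Toy stochastic branching model of size evolution.  Sizes are on a
    log scale, so normal size differentials add.  step_mean = 0 is the
    neutral model; a negative step_mean biases daughter lineages toward
    smaller sizes (the directional model).  All rates are illustrative."""
    sizes = [size0]
    for _ in range(n_steps):
        survivors = []
        for s in sizes:
            if rng.random() < p_extinct and len(sizes) > 1:
                continue                        # lineage goes extinct
            survivors.append(s)
            if rng.random() < p_speciate:       # speciation: new daughter
                survivors.append(s + rng.normal(step_mean, step_sd))
        sizes = survivors if survivors else [size0]
    return np.array(sizes)

neutral = branch_sizes(step_mean=0.0)
directional = branch_sizes(step_mean=-0.05)
print(neutral.mean() > directional.mean())      # True: drift pushes sizes down
```

Comparing the tip-size distributions of many such simulated trees against the fossil record is the essence of the comparison the abstract describes.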
Chen, Hua; Chen, Kun
2013-01-01
The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939
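The constant-size case quoted from Griffiths (1984) is easy to check by simulation, since under the Kingman coalescent the interval with k lineages is exponential with rate k(k-1)/(2N). A sketch (parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def coalescent_times(n, N, reps):
    """Simulate Kingman coalescent intervals for constant population size N:
    while k lineages remain, the waiting time to the next coalescence is
    exponential with mean 2N / (k(k-1)).  Returns (first intervals, TMRCAs)."""
    ks = np.arange(n, 1, -1)                    # k = n, n-1, ..., 2
    means = 2.0 * N / (ks * (ks - 1))
    intervals = rng.exponential(means, size=(reps, ks.size))
    return intervals[:, 0], intervals.sum(axis=1)

n, N, reps = 100, 10_000, 20_000
first, tmrca = coalescent_times(n, N, reps)

# For the first coalescence (m = n-1), n(n-1)T/(2N) is Exp(1) = Gamma(1,1),
# the m -> n limit of the gamma result quoted in the abstract.
scaled = n * (n - 1) * first / (2.0 * N)
print(round(scaled.mean(), 2), round(scaled.var(), 2))    # both ~ 1
# E[TMRCA] = 2N(1 - 1/n) for constant N; the simulated mean should match.
print(round(tmrca.mean() / (2.0 * N * (1 - 1 / n)), 2))   # ~ 1
```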
NASA Astrophysics Data System (ADS)
Alves, S. G.; Martins, M. L.
2010-09-01
Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxis the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxis account for the simplest cluster size distributions of cellular aggregates observed in culture.
Microfracture spacing distributions and the evolution of fracture patterns in sandstones
NASA Astrophysics Data System (ADS)
Hooker, J. N.; Laubach, S. E.; Marrett, R.
2018-03-01
Natural fracture patterns in sandstone were sampled using scanning electron microscope-based cathodoluminescence (SEM-CL) imaging. All fractures are opening-mode and are fully or partially sealed by quartz cement. Most sampled fractures are too small to be height-restricted by sedimentary layers. At very low strains (<∼0.001), fracture spatial distributions are indistinguishable from random, whereas at higher strains, fractures are generally statistically clustered. All 12 large (N > 100) datasets show spacings that are best fit by log-normal size distributions, compared to exponential, power law, or normal distributions. The clustering of fractures suggests that the locations of natural fractures are not determined by a random process. To investigate natural fracture localization, we reconstructed the opening history of a cluster of fractures within the Huizachal Group in northeastern Mexico, using fluid inclusions from synkinematic cements and thermal-history constraints. The largest fracture, which is the only fracture in the cluster visible to the naked eye, among 101 present, opened relatively late in the sequence. This result suggests that the growth of sets of fractures is a self-organized process, in which small, initially isolated fractures grow and progressively interact, with preferential growth of a subset of fractures developing at the expense of growth of the rest. Size-dependent sealing of fractures within sets suggests that synkinematic cementation may contribute to fracture clustering.
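The model comparison described here (log-normal versus exponential spacings) can be sketched with closed-form MLEs and AIC; the synthetic "spacings" below are illustrative, not the paper's measurements:

```python
import numpy as np

def aic_lognormal(x):
    """AIC under a log-normal model with closed-form MLEs (2 parameters)."""
    logs = np.log(x)
    mu, sig = logs.mean(), logs.std()          # ddof=0 std is the MLE
    ll = np.sum(-np.log(x * sig * np.sqrt(2 * np.pi))
                - (logs - mu) ** 2 / (2 * sig ** 2))
    return 2 * 2 - 2 * ll

def aic_exponential(x):
    """AIC under an exponential model; MLE rate is 1/mean (1 parameter)."""
    lam = 1.0 / x.mean()
    ll = np.sum(np.log(lam) - lam * x)
    return 2 * 1 - 2 * ll

rng = np.random.default_rng(7)
spacings = rng.lognormal(mean=0.0, sigma=0.8, size=400)   # synthetic spacings
print(aic_lognormal(spacings) < aic_exponential(spacings))  # True here
```

The lower AIC wins; the same comparison extends to power-law and normal candidates with their own likelihoods.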
Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements
NASA Astrophysics Data System (ADS)
Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.
2014-11-01
We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
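The 29-flaw convention follows directly from the binomial model: 29 is the smallest all-success demonstration for which a true POD of exactly 0.90 would pass with probability below 5%, which is what gives the 90/95 claim.

```python
import math

# Probability that all 29 detections succeed when the true POD is exactly 0.90.
p_all = 0.90 ** 29
print(round(p_all, 4))          # 0.0471 < 0.05

# Smallest flaw-set size n such that a perfect n/n record demonstrates
# 90% POD with 95% confidence, i.e. 0.9**n < 0.05.
n = math.ceil(math.log(0.05) / math.log(0.90))
print(n)                        # 29
```

For 28 flaws the pass probability under POD = 0.90 is about 0.052, just above the 5% threshold, which is why 28 successes are not sufficient.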
NASA Astrophysics Data System (ADS)
Kaitna, Roland; Palucis, Marisa C.; Yohannes, Bereket; Hill, Kimberly M.; Dietrich, William E.
2016-02-01
Debris flows are typically a saturated mixture of poorly sorted particles and interstitial fluid, whose density and flow properties depend strongly on the presence of suspended fine sediment. Recent research suggests that grain size distribution (GSD) influences excess pore pressures (i.e., pressure in excess of predicted hydrostatic pressure), which in turn plays a governing role in debris flow behaviors. We report a series of controlled laboratory experiments in a 4 m diameter vertically rotating drum where the coarse particle size distribution and the content of fine particles were varied independently. We measured basal pore fluid pressures, pore fluid pressure profiles (using novel sensor probes), velocity profiles, and longitudinal profiles of the flow height. Excess pore fluid pressure was significant for mixtures with high fines fraction. Such flows exhibited lower values for their bulk flow resistance (as measured by surface slope of the flow), had damped fluctuations of normalized fluid pressure and normal stress, and had velocity profiles where the shear was concentrated at the base of the flow. These effects were most pronounced in flows with a wide coarse GSD. Sustained excess fluid pressure occurred during flow and after cessation of motion. Various mechanisms may cause dilation and contraction of the flows, and we propose that the sustained excess fluid pressures during flow and once the flow has stopped may arise from hindered particle settling and yield strength of the fluid, resulting in transfer of particle weight to the fluid. Thus, debris flow behavior may be strongly influenced by sustained excess fluid pressures controlled by particle settling rates.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
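A sketch of the recovered-variance idea for one such function, the upper Bland-Altman limit of agreement mean + 1.96 SD. The formulas below follow the general MOVER construction (separate CIs for the mean and SD, then combined) and are an illustration of the approach, not necessarily the paper's exact expressions:

```python
import numpy as np
from scipy import stats

def loa_upper_ci(x, conf=0.95):
    """Approximate CI for the upper limit of agreement mean + 1.96*SD,
    built from separate confidence limits for the mean and the SD
    (the MOVER 'recovered variance' idea).  Sketch only."""
    x = np.asarray(x, dtype=float)
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    alpha = 1.0 - conf
    z = stats.norm.ppf(0.975)                       # the 1.96 multiplier
    # Separate two-sided CIs for the mean and for the SD.
    tcrit = stats.t.ppf(1 - alpha / 2, n - 1)
    l_m, u_m = m - tcrit * s / np.sqrt(n), m + tcrit * s / np.sqrt(n)
    l_s = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
    u_s = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
    theta = m + z * s                               # point estimate of the LoA
    lower = theta - np.sqrt((m - l_m) ** 2 + (z * s - z * l_s) ** 2)
    upper = theta + np.sqrt((u_m - m) ** 2 + (z * u_s - z * s) ** 2)
    return lower, theta, upper

rng = np.random.default_rng(3)
diffs = rng.normal(0.5, 2.0, size=60)               # synthetic paired differences
lo, est, hi = loa_upper_ci(diffs)
print(lo < est < hi)                                # True
```

The appeal of the approach is visible in the code: every quantity has a closed form, so no iteration or resampling is needed.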
Investigation of the milling capabilities of the F10 Fine Grind mill using Box-Behnken designs.
Tan, Bernice Mei Jin; Tay, Justin Yong Soon; Wong, Poh Mun; Chan, Lai Wah; Heng, Paul Wan Sia
2015-01-01
Size reduction or milling of the active is often the first processing step in the design of a dosage form. The ability of a mill to convert coarse crystals into the target size and size distribution efficiently is highly desirable as the quality of the final pharmaceutical product after processing is often still dependent on the dimensional attributes of its component constituents. The F10 Fine Grind mill is a mechanical impact mill designed to produce unimodal mid-size particles by utilizing a single-pass two-stage size reduction process for fine grinding of raw materials needed in secondary processing. Box-Behnken designs were used to investigate the effects of various mill variables (impeller, blower and feeder speeds and screen aperture size) on the milling of coarse crystals. Response variables included the particle size parameters (D10, D50 and D90), span and milling rate. Milled particles in the size range of 5-200 μm, with D50 ranging from 15 to 60 μm, were produced. The impeller and feeder speeds were the most critical factors influencing the particle size and milling rate, respectively. Size distributions of milled particles were better described by their goodness-of-fit to a log-normal distribution (i.e. unimodality) rather than span. Milled particles with symmetrical unimodal distributions were obtained when the screen aperture size was close to the median diameter of coarse particles employed. The capacity for high throughput milling of particles to a mid-size range, which is intermediate between conventional mechanical impact mills and air jet mills, was demonstrated in the F10 mill. Prediction models from the Box-Behnken designs will aid in providing a better guide to the milling process and milled product characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
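The run layout of a Box-Behnken design is simple to construct: every pair of factors takes a 2x2 factorial at +-1 while the remaining factors sit at their center level, plus replicate center points. A sketch for four factors such as the four mill variables (three center points are a common default, not necessarily the study's choice):

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design matrix for k three-level factors coded -1/0/+1:
    each pair of factors runs a 2^2 factorial at +-1 with all other
    factors held at 0, followed by n_center center-point replicates."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(n_center)]
    return np.array(runs)

design = box_behnken(4)        # e.g. impeller, blower, feeder speeds, screen size
print(design.shape)            # (27, 4): 6 factor pairs * 4 runs + 3 centers
```

Because no run sets more than two factors away from center, the design avoids extreme corner combinations, which is convenient when some factor extremes (e.g. maximum feeder with minimum impeller speed) are impractical to mill.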
Design method for multi-user workstations utilizing anthropometry and preference data.
Mahoney, Joseph M; Kurczewski, Nicolas A; Froede, Erick W
2015-01-01
Past efforts have been made to design single-user workstations to accommodate users' anthropometric and preference distributions. However, there is a lack of methods for designing workstations for group interaction. This paper introduces a method for sizing workstations to allow for a personal work area for each user and a shared space for adjacent users. We first create a virtual population with the same anthropometric and preference distributions as an intended demographic of college-aged students. Members of the virtual population are randomly paired to test if their extended reaches overlap but their normal reaches do not. This process is repeated in a Monte Carlo simulation to estimate the total percentage of groups in the population that will be accommodated for a workstation size. We apply our method to two test cases: in the first, we size polygonal workstations for two populations and, in the second, we dimension circular workstations for different group sizes. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
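The pairing-and-reach test lends itself to a compact Monte Carlo sketch. The reach distributions and shared-zone geometry below are illustrative assumptions, not the paper's measured anthropometry or workstation shapes:

```python
import numpy as np

rng = np.random.default_rng(11)

def accommodated_fraction(shared_width, n_pairs=100_000):
    """Monte Carlo estimate of the fraction of random user pairs for whom a
    shared zone of the given width (cm) is reachable by both users' extended
    reach but by neither user's normal reach.  Reach distributions here are
    illustrative normals, standing in for a measured virtual population."""
    normal_reach = rng.normal(38, 4, size=(n_pairs, 2))          # personal area
    extended_reach = normal_reach + rng.normal(25, 3, size=(n_pairs, 2))
    # Users face each other across the shared zone: extended reaches must
    # together span it, while normal reaches must not meet inside it.
    both_extend = extended_reach.sum(axis=1) >= shared_width
    no_overlap = normal_reach.sum(axis=1) <= shared_width
    return np.mean(both_extend & no_overlap)

frac = accommodated_fraction(shared_width=90.0)
print(round(frac, 3))          # roughly 0.99 under these assumed distributions
```

Sweeping `shared_width` and reading off the accommodated percentage is the dimensioning step the method describes.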
Haule, Kamila; Freda, Włodzimierz
2016-04-01
Oil pollution in seawater, primarily visible on the sea surface, becomes dispersed by wave mixing as well as chemical dispersant treatment, and forms spherical oil droplets. In this study, we examined the influence of oil droplet size of highly dispersed Petrobaltic crude on the underwater visible light flux and the inherent optical properties (IOPs) of seawater, including absorption, scattering, backscattering and attenuation coefficients. On the basis of measured data and Mie theory, we calculated the IOPs of dispersed Petrobaltic crude oil at constant concentration but different log-normal size distributions. We also performed a radiative transfer analysis, in order to evaluate the influence on the downwelling irradiance Ed, remote sensing reflectance Rrs and diffuse reflectance R, using in situ data from the Baltic Sea. We found that during dispersion a boundary size distribution occurs, characterized by a peak diameter d0 = 0.3 μm, causing a maximum Ed increase of 40% within 0.5-m depth, and a maximum Ed decrease of 100% at depths below 5 m. Moreover, we showed that the impact of size distribution on the "blue to green" ratios of Rrs and R varies from a 24% increase to a 27% decrease at the same crude oil concentration.
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/, times-divide, and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
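The quantities mean* and s* are computed from the data's logarithms. A minimal sketch, with the sample chosen to be perfectly log-symmetric so both come out exactly 10:

```python
import numpy as np

def multiplicative_summary(x):
    """Geometric mean (mean*) and multiplicative standard deviation s*:
    the interval mean* x/ s*, i.e. [mean*/s*, mean* . s*], plays the role
    that mean +- SD plays for normally distributed data."""
    logs = np.log(x)
    gmean = np.exp(logs.mean())
    s_star = np.exp(logs.std(ddof=1))
    return gmean, s_star

data = np.array([1.0, 10.0, 100.0])          # log-symmetric around 10
gmean, s_star = multiplicative_summary(data)
print(round(gmean, 2), round(s_star, 2))     # 10.0 10.0
```

Here mean* x/ s* spans [1, 100]: for log-normal data that interval covers roughly the central 68%, exactly as mean ± SD does on the log scale.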
Fowler, Mike S; Ruokolainen, Lasse
2013-01-01
The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. 
We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
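The AR(1) construction referred to above is standard: scaling the innovations by sqrt(1 − κ²) keeps the stationary variance at 1 while κ sets the colour. A sketch (note that a Gaussian AR(1) series is itself normally distributed at stationarity; the distribution-shape artefacts discussed above concern finite time-scales and the sinusoidal 1/f construction, which spectral mimicry corrects):

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1_series(kappa, n):
    """AR(1) environmental noise x_t = kappa*x_{t-1} + sqrt(1-kappa^2)*e_t,
    scaled to unit stationary variance.  kappa > 0 gives red (positively
    autocorrelated) noise, kappa < 0 blue, kappa = 0 white."""
    x = np.empty(n)
    x[0] = rng.normal()
    scale = np.sqrt(1.0 - kappa ** 2)
    for t in range(1, n):
        x[t] = kappa * x[t - 1] + scale * rng.normal()
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation (the realized colour of the series)."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

red = ar1_series(0.7, 50_000)
blue = ar1_series(-0.7, 50_000)
print(round(lag1_autocorr(red), 2), round(lag1_autocorr(blue), 2))  # ~0.7 ~-0.7
```

Feeding such series into a density-dependent population model, and controlling their marginal distribution before doing so, is exactly the comparison the study performs.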
Infurna, Frank J; Grimm, Kevin J
2017-12-15
Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings drastically differ when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous across trajectories. In instances when the data are non-normally distributed, assuming normally distributed data increases the number of latent classes extracted. Our findings showcase that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
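The overextraction effect has a simple one-dimensional analogue using Gaussian mixtures (a sketch, not the longitudinal growth-mixture models of the paper): fit normal mixtures to a single skewed population and pick the class count by BIC. In scikit-learn, covariance_type='tied' shares one variance across classes, a rough analogue of the homogeneity constraint:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# One skewed (log-normal) population: conceptually a single "trajectory class".
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000).reshape(-1, 1)

def best_k_by_bic(data, covariance_type, k_max=4):
    """Number of mixture components minimizing BIC."""
    bics = [GaussianMixture(n_components=k, covariance_type=covariance_type,
                            random_state=0).fit(data).bic(data)
            for k in range(1, k_max + 1)]
    return int(np.argmin(bics)) + 1

k_tied = best_k_by_bic(x, "tied")   # one shared variance across classes
k_full = best_k_by_bic(x, "full")   # class-specific variances
print(k_tied, k_full)               # typically > 1: spurious classes from skewness
```

Even though the data come from one population, BIC prefers multiple normal components, mirroring the paper's point that non-normality inflates the apparent number of latent classes.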
Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B
2013-03-01
Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than those of the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted in the void sizes, but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.
NASA Astrophysics Data System (ADS)
Shekar, Yamini
This research investigates the nano-scale pore structure of cementitious mortars undergoing delayed ettringite formation (DEF) using small angle x-ray scattering (SAXS). DEF has been known to cause expansion and cracking at later ages (around 4000 days) in concrete that has been heat cured at temperatures of 70°C or above. Though DEF normally occurs in heat cured concrete, mass cured concrete can also experience DEF. Large crystalline pressures result in smaller pore sizes. The objectives of this research are: (1) to investigate why some samples expand earlier than others, (2) to evaluate the effects of curing conditions on pore size distributions at high temperatures, and (3) to assess the evolution of the pore size distributions over time. The most important outcome of the research is that the pore sizes obtained from SAXS were used in the development of a 3-stage model. From the data obtained, the pore sizes increase in stage 1 due to initial ettringite formation filling up the smallest pores. Once the critical pore size threshold is reached (around 20 nm), stage 2 begins, in which cracking tends to decrease the pore sizes. Finally, in stage 3, the cracking continues, increasing the pore size.
Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir
2008-04-01
We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.
Detection of vapor nanobubbles by small angle neutron scattering (SANS)
NASA Astrophysics Data System (ADS)
Popov, Emilian; He, Lilin; Dominguez-Ontiveros, Elvis; Melnichenko, Yuri
2018-04-01
Experiments using boiling water on untreated (roughness 100-300 nm) metal surfaces using small-angle neutron scattering (SANS) show the appearance of structures that are 50-70 nm in size when boiling is present. The scattering signal disappears when the boiling ceases, and no change in the signal is detected at any surface temperature condition below saturation. This confirms that the signal is caused by vapor nanobubbles. Two boiling regimes are evaluated herein that differ by the degree of subcooling (3-10 °C). A polydisperse spherical model with a log-normal distribution fits the SANS data well. The size distribution indicates that a large number of nanobubbles exist on the surface during boiling, and some of them grow into large bubbles.
Sub- and supercritical jet disintegration
NASA Astrophysics Data System (ADS)
DeSouza, Shaun; Segal, Corin
2017-04-01
Shadowgraph visualization and Planar Laser Induced Fluorescence (PLIF) are applied to single orifice injection in the same facility and same fluid conditions to analyze sub- to supercritical jet disintegration and mixing. The comparison includes jet disintegration and lateral spreading angle. The results indicate that the shadowgraph data are in agreement with previous visualization studies but differ from the PLIF results that provided quantitative measurement of central jet plane density and density gradients. The study further evaluated the effect of thermodynamic conditions on droplet production and quantified droplet size and distribution. The results indicate an increase in the normalized drop diameter and a decrease in the droplet population with increasing chamber temperatures. Droplet size and distribution were found to be independent of chamber pressure.
Dislocation, crystallite size distribution and lattice strain of magnesium oxide nanoparticles
NASA Astrophysics Data System (ADS)
Sutapa, I. W.; Wahid Wahab, Abdul; Taba, P.; Nafie, N. L.
2018-03-01
Magnesium oxide nanoparticles were synthesized using the sol-gel method and their structural properties were analysed. The functional groups of the nanoparticles were identified by Fourier transform infrared (FT-IR) spectroscopy. Dislocation density, average crystallite size, strain, stress, crystal energy density, crystallite size distribution and crystal orientation were determined from X-ray diffraction profile analysis. Crystal morphology was analysed from SEM images. The crystallite size distribution was calculated on the assumption that the particle size follows a log-normal distribution. The preferred crystal orientations were determined from the crystal texture in the X-ray diffraction profiles. FT-IR results showed the Mg-O-Mg stretching vibration mode as a broad band in the range of 400.11-525 cm⁻¹. The average crystallite size of the resulting nanoparticles is 9.21 nm, with a dislocation density of 0.012 nm⁻². The strain, stress and crystal energy density are 1.5 × 10⁻⁴, 37.31 MPa and 0.72 MPa, respectively. The highest texture coefficient of the crystal is 0.98. This result is supported by SEM morphological analysis, which shows mostly regular cubic-shaped crystals. The method is suitable as a simple and cost-effective synthesis route for MgO nanoparticles.
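The crystallite size, dislocation density and strain quoted above come from XRD profile analysis. A minimal sketch of the standard Scherrer and Stokes-Wilson relations often used for this is below; the wavelength, peak width and reflection angle are assumed illustrative values (Cu Kα and a rough MgO (200) position), not the paper's measured profile data.

```python
import math

K = 0.9                 # Scherrer shape factor
lam = 0.15406           # Cu K-alpha wavelength, nm (assumed source)
fwhm_deg = 0.9          # peak FWHM in degrees 2-theta (illustrative)
two_theta_deg = 42.9    # approximate MgO (200) reflection

beta = math.radians(fwhm_deg)           # FWHM in radians
theta = math.radians(two_theta_deg / 2)

D = K * lam / (beta * math.cos(theta))  # Scherrer crystallite size, nm
delta = 1.0 / D**2                      # dislocation density, nm^-2
eps = beta / (4 * math.tan(theta))      # Stokes-Wilson lattice strain

print(f"D = {D:.2f} nm, delta = {delta:.4f} nm^-2, strain = {eps:.2e}")
```

With these assumed inputs the size and dislocation density come out close to the reported 9.21 nm and 0.012 nm⁻², showing the scale of the relations rather than reproducing the paper's fit.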
Optimum size of nanorods for heating application
NASA Astrophysics Data System (ADS)
Seshadri, G.; Thaokar, Rochish; Mehra, Anurag
2014-08-01
Magnetic nanoparticles (MNPs) have become increasingly important in heating applications such as hyperthermia treatment of cancer due to their ability to release heat when a remote external alternating magnetic field is applied. It has been shown that the heating capability of such particles varies significantly with the size of the particles used. In this paper, we theoretically evaluate the heating capability of rod-shaped MNPs and identify conditions under which these particles display the highest efficiency. For optimally sized monodisperse particles, the power generated by rod-shaped particles is found to be equal to that generated by spherical particles. However, for particles which are not monodisperse, rod-shaped particles are found to be more effective in heating as a result of the greater spread in the power density distribution curve. Additionally, for rod-shaped particles, a dispersion in the radius of the particle contributes more to the reduction in loss power than a dispersion in the length. We further identify the optimum size, i.e., the radius and length of nanorods, given a bi-variate log-normal distribution of particle size in two dimensions.
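The bi-variate log-normal size distribution mentioned above can be sketched by drawing jointly normal log-radius and log-length pairs. All medians and covariances here are assumed for illustration, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Bivariate log-normal nanorod sizes: (ln r, ln L) jointly normal.
# Medians and covariance are illustrative placeholders.
mu = np.log([5.0, 40.0])          # median radius, length in nm
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])    # covariance of the log-sizes

log_sizes = rng.multivariate_normal(mu, cov, size=10_000)
radius, length = np.exp(log_sizes).T

print(np.median(radius), np.median(length))  # medians recover exp(mu)
```

The off-diagonal covariance term encodes correlation between radius and length polydispersity, the quantity the paper's loss-power comparison depends on.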
Zhou, Wen; Wang, Guifen; Li, Cai; Xu, Zhantang; Cao, Wenxi; Shen, Fang
2017-10-20
Phytoplankton cell size is an important property that affects diverse ecological and biogeochemical processes, and analysis of the absorption and scattering spectra of phytoplankton can provide important information about phytoplankton size. In this study, an inversion method for extracting quantitative phytoplankton cell size data from these spectra was developed. This inversion method requires two inputs: the chlorophyll a specific absorption and scattering spectra of phytoplankton. The average equivalent-volume spherical diameter (ESD_v) was calculated as the single-size approximation for the log-normal particle size distribution (PSD) of the algal suspension. The performance of this method for retrieving cell size was assessed using datasets from cultures of 12 phytoplankton species. The estimations of a(λ) and b(λ) for the phytoplankton population using ESD_v had mean error values of 5.8%-6.9% and 7.0%-10.6%, respectively, compared to the a(λ) and b(λ) for the phytoplankton populations using the log-normal PSD. The estimated values of C_i ESD_v were in good agreement with the measurements, with r² = 0.88 and normalized root mean square error (NRMSE) = 25.3%, and relatively good performance was also found for the retrieval of ESD_v, with r² = 0.78 and NRMSE = 23.9%.
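For a log-normal PSD the equivalent-volume spherical diameter has a closed form, ESD_v = ⟨D³⟩^(1/3) = exp(μ + 1.5σ²) when ln D ~ N(μ, σ²). A sketch with assumed (not retrieved) μ and σ, checked against a Monte Carlo draw:

```python
import numpy as np

# Illustrative log-normal PSD: median diameter 5 um, log-std 0.3.
mu, sigma = np.log(5.0), 0.3

# Closed-form equivalent-volume spherical diameter
esd_v = np.exp(mu + 1.5 * sigma**2)

# Monte Carlo check of <D^3>^(1/3)
rng = np.random.default_rng(6)
d = rng.lognormal(mu, sigma, 1_000_000)
print(esd_v, np.mean(d**3) ** (1 / 3))
```

The two numbers agree, illustrating why a single ESD_v can stand in for the full log-normal PSD in the forward optics calculation.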
Monte Carlo modeling of the scatter radiation doses in IR
NASA Astrophysics Data System (ADS)
Mah, Eugene; He, Wenjun; Huda, Walter; Yao, Hai; Selby, Bayne
2011-03-01
Purpose: To use Monte Carlo techniques to compute the scatter radiation dose distribution patterns around patients undergoing Interventional Radiological (IR) examinations. Method: MCNP was used to model the scatter radiation air kerma (AK) per unit kerma area product (KAP) distribution around a 24 cm diameter water cylinder irradiated with monoenergetic x-rays. Normalized scatter fractions (SF) were generated, defined as the air kerma at a point of interest normalized by the kerma area product incident on the phantom (i.e., AK/KAP). Three regions surrounding the water cylinder were investigated, consisting of the area below the water cylinder (i.e., backscatter), above the water cylinder (i.e., forward scatter) and to the sides of the water cylinder (i.e., side scatter). Results: Immediately above and below the water cylinder and in the side scatter region, values of normalized SF decreased with the inverse square of the distance. For z-planes further away, the decrease was exponential. Values of normalized SF around the phantom were generally less than 10⁻⁴. Changes in normalized SF with x-ray energy were less than 20% and generally decreased with increasing x-ray energy. At a given distance from the region where the x-ray beam enters the phantom, the normalized SF was higher in the backscatter regions and smaller in the forward scatter regions. The ratio of forward to backscatter normalized SF was lowest at 60 keV and highest at 120 keV. Conclusion: Computed SF values quantify the normalized fractional radiation intensities at the operator location relative to the radiation intensities incident on the patient, where the normalization refers to the beam area that is incident on the patient. SF values can be used to estimate the radiation dose received by personnel within the procedure room, which depends on the imaging geometry, patient size and location within the room.
Monte Carlo techniques have the potential for simulating normalized SF values for any arrangement of imaging geometry, patient size and personnel location and are therefore an important tool for minimizing operator doses in IR.
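The near-field inverse-square behaviour of the normalized scatter fraction reported above can be expressed as a one-line model. SF0 and r0 below are assumed illustrative values consistent only with the abstract's statement that SF values are generally below 10⁻⁴; they are not fitted results from the study.

```python
# Inverse-square scaling of the normalized scatter fraction (SF) near the
# phantom. SF0 at reference distance r0 is an illustrative placeholder.
def sf_inverse_square(r_cm, sf0=1e-4, r0_cm=50.0):
    """Normalized SF at distance r assuming SF(r) = SF0 * (r0 / r)^2."""
    return sf0 * (r0_cm / r_cm) ** 2

for r in (50, 100, 200):
    print(r, sf_inverse_square(r))
```

Doubling the operator's distance quarters the SF, the usual first rule of thumb for staff dose reduction in IR.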
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
Sub-wavelength grating structure on the planar waveguide (Conference Presentation)
NASA Astrophysics Data System (ADS)
Qing-Song, Zhu; Sheng-Hui, Chen
2016-10-01
With the progress in grating technology in recent years, the grating period can be reduced to shrink the size of a light coupler on a waveguide, with working wavelengths ranging from the near-infrared to the visible. In this study, we used an e-gun evaporation system with ion-beam-assisted deposition to fabricate the bottom cladding (SiO2), guiding layer (Ta2O5) and distributed Bragg reflector (DBR) of the waveguide on a silicon substrate. Electron-beam lithography was used to pattern sub-wavelength gratings and a reflector grating on the planar waveguide, which acts as a coupling device on the guiding layer. The best fabrication parameters for depositing the films were determined; the exposure and development times also influenced the grating quality. The purpose is to reduce the device size and enhance the coupling efficiency while maintaining normal incidence of the light. We designed and developed the device using the finite-difference time-domain (FDTD) method, examining the grating period, depth, fill factor, film thickness, number of DBR layers and reflector grating period to enhance the coupling efficiency while maintaining normal incidence. According to the simulation results, at a wavelength of 1300 nm, a coupling grating period of 720 nm, a 460 nm Ta2O5 film, a 360 nm reflector grating period and 2 DBR layers gave the optimum coupling efficiency and normal incidence angle. In the measurements, we successfully determined the TE-wave coupling efficiency of the photoresist grating coupling device.
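The reported geometry is consistent with the first-order phase-matching condition for normal-incidence grating coupling, n_eff = λ/Λ. A quick check using the abstract's wavelength and period; the implied effective index is our inference, not a value stated in the paper.

```python
# First-order grating coupler condition at normal incidence:
# n_eff = wavelength / grating period. Values from the abstract.
wavelength_nm = 1300.0
period_nm = 720.0

n_eff = wavelength_nm / period_nm
print(f"implied guided-mode effective index: {n_eff:.3f}")
```

An effective index near 1.8 is plausible for a Ta2O5 guiding layer on SiO2 cladding, which is why a 720 nm period couples 1300 nm light at normal incidence.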
Bergamaschi, B.A.; Tsamakis, E.; Keil, R.G.; Eglinton, T.I.; Montlucon, D.B.; Hedges, J.I.
1997-01-01
A C-rich sediment sample from the Peru Margin was sorted into nine hydrodynamically-determined grain size fractions to explore the effect of grain size distribution and sediment surface area on organic matter content and composition. The neutral monomeric carbohydrate composition, lignin oxidation product yields, total organic carbon, and total nitrogen contents were determined independently for each size fraction, in addition to sediment surface area and abundance of biogenic opal. The percent organic carbon and percent total nitrogen were strongly related to surface area in these sediments. In turn, the distribution of surface area closely followed mass distribution among the textural size classes, suggesting hydrodynamic controls on grain size also control organic carbon content. Nevertheless, organic compositional distinctions were observed between textural size classes. Total neutral carbohydrate yields in the Peru Margin sediments were found to closely parallel trends in total organic carbon, increasing in abundance among grain size fractions in proportion to sediment surface area. Coincident with the increases in absolute abundance, rhamnose and mannose increased as a fraction of the total carbohydrate yield in concert with surface area, indicating these monomers were preferentially represented in carbohydrates associated with surfaces. Lignin oxidation product yields varied with surface area when normalized to organic carbon, suggesting that the terrestrially-derived component may be diluted by sorption of marine derived material. Lignin-based parameters suggest a separate source for terrestrially derived material associated with sand-size material as opposed to that associated with silts and clays. Copyright ?? 1997 Elsevier Science Ltd.
NASA Astrophysics Data System (ADS)
Rufeil-Fiori, Elena; Banchio, Adolfo J.
Lipid monolayers with phase coexistence are a frequently used model for lipid membranes. In these systems, domains of the liquid-condensed phase always exhibit size polydispersity. However, very few theoretical works consider size distribution effects on monolayer properties. Because of the difference in surface densities, domains have an excess dipolar density with respect to the surrounding liquid-expanded phase, originating a dipolar inter-domain interaction. This interaction depends on the domain area, and hence the presence of a domain size distribution is associated with interaction polydispersity. Inter-domain interactions are fundamental to understanding the structure and dynamics of the monolayer. For this reason, it is expected that polydispersity significantly alters monolayer properties. By means of Brownian dynamics simulations, we study the radial distribution function (RDF), the average mean square displacement and the average time-dependent self-diffusion coefficient, D(t), of lipid monolayers with normally distributed domain sizes. It was found that polydispersity strongly affects the value of the interaction strength obtained, which is greatly underestimated if polydispersity is not considered. However, within a certain range of parameters, the RDF obtained from a polydisperse model can be well approximated by that of a monodisperse model by suitably fitting the interaction strength, even for 40% polydispersity. For small interaction strengths or small polydispersities, the polydisperse systems obtained from fitting the experimental RDF have an average mean square displacement and D(t) in good agreement with those of the monodisperse system.
NASA Astrophysics Data System (ADS)
Barbarino, M.; Warrens, M.; Bonasera, A.; Lattuada, D.; Bang, W.; Quevedo, H. J.; Consoli, F.; de Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.
2016-08-01
In this work, we explore the possibility that the motion of the deuterium ions emitted from Coulomb cluster explosions is sufficiently disordered to resemble thermalization. We analyze the process of nuclear fusion reactions driven by laser-cluster interactions in experiments conducted at the Texas Petawatt laser facility using a mixture of D2+3He and CD4+3He cluster targets. When clusters explode by Coulomb repulsion, the emission of the energetic ions is “nearly” isotropic. In the framework of cluster Coulomb explosions, we analyze the energy distributions of the ions using a Maxwell-Boltzmann (MB) distribution, a shifted MB distribution (sMB), and the energy distribution derived from a log-normal (LN) size distribution of clusters. We show that the first two distributions reproduce the experimentally measured ion energy distributions and the number of fusions from d-d and d-3He reactions well. The LN distribution represents the ion kinetic energy distribution well up to the high momenta where noise becomes dominant, but overestimates both the neutron and proton yields. If the parameters of the LN distribution are chosen to reproduce the fusion yields correctly, the experimentally measured high-energy ion spectrum is not well represented. We conclude that the ion kinetic energy distribution is highly disordered and practically indistinguishable from a thermalized one.
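A sketch of the Maxwell-Boltzmann kinetic-energy distribution used in such fits, f(E) = 2√(E/π) T^(-3/2) exp(-E/T), numerically verifying that it normalizes to 1 with mean energy 3T/2. The temperature is an assumed placeholder, not a value fitted to the experiment.

```python
import numpy as np

T = 10.0  # assumed ion "temperature" in keV (placeholder, not a fit)

def mb_pdf(E, T):
    """Maxwell-Boltzmann kinetic-energy PDF: 2 sqrt(E/pi) T^-1.5 exp(-E/T)."""
    return 2.0 * np.sqrt(E / np.pi) * T**-1.5 * np.exp(-E / T)

E = np.linspace(0.0, 200.0, 200_001)
f = mb_pdf(E, T)

# Trapezoidal integration: normalization and mean energy
dE = np.diff(E)
norm = np.sum(0.5 * (f[1:] + f[:-1]) * dE)                      # ~1
mean_E = np.sum(0.5 * (E[1:] * f[1:] + E[:-1] * f[:-1]) * dE)   # ~1.5 T
print(norm, mean_E)
```

Fitting this single-parameter form to a measured spectrum is what lets a "temperature" be assigned to ions whose motion is merely disordered rather than truly thermal.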
NASA Astrophysics Data System (ADS)
Santana, Steven Michael; Antonyak, Marc A.; Cerione, Richard A.; Kirby, Brian J.
2014-12-01
Extracellular shed vesicles (ESVs) facilitate a unique mode of cell-cell communication wherein vesicle uptake can induce a change in the recipient cell's state. Despite the intensity of ESV research, currently reported data represent the bulk characterization of concentrated vesicle samples with little attention paid to heterogeneity. ESV populations likely represent diversity in mechanisms of formation, cargo and size. To better understand ESV subpopulations and the signaling cascades implicated in their formation, we characterize ESV size distributions to identify subpopulations in normal and cancerous epithelial cells. We have discovered that cancer cells exhibit bimodal ESV distributions, one small-diameter and another large-diameter population, suggesting that two mechanisms may govern ESV formation, an exosome population and a cancer-specific microvesicle population. Altered glutamine metabolism in cancer is thought to fuel cancer growth but may also support metastatic niche formation through microvesicle production. We describe the role of a glutaminase inhibitor, compound 968, in ESV production. We have discovered that inhibiting glutamine metabolism significantly impairs large-diameter microvesicle production in cancer cells.
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
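A minimal sketch of the univariate chi-square normality test described above: observed bin counts of a standardized sample are compared with expected normal-bin probabilities. The bin edges and sample size are illustrative; with k bins and two estimated parameters, the statistic is referred to a chi-square distribution with k - 3 degrees of freedom.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Sample to be tested (here actually normal) and its standardization
x = rng.standard_normal(500)
z = (x - x.mean()) / x.std()

# Equal-width bins with open tails; expected counts from the normal CDF
edges = np.array([-np.inf, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, np.inf])
cdf = np.array([0.5 * (1 + math.erf(e / math.sqrt(2))) if np.isfinite(e)
                else (0.0 if e < 0 else 1.0) for e in edges])
expected = len(x) * np.diff(cdf)

observed, _ = np.histogram(z, bins=edges)
chi2 = np.sum((observed - expected) ** 2 / expected)
print(chi2)  # compare with the chi-square critical value for k - 3 df
```

For these 8 bins the statistic is compared against a chi-square with 5 degrees of freedom (critical value 11.07 at the 5% level); truly normal data should usually fall well below it.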
Higher incidence of small Y chromosome in humans with trisomy 21 (Down syndrome).
Verma, R S; Huq, A; Madahar, C; Qazi, Q; Dosik, H
1982-09-01
The length of the Y chromosome was measured in 42 black patients with trisomy 21 (47,XY,+21) and a similar number of normal individuals of American black ancestry. The length of the Y was expressed as a function of the Y/F ratio and arbitrarily classified into five groups using subjectively defined criteria as follows: very small, small, average, large, and very large. Thirty-eight percent of the trisomy 21 patients had small or very small Ys compared to 2.38% of the controls (P < 0.01). In both populations the size of the Y was not normally distributed. In the normal group it was skewed to the left, whereas in the Down group the distribution was flat (platykurtic). A significantly higher incidence of Y length heteromorphisms was noted in the Down as compared to the normal black population. In the light of our current understanding that about one-third of all trisomy 21 cases are due to paternal nondisjunction, it may be tempting to speculate that males with a small Y are at an increased risk for nondisjunction of chromosome 21.
NASA Astrophysics Data System (ADS)
Fan, Daidu; Tu, Junbiao; Cai, Guofu; Shang, Shuai
2015-06-01
Grain-size analysis is a basic routine in sedimentology and related fields, but diverse methods of sample collection, processing and statistical analysis often make direct comparisons and interpretations difficult or even impossible. In this paper, 586 published grain-size datasets from the Qiantang Estuary (East China Sea) sampled and analyzed by the same procedures were merged and their textural parameters calculated by a percentile and two moment methods. The aim was to explore which of the statistical procedures performed best in the discrimination of three distinct sedimentary units on the tidal flats of the middle Qiantang Estuary. A Gaussian curve-fitting method served to simulate mixtures of two normal populations having different modal sizes, sorting values and size distributions, enabling a better understanding of the impact of finer tail components on textural parameters, as well as the proposal of a unifying descriptive nomenclature. The results show that percentile and moment procedures yield almost identical results for mean grain size, and that sorting values are also highly correlated. However, more complex relationships exist between percentile and moment skewness (kurtosis), changing from positive to negative correlations when the proportions of the finer populations decrease below 35% (10%). This change results from the overweighting of tail components in moment statistics, which stands in sharp contrast to the underweighting or complete amputation of small tail components by the percentile procedure. Intercomparisons of bivariate plots suggest an advantage of the Friedman & Johnson moment procedure over the McManus moment method in terms of the description of grain-size distributions, and over the percentile method by virtue of a greater sensitivity to small variations in tail components. 
The textural parameter scalings of Folk & Ward were translated into their Friedman & Johnson moment counterparts by application of mathematical functions derived by regression analysis of measured and modeled grain-size data, or by determining the abscissa values of intersections between auxiliary lines running parallel to the x-axis and vertical lines corresponding to the descriptive percentile limits along the ordinate of representative bivariate plots. Twofold limits were extrapolated for the moment statistics in relation to single descriptive terms in the cases of skewness and kurtosis by considering both positive and negative correlations between percentile and moment statistics. The extrapolated descriptive scalings were further validated by examining entire size-frequency distributions simulated by mixing two normal populations of designated modal size and sorting values, but varying in mixing ratios. These were found to match well in most of the proposed scalings, although platykurtic and very platykurtic categories were questionable when the proportion of the finer population was below 5%. Irrespective of the statistical procedure, descriptive nomenclatures should therefore be cautiously used when tail components contribute less than 5% to grain-size distributions.
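The percentile-versus-moment comparison above can be sketched by mixing two normal grain-size populations in phi units and computing both sets of statistics. The modal sizes, sortings and 35% fine fraction below are assumptions for illustration, not the Qiantang datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-component normal mixture in phi units (illustrative parameters)
coarse = rng.normal(2.0, 0.4, 6_500)   # 65% coarse population
fine = rng.normal(5.0, 0.6, 3_500)     # 35% fine tail
phi = np.concatenate([coarse, fine])

# Moment statistics: mean, sorting, skewness
m1 = phi.mean()
m2 = phi.std()
m3 = np.mean(((phi - m1) / m2) ** 3)

# Folk & Ward percentile (inclusive graphic) statistics
p5, p16, p50, p84, p95 = np.percentile(phi, [5, 16, 50, 84, 95])
fw_mean = (p16 + p50 + p84) / 3
fw_sort = (p84 - p16) / 4 + (p95 - p5) / 6.6
fw_skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
           + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))

print(m1, fw_mean)   # means nearly identical, as the abstract notes
print(m2, fw_sort)   # sortings highly correlated
print(m3, fw_skew)   # skewness can diverge between procedures
```

Varying the fine-population fraction in this sketch reproduces the abstract's observation that the two skewness measures track each other only over part of the mixing range, because moment statistics weight the tail far more heavily than percentiles do.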
Regulation of Synaptic Structure by the Ubiquitin C-terminal Hydrolase UCH-L1
Cartier, Anna E.; Djakovic, Stevan N.; Salehi, Afshin; Wilson, Scott M.; Masliah, Eliezer; Patrick, Gentry N.
2009-01-01
UCH-L1 is a de-ubiquitinating enzyme that is selectively and abundantly expressed in the brain, and its activity is required for normal synaptic function. Here, we show that UCH-L1 functions in maintaining normal synaptic structure in hippocampal neurons. We have found that UCH-L1 activity is rapidly up-regulated by NMDA receptor activation, which leads to an increase in the levels of free monomeric ubiquitin. Conversely, pharmacological inhibition of UCH-L1 significantly reduces monomeric ubiquitin levels and causes dramatic alterations in synaptic protein distribution and spine morphology. Inhibition of UCH-L1 activity increases spine size while decreasing spine density. Furthermore, there is a concomitant increase in the size of pre- and postsynaptic protein clusters. Interestingly, however, ectopic expression of ubiquitin restores normal synaptic structure in UCH-L1 inhibited neurons. These findings point to a significant role of UCH-L1 in synaptic remodeling, most likely by modulating free monomeric ubiquitin levels in an activity-dependent manner. PMID:19535597
Regulation of synaptic structure by ubiquitin C-terminal hydrolase L1.
Cartier, Anna E; Djakovic, Stevan N; Salehi, Afshin; Wilson, Scott M; Masliah, Eliezer; Patrick, Gentry N
2009-06-17
Ubiquitin C-terminal hydrolase L1 (UCH-L1) is a deubiquitinating enzyme that is selectively and abundantly expressed in the brain, and its activity is required for normal synaptic function. Here, we show that UCH-L1 functions in maintaining normal synaptic structure in hippocampal neurons. We found that UCH-L1 activity is rapidly upregulated by NMDA receptor activation, which leads to an increase in the levels of free monomeric ubiquitin. Conversely, pharmacological inhibition of UCH-L1 significantly reduces monomeric ubiquitin levels and causes dramatic alterations in synaptic protein distribution and spine morphology. Inhibition of UCH-L1 activity increases spine size while decreasing spine density. Furthermore, there is a concomitant increase in the size of presynaptic and postsynaptic protein clusters. Interestingly, however, ectopic expression of ubiquitin restores normal synaptic structure in UCH-L1-inhibited neurons. These findings point to a significant role of UCH-L1 in synaptic remodeling, most likely by modulating free monomeric ubiquitin levels in an activity-dependent manner.
NASA Astrophysics Data System (ADS)
Ren, Jianlin; Cao, Xiaodong; Liu, Junjie
2018-04-01
Passengers usually spend hours in airport terminal buildings waiting for their departure. During the long waiting period, ambient fine particles (PM2.5) and ultrafine particles (UFP) generated by airliners may penetrate into terminal buildings through open doors and the HVAC system. However, limited data are available on passenger exposure to particulate pollutants in terminal buildings. We conducted on-site measurements of PM2.5 and UFP concentrations and the particle size distribution in the terminal building of Tianjin Airport, China during three different seasons. The results showed that the PM2.5 concentrations in the terminal building were considerably higher than the guideline values of the Chinese standard and the WHO in all of the tested seasons, and the conditions were significantly affected by the outdoor air (Spearman test, p < 0.01). The indoor/outdoor PM2.5 ratios (I/O) ranged from 0.67 to 0.84 in the arrival hall and 0.79 to 0.96 in the departure hall. The particle number concentration in the terminal building presented a bi-modal size distribution, with one mode at 30 nm and another at 100 nm. These results differ markedly from the size distributions measured in normal urban environments. The total UFP exposure during the whole waiting period (including in the terminal building and the airliner cabin) of a passenger is approximately equivalent to 11 h of exposure in normal urban environments. This study is expected to contribute to the improvement of indoor air quality and the health of passengers in airport terminal buildings.
Liu, Jun-Li; Coschigano, Karen T; Robertson, Katie; Lipsett, Mark; Guo, Yubin; Kopchick, John J; Kumar, Ujendra; Liu, Ye Lauren
2004-09-01
Growth hormone, acting through its receptor (GHR), plays an important role in carbohydrate metabolism and in promoting postnatal growth. GHR gene-deficient (GHR(-/-)) mice exhibit severe growth retardation and proportionate dwarfism. To assess the physiological relevance of growth hormone actions, GHR(-/-) mice were used to investigate their phenotype in glucose metabolism and pancreatic islet function. Adult GHR(-/-) mice exhibited significant reductions in the levels of blood glucose and insulin, as well as insulin mRNA accumulation. Immunohistochemical analysis of pancreatic sections revealed normal distribution of the islets despite a significantly smaller size. The average size of the islets found in GHR(-/-) mice was only one-third of that in wild-type littermates. Total beta-cell mass was reduced 4.5-fold in GHR(-/-) mice, significantly more than their body size reduction. This reduction in pancreatic islet mass appears to be related to decreases in proliferation and cell growth. GHR(-/-) mice were different from the human Laron syndrome in serum insulin level, insulin responsiveness, and obesity. We conclude that growth hormone signaling is essential for maintaining pancreatic islet size, stimulating islet hormone production, and maintaining normal insulin sensitivity and glucose homeostasis.
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
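A sketch of drawing counts from the Poisson log-normal species abundance distribution that the study modifies: each species gets a log-normal expected abundance, and the observed count is a Poisson draw around it. The community size and log-normal parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson log-normal community (illustrative parameters, not estimates
# from the zooplankton data)
S = 200                                # species in the community
mu, sigma = 1.0, 1.5
lam = rng.lognormal(mu, sigma, S)      # expected abundance per species
counts = rng.poisson(lam)              # sampled individuals per species

observed = np.count_nonzero(counts)    # species detected by counting
print(f"{observed} of {S} species detected in the count sample")
```

The species with zero counts are exactly the left tail that incidence data can recover, which is the motivation for the paper's mixture-sampling likelihood.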
How Significant Is a Boxplot Outlier?
ERIC Educational Resources Information Center
Dawson, Robert
2011-01-01
It is common to consider Tukey's schematic ("full") boxplot as an informal test for the existence of outliers. While the procedure is useful, it should be used with caution, as at least 30% of samples from a normally-distributed population of any size will be flagged as containing an outlier, while for small samples (N less than 10) even extreme…
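The flagging rate described here is easy to reproduce with a quick Monte Carlo sketch (illustrative only, not the article's code; the exact rate depends on the quartile convention, and the median-of-halves convention is assumed below):

```python
import random
import statistics

def has_boxplot_outlier(sample):
    """Check whether Tukey's 1.5*IQR rule flags any point in the sample."""
    xs = sorted(sample)
    n = len(xs)
    # Simple quartile estimates: medians of the lower and upper halves.
    q1 = statistics.median(xs[: n // 2])
    q3 = statistics.median(xs[(n + 1) // 2:])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return any(x < lo or x > hi for x in xs)

def outlier_rate(n, trials=2000, seed=1):
    """Fraction of N(0,1) samples of size n flagged as containing an outlier."""
    rng = random.Random(seed)
    flagged = sum(
        has_boxplot_outlier([rng.gauss(0.0, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return flagged / trials
```

Even though every draw comes from a perfectly normal population, a substantial fraction of samples is flagged, and the fraction grows with sample size, which is the caution the article raises.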
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
ERIC Educational Resources Information Center
Sengul Avsar, Asiye; Tavsancil, Ezel
2017-01-01
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three sample sizes (100, 250 and 500)--were generated by conducting 20…
Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem
ERIC Educational Resources Information Center
Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari
2007-01-01
Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
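The behavior such a tool illustrates can be sketched in a few lines (a minimal stand-in, not the simulation tool described in the abstract; the exponential parent is an arbitrary skewed choice):

```python
import random
import statistics

def sample_means(parent_draw, n, reps=3000, seed=42):
    """Draw `reps` samples of size n from the parent and return their means."""
    rng = random.Random(seed)
    return [statistics.fmean(parent_draw(rng) for _ in range(n)) for _ in range(reps)]

# A strongly skewed parent distribution: Exponential(1), mean 1, sd 1.
parent = lambda rng: rng.expovariate(1.0)

for n in (2, 10, 50):
    means = sample_means(parent, n)
    sd = statistics.stdev(means)
    print(f"n={n:3d}  sd of sample means = {sd:.3f}  (theory 1/sqrt(n) = {n ** -0.5:.3f})")
```

The spread of the sample means shrinks like 1/sqrt(n), and a histogram of the means looks progressively more bell-shaped as n grows, even though the parent is far from normal.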
ERIC Educational Resources Information Center
Edwards, Lynne K.; Meyers, Sarah A.
Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
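As a hedged illustration of the first (bootstrap) approach, a percentile bootstrap over hypothetical patient-level data might look like the following. The data and function are invented for illustration; the study's actual patient-level models are far more involved:

```python
import random
import statistics

def bootstrap_mean_ci(data, reps=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval for the mean: one simple way to
    propagate parameter uncertainty estimated from individual patient data.
    Illustrative sketch only, not the method of the cited study."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        statistics.fmean(rng.choice(data) for _ in range(n)) for _ in range(reps)
    )
    lo_i = int(alpha / 2 * reps)
    hi_i = int((1 - alpha / 2) * reps) - 1
    return boots[lo_i], boots[hi_i]

# Hypothetical time-to-event data (months); the small n makes the CI wide,
# mirroring the sensitivity to small samples discussed in the abstract.
times = [2.1, 3.5, 4.0, 5.2, 6.8, 7.1, 9.4, 12.0, 15.3, 21.7]
lo, hi = bootstrap_mean_ci(times)
```

Resampling whole patient records (rather than parameters independently) is what preserves the correlation structure the abstract emphasizes.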
Computer-assisted bladder cancer grading: α-shapes for color space decomposition
NASA Astrophysics Data System (ADS)
Niazi, M. K. K.; Parwani, Anil V.; Gurcan, Metin N.
2016-03-01
According to the American Cancer Society, around 74,000 new cases of bladder cancer are expected during 2015 in the US. To facilitate bladder cancer diagnosis, we present an automatic method to differentiate carcinoma in situ (CIS) from normal/reactive cases that works on hematoxylin and eosin (H&E) stained images of the bladder. The method automatically determines the color deconvolution matrix by utilizing the α-shapes of the color distribution in the RGB color space. Then, variations in the boundary of the transitional epithelium are quantified, and sizes of nuclei in the transitional epithelium are measured. We also approximate the "nuclear to cytoplasmic ratio" by computing the ratio of the average shortest distance between transitional epithelium and nuclei to average nuclei size. Nuclei homogeneity is measured by computing the kurtosis of the nuclei size histogram. The results show that 30 out of 34 (88.2%) images were correctly classified by the proposed method, indicating that these novel features are viable markers to differentiate CIS from normal/reactive bladder.
Beyond the power law: Uncovering stylized facts in interbank networks
NASA Astrophysics Data System (ADS)
Vandermarliere, Benjamin; Karas, Alexei; Ryckebusch, Jan; Schoors, Koen
2015-06-01
We use daily data on bilateral interbank exposures and monthly bank balance sheets to study network characteristics of the Russian interbank market over August 1998-October 2004. Specifically, we examine the distributions of (un)directed (un)weighted degree, nodal attributes (bank assets, capital and capital-to-assets ratio) and edge weights (loan size and counterparty exposure). We search for the theoretical distribution that fits the data best and report the "best" fit parameters. We observe that all studied distributions are heavy tailed. The fat tail typically contains 20% of the data and can be mostly described well by a truncated power law. Also the power law, stretched exponential and log-normal provide reasonably good fits to the tails of the data. In most cases, however, separating the bulk and tail parts of the data is hard, so we proceed to study the full range of the events. We find that the stretched exponential and the log-normal distributions fit the full range of the data best. These conclusions are robust to (1) whether we aggregate the data over a week, month, quarter or year; (2) whether we look at the "growth" versus "maturity" phases of interbank market development; and (3) with minor exceptions, whether we look at the "normal" versus "crisis" operation periods. In line with prior research, we find that the network topology changes greatly as the interbank market moves from a "normal" to a "crisis" operation period.
Multiregion apodized photon sieve with enhanced efficiency and enlarged pinhole sizes.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-08-20
A novel multiregion-structure apodized photon sieve is proposed. The number of regions, the apodization window values, and the pinhole sizes of each pinhole ring are all optimized to enhance the energy efficiency and enlarge the pinhole sizes. The design theory and principles are presented and discussed in detail. Two numerically designed apodized photon sieves with the same diameter are given as examples. Comparisons show that the multiregion apodized photon sieve has a 25.5% higher energy efficiency and that the minimum pinhole size is enlarged by 27.5%. Meanwhile, the two apodized photon sieves have the same form of normalized intensity distribution at the focal plane. This method could improve the flexibility of the design and fabrication of the apodized photon sieve.
NASA Astrophysics Data System (ADS)
Xiong, S. Y.; Yang, J. G.; Zhuang, J.
2011-10-01
In this work, we use nonlinear spectral imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) to analyze the morphology of collagen and elastin and their biochemical variations in basal cell carcinoma (BCC), squamous cell carcinoma (SCC) and normal skin tissue. We found apparent differences among BCC, SCC and normal skin in the thickness of the keratin and epithelial layers, the size of elastic fibers, and the distribution and spectral characteristics of collagen. These differences can potentially be used to distinguish BCC and SCC from normal skin, to discriminate between BCC and SCC, and to evaluate treatment responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M-D.
2000-08-23
Internal combustion engines are a major source of airborne particulate matter (PM). The size of the engine PM is in the sub-micrometer range. The number of engine particles per unit volume is high, normally in the range of 10^12 to 10^14. To measure the size distribution of the engine particles, dilution of an aerosol sample is required. A diluter utilizing a venturi ejector mixing technique is commercially available and was tested. The purpose of this investigation was to determine if turbulence created by the ejector in the mini-dilutor changes the size of particles passing through it. The results of the NaCl aerosol experiments show no discernible difference in the geometric mean diameter and geometric standard deviation of particles passing through the ejector. Similar results were found for the DOP particles. The ratio of the total number concentrations before and after the ejector indicates that a dilution ratio of approximately 20 applies equally for DOP and NaCl particles. This indicates the dilution capability of the ejector is not affected by the particle composition. The statistical analysis results of the first and second moments of a distribution indicate that the ejector may not change the major parameters (e.g., the geometric mean diameter and geometric standard deviation) characterizing the size distributions of NaCl and DOP particles. However, when the skewness was examined, it indicates that the ejector modifies the particle size distribution significantly. The ejector could change the skewness of the distribution in an unpredictable and inconsistent manner. Furthermore, when the variability of particle counts in individual size ranges as a result of the ejector is examined, one finds that the variability is greater for DOP particles in the size range of 40-150 nm than for NaCl particles in the size range of 30 to 350 nm. The numbers of particle counts in this size region are high enough that the Poisson counting errors are small (<10%) compared with the tail regions. This result shows that the ejector device could have a higher bin-to-bin counting uncertainty for "soft" particles such as DOP than for a solid dry particle like NaCl. The results suggest that it may be difficult to precisely characterize the size distribution of particles ejected from the mini-dilution system if the particle is not solid.
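The geometric mean diameter and geometric standard deviation used throughout this abstract are simply the exponentiated mean and standard deviation of the log-diameters. A minimal sketch, using invented illustrative values (median 100 nm, GSD 1.8) rather than the study's measurements:

```python
import math
import random
import statistics

def gmd_gsd(diameters):
    """Geometric mean diameter and geometric standard deviation of a
    particle-size sample, computed on the log scale."""
    logs = [math.log(d) for d in diameters]
    return math.exp(statistics.fmean(logs)), math.exp(statistics.stdev(logs))

# Synthetic log-normal aerosol: median diameter 100 nm, GSD 1.8 (assumed values).
rng = random.Random(0)
sample = [math.exp(rng.gauss(math.log(100.0), math.log(1.8))) for _ in range(5000)]
gmd, gsd = gmd_gsd(sample)
```

Comparing these two statistics before and after the ejector is exactly the first/second-moment check described above; skewness, the third moment, is what revealed the difference.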
NASA Astrophysics Data System (ADS)
Reed, Jason; Hsueh, Carlin; Mishra, Bud; Gimzewski, James K.
2008-09-01
We have used an atomic force microscope to examine a clinically derived sample of single-molecule gene transcripts, in the form of double-stranded cDNA, (c: complementary) obtained from human cardiac muscle without the use of polymerase chain reaction (PCR) amplification. We observed a log-normal distribution of transcript sizes, with most molecules being in the range of 0.4-7.0 kilobase pairs (kb) or 130-2300 nm in contour length, in accordance with the expected distribution of mRNA (m: messenger) sizes in mammalian cells. We observed novel branching structures not previously known to exist in cDNA, and which could have profound negative effects on traditional analysis of cDNA samples through cloning, PCR and DNA sequencing.
Influence of particle size distribution on nanopowder cold compaction processes
NASA Astrophysics Data System (ADS)
Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.
2017-06-01
Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to well-known contact laws, involves dispersion forces of attraction and the possibility of interparticle solid bridge formation, which are of particular importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of varying width. A non-monotonic dependence of compact density on powder content is revealed in bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.
Grain size distribution in sheared polycrystals
NASA Astrophysics Data System (ADS)
Sarkar, Tanmoy; Biswas, Santidan; Chaudhuri, Pinaki; Sain, Anirban
2017-12-01
Plastic deformation in solids induced by external stresses is of both fundamental and practical interest. Using both phase field crystal modeling and molecular dynamics simulations, we study the shear response of monocomponent polycrystalline solids. We subject mesocale polycrystalline samples to constant strain rates in a planar Couette flow geometry for studying its plastic flow, in particular its grain deformation dynamics. As opposed to equilibrium solids where grain dynamics is mainly driven by thermal diffusion, external stress/strain induce a much higher level of grain deformation activity in the form of grain rotation, coalescence, and breakage, mediated by dislocations. Despite this, the grain size distribution of this driven system shows only a weak power-law correction to its equilibrium log-normal behavior. We interpret the grain reorganization dynamics using a stochastic model.
Decorin and biglycan of normal and pathologic human corneas
NASA Technical Reports Server (NTRS)
Funderburgh, J. L.; Hevelone, N. D.; Roth, M. R.; Funderburgh, M. L.; Rodrigues, M. R.; Nirankari, V. S.; Conrad, G. W.
1998-01-01
PURPOSE: Corneas with scars and certain chronic pathologic conditions contain highly sulfated dermatan sulfate, but little is known of the core proteins that carry these atypical glycosaminoglycans. In this study the proteoglycan proteins attached to dermatan sulfate in normal and pathologic human corneas were examined to identify primary genes involved in the pathobiology of corneal scarring. METHODS: Proteoglycans from human corneas with chronic edema, bullous keratopathy, and keratoconus and from normal corneas were analyzed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), quantitative immunoblotting, and immunohistology with peptide antibodies to decorin and biglycan. RESULTS: Proteoglycans from pathologic corneas exhibit increased size heterogeneity and binding of the cationic dye alcian blue compared with those in normal corneas. Decorin and biglycan extracted from normal and diseased corneas exhibited similar molecular size distribution patterns. In approximately half of the pathologic corneas, the level of biglycan was elevated an average of seven times above normal, and decorin was elevated approximately three times above normal. The increases were associated with highly charged molecular forms of decorin and biglycan, indicating modification of the proteins with dermatan sulfate chains of increased sulfation. Immunostaining of corneal sections showed an abnormal stromal localization of biglycan in pathologic corneas. CONCLUSIONS: The increased dermatan sulfate associated with chronic corneal pathologic conditions results from stromal accumulation of decorin and particularly of biglycan in the affected corneas. These proteins bear dermatan sulfate chains with increased sulfation compared with normal stromal proteoglycans.
Rainford, James L; Hofreiter, Michael; Mayhew, Peter J
2016-01-08
Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates may lead to misleading conclusions when applied to other major terrestrial lineages. 
Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations on available data within the clade and modeling approaches for the resolution of trees of higher taxa, the resolution of which may collectively enhance our understanding of this key component of terrestrial ecosystems.
Antweiler, Ronald C.
2015-01-01
The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
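The √2/2 × detection-limit substitution found to work well at low censoring is trivial to apply. A sketch, with `None` marking a below-detection-limit observation (an assumed encoding, not from the study):

```python
import math

def substitute_censored(values, detection_limit):
    """Replace below-detection-limit observations (None) with (sqrt(2)/2) * DL,
    the substitution reported to perform well at low degrees of censoring."""
    fill = math.sqrt(2.0) / 2.0 * detection_limit
    return [fill if v is None else v for v in values]

# Hypothetical concentrations with a detection limit of 1.0.
group = [None, 1.4, 2.9, None, 5.1]
filled = substitute_censored(group, 1.0)
```

After substitution the two groups can be compared with an ordinary two-sample test; the abstract's caution is that no substitution rescues comparisons once each group is more than about 40% censored.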
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language.
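Two of the simpler ingredients compared in the article, Silverman's rule of thumb and a plain Gaussian KDE, can be sketched as follows (illustrative only; the study's simulations cover many more selectors, and production work would use an R or SciPy implementation):

```python
import math
import random
import statistics

def silverman_bandwidth(xs):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(sd, IQR / 1.34) * n**(-1/5)."""
    n = len(xs)
    sd = statistics.stdev(xs)
    q1, _, q3 = statistics.quantiles(xs, n=4)
    spread = min(sd, (q3 - q1) / 1.34)
    return 0.9 * spread * n ** (-0.2)

def gaussian_kde(xs, h):
    """Return a function that estimates the density at a point."""
    norm = 1.0 / (len(xs) * h * math.sqrt(2.0 * math.pi))
    return lambda x: norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)

# Standard-normal data: the rule of thumb is well suited to this easy case.
rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
f = gaussian_kde(data, silverman_bandwidth(data))
```

The rule of thumb is derived for roughly normal data, which is precisely why the article finds it competitive at small samples but outperformed by plug-in and adaptive selectors on skewed and multimodal shapes.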
Effect of Pigmentation in Particulate Formation from Fluoropolymer Thermodegradation in Microgravity
NASA Technical Reports Server (NTRS)
Srivastava, Rajiv; McKinnon, J. Thomas; Todd, Paul
1998-01-01
Fires aboard spacecraft have occurred as a result of overheated electrical wires and thermodegradation of their insulation, which is composed of fluoropolymers. The particulate products of polymer thermodegradation are only 20-50 nm in diameter and are thought to play a role in "polymer fume fever". Therefore an experimental study of the particulates produced by intense ohmic heating of various fluoropolymer-insulated 20 AWG copper wire (representative of spacecraft materials) was undertaken in normal gravity and in microgravity. The 2.2 s drop facility at NASA LeRC and the 1.5 s drop facility at the Colorado School of Mines were used to achieve low gravity conditions. Thermophoretic sampling was used for particulate collection. Transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM) were used to characterize the smoke particulates. It was found that the color of PTFE (polytetrafluoroethylene) insulation has an overwhelming effect on the size, shape, morphology and composition of the particulates. Size distributions and shape analyses using computerized image analysis showed that particle size distributions were also dependent on the pigment of the fluoropolymer insulation. The influence of pigment was observed in experiments under both normal gravity and microgravity. Under microgravity conditions, owing to the lack of natural convective transport of particulates, much more particle aggregation was observed, and the nature of the aggregates was dependent on the color of the insulation.
Kassemi, Mohammad; Thompson, David
2016-09-01
An analytic Population Balance Equation model is used to assess the efficacy of citrate, pyrophosphate, and augmented fluid intake as dietary countermeasures aimed at reducing the risk of renal stone formation for astronauts. The model uses the measured biochemical profile of the astronauts as input and predicts the steady-state size distribution of the nucleating, growing, and agglomerating renal calculi subject to biochemical changes brought about by administration of these dietary countermeasures. Numerical predictions indicate that an increase in citrate levels beyond its average normal ground-based urinary values is beneficial but only to a limited extent. Unfortunately, results also indicate that any decline in the citrate levels during space travel below its normal urinary values on Earth can easily move the astronaut into the stone-forming risk category. Pyrophosphate is found to be an effective inhibitor since numerical predictions indicate that even at quite small urinary concentrations, it has the potential of shifting the maximum crystal aggregate size to a much smaller and plausibly safer range. Finally, our numerical results predict that a decline in urinary volume below 1.5 liters/day can act as a dangerous promoter of renal stone development in microgravity, while urinary volume levels of 2.5-3 liters/day can serve as effective space countermeasures.
Chao, Ming; Wei, Jie; Narayanasamy, Ganesh; Yuan, Yading; Lo, Yeh-Chi; Peñagarícano, José A
2018-05-01
To investigate three-dimensional cluster structure and its correlation to clinical endpoints in heterogeneous dose distributions from intensity modulated radiation therapy, twenty-five clinical plans from twenty-one head and neck (HN) patients were used for a phenomenological study of the cluster structure formed from the dose distributions of organs at risk (OARs) close to the planning target volumes (PTVs). Initially, OAR clusters were searched to examine the pattern consistency among ten HN patients and five clinically similar plans from another HN patient. Second, clusters of the esophagus from another ten HN patients were scrutinized to correlate their sizes to radiobiological parameters. Finally, an extensive Monte Carlo (MC) procedure was implemented to gain deeper insights into the behavioral properties of the cluster formation. Clinical studies showed that OAR clusters had drastic differences despite similar PTV coverage among different patients, and the radiobiological parameters failed to positively correlate with the cluster sizes. The MC study demonstrated the inverse relationship between the cluster size and the cluster connectivity, and the nonlinear changes in cluster size with dose thresholds. In addition, the clusters were insensitive to the shape of OARs. The results demonstrated that the cluster size could serve as an insightful index of normal tissue damage: plans with the same dose-volume metrics might thus lead to different clinical outcomes.
Correlation between size distribution and luminescence properties of spool-shaped InAs quantum dots
NASA Astrophysics Data System (ADS)
Xie, H.; Prioli, R.; Torelly, G.; Liu, H.; Fischer, A. M.; Jakomin, R.; Mourão, R.; Kawabata, R.; Pires, M. P.; Souza, P. L.; Ponce, F. A.
2017-05-01
InAs QDs embedded in an AlGaAs matrix have been produced by MOVPE with a partial capping and annealing technique to achieve controllable QD energy levels that could be useful for solar cell applications. The resulting spool-shaped QDs are around 5 nm in height and have a log-normal diameter distribution, observed by TEM to range from 5 to 15 nm. Two photoluminescence peaks associated with QD emission are attributed to the ground and first excited state transitions. The luminescence peak width is correlated with the distribution of QD diameters through the diameter-dependent QD energy levels.
Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P
2007-08-29
The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increases, the variance among sample means will decrease and means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing, and it has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that evolution of group size could result from selection on variances in reproductive fitness. In addition, these benefits may help explain why sociality has evolved in some groups and not others.
Soglia, Francesca; Gao, Jingxian; Mazzoni, Maurizio; Puolanne, Eero; Cavani, Claudio; Petracci, Massimiliano; Ertbjerg, Per
2017-09-01
Recently the poultry industry faced an emerging muscle abnormality termed wooden breast (WB), the prevalence of which has dramatically increased in the past few years. Considering the incomplete knowledge concerning this condition and the lack of information on possible variations due to the intra-fillet sampling locations (superficial vs. deep position) and aging of the samples, this study aimed at investigating the effect of 7-d storage of broiler breast muscles on histology, texture, and particle size distribution, evaluating whether the sampling position exerts a relevant role in determining the main features of WB. With regard to the histological observations, severe myodegeneration accompanied by accumulation of connective tissue was observed within the WB cases, irrespective of the intra-fillet sampling position. No changes in the histological traits took place during aging in either the normal or the WB samples. As to textural traits, although a progressive tenderization process took place during storage (P ≤ 0.001), the differences among the groups were mainly detected when raw rather than cooked meat was analyzed, with the WB samples exhibiting the highest (P ≤ 0.001) 80% compression values. In spite of the increased amount of connective tissue components in the WB cases, their thermally labile cross-links may account for the compression and shear-force values of cooked WB samples being similar to those of normal breast cases. Similarly, the enlargement of the extracellular matrix and fibrosis might help explain the different fragmentation patterns observed between the superficial and the deep layer in the WB samples, with the superficial part exhibiting a higher amount of larger particles and an increase in particles of larger size during storage, compared to normal breasts.
Experimental study of microbubble drag reduction on an axisymmetric body
NASA Astrophysics Data System (ADS)
Song, Wuchao; Wang, Cong; Wei, Yingjie; Zhang, Xiaoshi; Wang, Wei
2018-01-01
Microbubble drag reduction on an axisymmetric body is experimentally investigated in a turbulent water tunnel. Microbubbles are created by injecting compressed air through porous media with various average pore sizes. The morphology of the microbubble flow and the size distribution of the microbubbles are observed with a high-speed visualization system. Drag measurements are obtained with a force balance and presented as a function of void ratio. The results show that when the air injection flow rate is high, the uniformly dispersed microbubble flow coalesces into an air layer, with a larger increment rate of the drag reduction ratio. The diameter distributions of the microbubbles under various conditions follow a normal distribution. Microbubble drag reduction can be divided into three distinguishable regions in which the drag reduction ratio experiences an increase stage, a rapid increase stage and a stability stage, respectively, corresponding to the various morphologies of the microbubble flow. Moreover, the drag reduction ratio increases with decreasing pore size of the porous medium at identical void ratio at low speeds, while the effect of pore size on drag reduction is gradually reduced until it disappears as the free stream speed increases, which indicates that smaller microbubbles are more efficient at drag reduction. These results help to improve the understanding of microbubble drag reduction and provide helpful references for practical applications.
Determining size-specific emission factors for environmental tobacco smoke particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klepeis, Neil E.; Apte, Michael G.; Gundel, Lara A.
Because size is a major controlling factor for indoor airborne particle behavior, human particle exposure assessments will benefit from improved knowledge of size-specific particle emissions. We report a method of inferring size-specific mass emission factors for indoor sources that makes use of an indoor aerosol dynamics model, measured particle concentration time series data, and an optimization routine. This approach provides, in addition to estimates of the emissions size distribution and integrated emission factors, estimates of deposition rate, an enhanced understanding of particle dynamics, and information about model performance. We applied the method to size-specific environmental tobacco smoke (ETS) particle concentrations measured every minute with an 8-channel optical particle counter (PMS-LASAIR; 0.1-2+ micrometer diameters) and every 10 or 30 min with a 34-channel differential mobility particle sizer (TSI-DMPS; 0.01-1+ micrometer diameters) after a single cigarette or cigar was machine-smoked inside a low air-exchange-rate 20 m³ chamber. The aerosol dynamics model provided good fits to observed concentrations when using optimized values of mass emission rate and deposition rate for each particle size range as input. Small discrepancies observed in the first 1-2 hours after smoking are likely due to the effect of particle evaporation, a process neglected by the model. Size-specific ETS particle emission factors were fit with log-normal distributions, yielding an average mass median diameter of 0.2 micrometers and an average geometric standard deviation of 2.3, with no systematic differences between cigars and cigarettes. The equivalent total particle emission rate, obtained by integrating each size distribution, was 0.2-0.7 mg/min for cigars and 0.7-0.9 mg/min for cigarettes.
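The reported log-normal emission size distribution (mass median diameter 0.2 µm, geometric standard deviation 2.3) pins down the mass fraction in any size range via the log-normal CDF. A sketch, with the 0.8 mg/min total rate chosen arbitrarily from within the reported cigarette range:

```python
import math

MMD = 0.2   # mass median diameter, micrometers (reported average)
GSD = 2.3   # geometric standard deviation (reported average)

def lognormal_cdf(d, median, gsd):
    """Fraction of emitted mass in particles smaller than diameter d."""
    return 0.5 * (1.0 + math.erf(math.log(d / median) / (math.sqrt(2.0) * math.log(gsd))))

# Fraction of ETS mass emitted below 1 micrometer
frac_submicron = lognormal_cdf(1.0, MMD, GSD)

# Mass emission per optical-counter-like size bin for a hypothetical cigarette
# emitting 0.8 mg/min in total (within the reported 0.7-0.9 mg/min range)
total_rate = 0.8
edges = [0.1, 0.2, 0.5, 1.0, 2.0]
bin_mass = [total_rate * (lognormal_cdf(hi, MMD, GSD) - lognormal_cdf(lo, MMD, GSD))
            for lo, hi in zip(edges, edges[1:])]
```

By construction, half the emitted mass lies below the mass median diameter, and with GSD 2.3 the great majority of the mass is submicron.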
NASA Astrophysics Data System (ADS)
Yamashita, S.; Nakajo, T.; Naruse, H.
2009-12-01
In this study, we statistically classified the grain size distribution of the bottom surface sediment on a microtidal sand flat to analyze the depositional processes of the sediment. Multiple classification analysis revealed that two types of sediment populations exist in the bottom surface sediment. Then, we employed the sediment trend model developed by Gao and Collins (1992) for the estimation of sediment transport pathways. As a result, we found that statistical discrimination of the bottom surface sediment provides useful information for the sediment trend model when dealing with various types of sediment transport processes. The microtidal sand flat along the Kushida River estuary, Ise Bay, central Japan, was investigated, and 102 bottom surface sediment samples were obtained. Their grain size distribution patterns were measured by the settling tube method, and the grain size distribution parameters (mud and gravel contents, mean grain size, coefficient of variation (CV), skewness, kurtosis, and the 5th, 25th, 50th, 75th, and 95th percentiles) were calculated. Here, CV is the sorting value normalized by the mean grain size. Two classical statistical methods, principal component analysis (PCA) and fuzzy cluster analysis, were applied. The results of PCA showed that the bottom surface sediment of the study area is mainly characterized by grain size (mean grain size and the 5th-95th percentiles) and the CV value, which have the largest absolute factor loadings on principal component (PC) 1. PC1 is interpreted as being indicative of the grain-size trend, in which a finer grain-size distribution indicates better size sorting. The frequency distribution of PC1 has a bimodal shape and suggests the existence of two types of sediment populations. Therefore, we applied fuzzy cluster analysis, the results of which revealed two groupings of the sediment (Cluster 1 and Cluster 2). Cluster 1 shows a lower value of PC1, indicating coarse and poorly sorted sediments.
Cluster 1 sediments are distributed around the branched channel from Kushida River and show an expanding distribution from the river mouth toward the northeast direction. Cluster 2 shows a higher value of PC1, indicating fine and well-sorted sediments; this cluster is distributed in a distant area from the river mouth, including the offshore region. Therefore, Cluster 1 and Cluster 2 are interpreted as being deposited by fluvial and wave processes, respectively. Finally, on the basis of this distribution pattern, the sediment trend model was applied in areas dominated separately by fluvial and wave processes. Resultant sediment transport patterns showed good agreement with those obtained by field observations. The results of this study provide an important insight into the numerical models of sediment transport.
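The PCA-then-cluster pipeline can be sketched in miniature with two synthetic grain-size parameters; the populations, their means, and the midpoint split standing in for fuzzy clustering are all invented for illustration:

```python
import math
import random
import statistics

random.seed(3)

# Synthetic stand-ins for two of the measured grain-size parameters (mean grain
# size in phi units and CV): a coarse, poorly sorted population and a fine,
# well sorted one, as PC1 separates them in the study.
coarse = [(random.gauss(2.0, 0.2), random.gauss(0.8, 0.05)) for _ in range(50)]
fine = [(random.gauss(4.0, 0.2), random.gauss(0.3, 0.05)) for _ in range(52)]
data = coarse + fine

def standardize(col):
    m, s = statistics.fmean(col), statistics.stdev(col)
    return [(v - m) / s for v in col]

z1 = standardize([d[0] for d in data])
z2 = standardize([d[1] for d in data])

# Leading principal component of the 2x2 correlation matrix (closed form:
# for correlation r, the first eigenvector is (1, sign(r)) / sqrt(2))
r = statistics.fmean([x * y for x, y in zip(z1, z2)])
sign = 1.0 if r >= 0 else -1.0
pc1 = [(x + sign * y) / math.sqrt(2.0) for x, y in zip(z1, z2)]

# A bimodal PC1 suggests two populations; a midpoint split stands in here for
# the fuzzy clustering used in the study.
labels = [p > 0 for p in pc1]
```

With well-separated populations the PC1 scores form two clearly distinct modes, and the split recovers the original groupings.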
Dose coefficients in pediatric and adult abdominopelvic CT based on 100 patient models.
Tian, Xiaoyu; Li, Xiang; Segars, W Paul; Frush, Donald P; Paulson, Erik K; Samei, Ehsan
2013-12-21
Recent studies have shown the feasibility of estimating patient dose from a CT exam using CTDI(vol)-normalized-organ dose (denoted as h), DLP-normalized-effective dose (denoted as k), and DLP-normalized-risk index (denoted as q). However, previous studies were limited to a small number of phantom models. The purpose of this work was to provide dose coefficients (h, k, and q) across a large number of computational models covering a broad range of patient anatomy, age, size percentile, and gender. The study consisted of 100 patient computer models (age range, 0 to 78 y.o.; weight range, 2-180 kg) including 42 pediatric models (age range, 0 to 16 y.o.; weight range, 2-80 kg) and 58 adult models (age range, 18 to 78 y.o.; weight range, 57-180 kg). Multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare) were included. A previously-validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which h, k, and q were derived. The relationships between h, k, and q and patient characteristics (size, age, and gender) were ascertained. The differences in conversion coefficients across the scanners were further characterized. CTDI(vol)-normalized-organ dose (h) showed an exponential decrease with increasing patient size. For organs within the image coverage, the average differences of h across scanners were less than 15%. That value increased to 29% for organs on the periphery or outside the image coverage, and to 8% for distributed organs. The DLP-normalized-effective dose (k) decreased exponentially with increasing patient size. For a given gender, the DLP-normalized-risk index (q) showed an exponential decrease with both increasing patient size and patient age. The average differences in k and q across scanners were 8% and 10%, respectively.
This study demonstrated that the knowledge of patient information and CTDIvol/DLP values may be used to estimate organ dose, effective dose, and risk index in abdominopelvic CT based on the coefficients derived from a large population of pediatric and adult patients.
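The reported exponential decrease of h with patient size is the usual basis for such coefficients. A sketch of fitting h = a·exp(−b·size) by log-linear regression, on invented (size, h) pairs rather than the study's data:

```python
import math

# Hypothetical (patient size [cm], CTDIvol-normalized organ dose h) pairs
# illustrating the exponential decrease h = a * exp(-b * size).
data = [(15, 1.8), (20, 1.4), (25, 1.1), (30, 0.85), (35, 0.66)]

# Ordinary least-squares regression of ln(h) on size
n = len(data)
sx = sum(d for d, _ in data)
sy = sum(math.log(h) for _, h in data)
sxx = sum(d * d for d, _ in data)
sxy = sum(d * math.log(h) for d, h in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = -slope
a = math.exp((sy + b * sx) / n)

def h_pred(size):
    """Predicted conversion coefficient at a given patient size."""
    return a * math.exp(-b * size)
```

Given such fitted coefficients, an estimated organ dose is simply h_pred(size) multiplied by the exam's CTDIvol.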
Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.
Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I
2007-02-01
We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R(max), and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy epsilon(R) = R. The effective chemical potential mu governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter beta decreases. Interestingly, the product beta·mu is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R(g), where R(g) is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed in Fermi-like subdistributions for different atomic types tau, F(tau)(r), with Sigma(tau)F(tau)(r) = F(r), which depend on two additional parameters mu(tau) and h(tau). The chemical potential mu(tau) affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h(tau), which appears in a type-dependent atomic effective energy, epsilon(tau)(r) = h(tau)r, and is strongly correlated to available hydrophobicity scales.
Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, i.e. epsilon(tau)*(r) = h(tau)*r^alpha(tau), in which case a correlation with hydrophobicity scales is found for the product alpha(tau)h(tau)*. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center, reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
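The shell model described above can be evaluated directly: the atom count grows as r² (constant core density) and is cut off by a Fermi-Dirac factor with effective energy eps(r) = r. The beta and mu values below are illustrative, not fitted:

```python
import math

def fermi_shell(r, beta, mu):
    """Relative number of atoms at reduced distance r: the r**2 growth of a
    spherical shell damped by a Fermi-Dirac occupancy with effective energy
    eps(r) = r, chemical potential mu, and inverse temperature beta."""
    return r * r / (math.exp(beta * (r - mu)) + 1.0)

# Hypothetical parameters: mu sets the half-occupancy radius, beta the
# sharpness of the density fall-off at the globule surface.
beta, mu = 8.0, 1.0
rs = [i / 100.0 for i in range(1, 300)]
profile = [fermi_shell(r, beta, mu) for r in rs]
r_max = rs[profile.index(max(profile))]   # size-dependent density maximum
```

The profile reproduces the qualitative shape reported in the abstract: quadratic growth in the core, a maximum near the half-occupancy radius, and a rapid fall-off beyond it.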
O'Malley, Lauren; Pignol, Jean-Philippe; Beachey, David J; Keller, Brian M; Presutti, Joseph; Sharpe, Michael
2006-05-21
Using efficient immobilization and dedicated beam collimation devices, stereotactic radiosurgery ensures highly conformal treatment of small tumours with limited microscopic extension. One contribution to normal tissue irradiation remains the radiological penumbra. This work aims at demonstrating that intermediate energy photons (IEP), above orthovoltage but below megavoltage, improve dose distribution for stereotactic radiosurgery for small irradiation field sizes due to a dramatic reduction of radiological penumbra. Two different simulation systems were used: (i) Monte Carlo simulation to investigate the dose distribution of monoenergetic IEP between 100 keV and 1 MeV in water phantom; (ii) the Pinnacle3 TPS including a virtual IEP unit to investigate the dosimetry benefit of treating with 11 non-coplanar beams a 2 cm tumour in the middle of a brain adjacent to a 1 mm critical structure. Radiological penumbrae below 300 microm are generated for field size below 2 x 2 cm2 using monoenergetic IEP beams between 200 and 400 keV. An 800 kV beam generated in a 0.5 mm tungsten target maximizes the photon intensity in this range. Pinnacle3 confirms the dramatic reduction in penumbra size. DVHs show for a constant dose distribution conformality, improved dose distribution homogeneity and better sparing of critical structures using a 800 kV beam compared to a 6 MV beam.
NASA Astrophysics Data System (ADS)
Onnela, Jukka-Pekka; Töyli, Juuso; Kaski, Kimmo
2009-02-01
Tick size is an important aspect of the micro-structural level organization of financial markets. It is the smallest institutionally allowed price increment, has a direct bearing on the bid-ask spread, influences the strategy of trading order placement in electronic markets, affects the price formation mechanism, and appears to be related to the long-term memory of volatility clustering. In this paper we investigate the impact of tick size on stock returns. We start with a simple simulation to demonstrate how continuous returns become distorted after confining the price to a discrete grid governed by the tick size. We then move on to a novel experimental set-up that combines decimalization pilot programs and cross-listed stocks in New York and Toronto. This allows us to observe a set of stocks traded simultaneously under two different ticks while holding all security-specific characteristics fixed. We then study the normality of the return distributions and carry out fits to the chosen distribution models. Our empirical findings are somewhat mixed and in some cases appear to challenge the simulation results.
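The distortion mechanism described in the simulation can be reproduced in a few lines: generate continuous returns, confine the price to a tick grid, and compare. All parameters (tick, volatility, path length) are invented for illustration:

```python
import math
import random

random.seed(42)

# A continuous geometric random walk for the price (hypothetical parameters),
# then the same path confined to a discrete grid with the given tick size.
tick = 0.05
p0, n, sigma = 20.0, 20000, 0.001
prices = [p0]
for _ in range(n):
    prices.append(prices[-1] * math.exp(random.gauss(0.0, sigma)))
gridded = [round(p / tick) * tick for p in prices]

def returns(ps):
    return [b / a - 1.0 for a, b in zip(ps, ps[1:])]

r_cont = returns(prices)
r_grid = returns(gridded)

# The grid collapses sub-tick moves to exactly zero and forces the remaining
# returns onto a few discrete values, distorting the return distribution.
zero_frac = sum(1 for x in r_grid if x == 0.0) / len(r_grid)
```

When the per-step price move is comparable to or smaller than the tick, a large fraction of gridded returns is exactly zero, so normality tests applied to the gridded returns behave very differently from those applied to the continuous ones.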
Bullying and Cyberbullying among Deaf Students and Their Hearing Peers: An Exploratory Study
ERIC Educational Resources Information Center
Bauman, Sheri; Pero, Heather
2011-01-01
A questionnaire on bullying and cyberbullying was administered to 30 secondary students (Grades 7-12) in a charter school for the Deaf and hard of hearing and a matched group of 22 hearing students in a charter secondary school on the same campus. Because the sample size was small and distributions non-normal, results are primarily descriptive and…
Scattering from Rock and Rock Outcrops
2014-09-30
orientations and size distributions reflect the internal fault organization of the bedrock. The plot in Fig. 3 displays experimentally determined PFA... mechanisms contributing could be scattering from small-scale roughness combined with specular scattering from facets oriented close to normal incidence to... Larvik, Norway, made with a stereo photogrammetry system.
Aspects of droplet and particle size control in miniemulsions
NASA Astrophysics Data System (ADS)
Saygi-Arslan, Oznur
Miniemulsion polymerization has become increasingly popular among researchers since it can provide significant advantages over conventional emulsion polymerization in certain cases, such as production of high-solids, low-viscosity latexes with better stability and polymerization of highly water-insoluble monomers. Miniemulsions are relatively stable oil (e.g., monomer) droplets, which can range in size from 50 to 500 nm, and are normally dispersed in an aqueous phase with the aid of a surfactant and a costabilizer. These droplets are the primary locus of the initiation of the polymerization reaction. Since particle formation takes place in the monomer droplets, theoretically, in miniemulsion systems the final particle size can be controlled by the initial droplet size. The miniemulsion preparation process typically generates broad droplet size distributions, and there is no complete treatment in the literature regarding the control of the mean droplet size or size distribution. This research aims to control the miniemulsion droplet size and its distribution. In situ emulsification, where the surfactant is synthesized spontaneously at the oil/water interface, has been put forth as a simpler method for the preparation of miniemulsion-like systems. Using the in situ method of preparation, emulsion stability and droplet and particle sizes were monitored and compared with conventional emulsions and miniemulsions. Styrene emulsions prepared by the in situ method do not demonstrate the stability of a comparable miniemulsion. Upon polymerization, the final particle size generated from the in situ emulsion did not differ significantly from the comparable conventional emulsion polymerization; the reaction mechanism for in situ emulsions is more like conventional emulsion polymerization rather than miniemulsion polymerization.
Similar results were found when the in situ method was applied to controlled free radical polymerizations (CFRP), which have been advanced as a potential application of the method. Molecular weight control was found to be achieved via diffusion of the CFRP agents through the aqueous phase owing to their limited water solubilities. The effects of adsorption rate and energy on the droplet size and size distribution of miniemulsions using different surfactants (sodium lauryl sulfate (SLS), sodium dodecylbenzene sulfonate (SDBS), Dowfax 2A1, Aerosol OT-75PG, sodium n-octyl sulfate (SOS), and sodium n-hexadecyl sulfate (SHS)) were analyzed. For this purpose, first, the dynamics of surfactant adsorption at an oil/water interface were examined over a range of surfactant concentrations by the drop volume method, and then adsorption rates of the different surfactants were determined for the early stages of adsorption. The results do not show a direct relationship between adsorption rate and miniemulsion droplet size and size distribution. Adsorption energies of these surfactants were also calculated by the Langmuir adsorption isotherm equation, and no correlation between adsorption energy and miniemulsion droplet size was found. In order to understand the mechanism of the miniemulsification process, the effects of breakage and coalescence processes on droplet size distributions were observed at different surfactant concentrations, monomer ratios, and homogenization conditions. A coalescence and breakup mechanism for miniemulsification is proposed to explain the size distribution of droplets. The multimodal droplet size distribution of ODMA miniemulsions was controlled by the breakage mechanism. The results also showed that, at a surfactant concentration when 100% surface coverage was obtained, the droplet size distribution became unimodal.
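The Langmuir analysis mentioned above is a standard linearization; a sketch that generates exact Langmuir data from invented constants and recovers them via C/gamma = 1/(K·gamma_max) + C/gamma_max:

```python
# Hypothetical adsorption data generated from a Langmuir isotherm
# gamma = gamma_max * K * C / (1 + K * C), then recovered by the standard
# linearization C/gamma = 1/(K * gamma_max) + C/gamma_max.
K_true, gmax_true = 2.0, 4.0            # illustrative constants, not measured
C = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]     # bulk concentrations
gamma = [gmax_true * K_true * c / (1.0 + K_true * c) for c in C]

# Ordinary least squares on the linearized form
x, y = C, [c / g for c, g in zip(C, gamma)]
n = len(x)
slope = (n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) / (n * sum(a * a for a in x) - sum(x) ** 2)
intercept = (sum(y) - slope * sum(x)) / n
gmax_fit = 1.0 / slope                  # monolayer capacity
K_fit = 1.0 / (intercept * gmax_fit)    # adsorption equilibrium constant
```

K_fit here plays the role of the adsorption-energy parameter: in the Langmuir framework the adsorption energy follows from the equilibrium constant.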
Experimental investigation of cephapirin adsorption to quartz filter sands and dune sands
NASA Astrophysics Data System (ADS)
Peterson, Jonathan W.; O'Meara, Theresa A.; Seymour, Michael D.
2008-08-01
Batch experiments were performed to investigate cephapirin (a widely used veterinary antibiotic) adsorption on various size sands of low total organic carbon content (0.08-0.36 wt%). In the aqueous concentration range investigated (11-112 μmol/L cephapirin), adsorption to nearly pure quartz filter sands (0.50-3.35 mm diameter) is low. Isotherms are S-shaped and most display a region of minimum adsorption, where decreased adsorption occurs with increasing solution concentration, followed by increased adsorption at higher concentrations. Cephapirin adsorption to quartz-rich, feldspar-bearing dune sands (0.06-0.35 mm diameter), and the smallest quartz filter sand investigated (0.43-0.50 mm), can be described by linear sorption isotherms over the range of concentrations investigated. Distribution coefficients (Kd) range from 0.94 to 3.45 L/kg. No systematic relationship exists between grain size and amount of adsorption for any of the sands investigated. Cephapirin adsorption is positively correlated to the feldspar ratio (K-feldspar/(albite + Ca-plagioclase)). Feldspar-ratio normalization of distribution coefficients was more effective than organic carbon normalization at reducing the variability of Kd values in the dune sands investigated.
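A linear sorption isotherm means Kd is just the slope of sorbed amount versus aqueous concentration through the origin; a sketch on invented points chosen to land inside the reported 0.94-3.45 L/kg range:

```python
# Hypothetical batch-experiment points for a linear sorption isotherm
# S = Kd * C (C: aqueous concentration, umol/L; S: sorbed amount, umol/kg).
C = [11.0, 28.0, 56.0, 112.0]
S = [24.2, 61.6, 123.2, 246.4]

# Least-squares slope through the origin gives the distribution coefficient
# in L/kg; here it should fall inside the reported 0.94-3.45 L/kg range.
Kd = sum(c * s for c, s in zip(C, S)) / sum(c * c for c in C)
```

The same slope computation could be normalized by the feldspar ratio of each sand, analogous to the organic-carbon normalization (Koc) common in sorption studies.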
Apparent Anomalous Diffusion in the Cytoplasm of Human Cells: The Effect of Probes' Polydispersity.
Kalwarczyk, Tomasz; Kwapiszewska, Karina; Szczepanski, Krzysztof; Sozanski, Krzysztof; Szymanski, Jedrzej; Michalska, Bernadeta; Patalas-Krawczyk, Paulina; Duszynski, Jerzy; Holyst, Robert
2017-10-26
This work, based on in vivo and in vitro measurements, as well as in silico simulations, provides a consistent analysis of diffusion of polydisperse nanoparticles in the cytoplasm of living cells. Using the example of fluorescence correlation spectroscopy (FCS), we show the effect of polydispersity of probes on the experimental results. Although individual probes undergo normal diffusion, in the ensemble of probes an effective broadening of the distribution of diffusion times occurs, similar to anomalous diffusion. We introduced fluorescently labeled dextrans into the cytoplasm of HeLa cells and found that cytoplasmic hydrodynamic drag, exponentially dependent on probe size, extraordinarily broadens the distribution of diffusion times across the focal volume. As a result, the in vivo FCS data were effectively fitted with the anomalous subdiffusion model, while for a monodisperse probe the normal diffusion model was most suitable. The diffusion time obtained from the anomalous diffusion model corresponds to a probe whose size is determined by the weight-average molecular weight of the polymer. The apparent anomaly exponent decreases with increasing polydispersity of the probes. Our results and methodology can be applied in intracellular studies of the mobility of nanoparticles, polymers, or oligomerizing proteins.
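The broadening effect can be demonstrated with a toy ensemble: each probe diffuses normally, but mixing diffusion coefficients across probes produces a heavy-tailed (leptokurtic) ensemble displacement distribution of the kind that anomalous-diffusion fits absorb. The log-normal spread of D is a hypothetical choice:

```python
import math
import random
import statistics

random.seed(7)

# Each probe diffuses normally, but drag couples its diffusion coefficient D
# to its size; a polydisperse batch therefore mixes many D values (here a
# hypothetical log-normal spread of D).
def displacements(log_spread, n=20000):
    out = []
    for _ in range(n):
        D = math.exp(random.gauss(0.0, log_spread))        # per-probe D
        out.append(random.gauss(0.0, math.sqrt(2.0 * D)))  # 1D step, unit time
    return out

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / (v * v) - 3.0

mono = excess_kurtosis(displacements(0.0))   # monodisperse: Gaussian, ~0
poly = excess_kurtosis(displacements(0.8))   # polydisperse: heavy-tailed, > 0
```

The monodisperse ensemble stays Gaussian while the polydisperse one develops strong excess kurtosis, which is one way an anomalous-looking signal can arise from purely normal diffusers.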
Characteristics of Landslide Size Distribution in Response to Different Rainfall Scenarios
NASA Astrophysics Data System (ADS)
Wu, Y.; Lan, H.; Li, L.
2017-12-01
The characteristics of landslide size distribution in response to different rainfall scenarios have long been controversial. To examine these characteristics, we collected a large amount of data, including a shallow-landslide inventory with landslide areas and occurrence times recorded, and a long daily rainfall series fully covering all the landslide occurrences. Three indexes were adopted to quantitatively describe the characteristics of landslide-related rainfall events: rainfall duration, rainfall intensity, and the number of rainy days. The first index, rainfall duration, is derived from the exceptional character of a landslide-related rainfall event, which can be explained in terms of the recurrence interval or return period, according to extreme value theory. The second index, rainfall intensity, is the average rainfall over this duration. The third index is the number of rainy days in this duration. These three indexes were normalized using the standard score method to ensure that they are of the same order of magnitude. Based on these three indexes, landslide-related rainfall events were categorized by a k-means method into four scenarios: moderate rainfall, storm, long-duration rainfall, and long-duration intermittent rainfall. Landslides were in turn categorized into four groups according to the scenarios of the rainfall events related to them. An inverse-gamma distribution was applied to characterize the area distributions of the four landslide groups. A tail index and a rollover of the landslide size distribution can be obtained from the parameters of the distribution. The characteristics of the landslide size distributions show that the rollovers of the size distributions of landslides related to storm and long-duration rainfall are larger than those of landslides in the other two groups.
This may indicate that the location of the rollover shifts right with increasing rainfall intensity and extending rainfall duration. In addition, higher rainfall intensities are prone to trigger larger rainfall-induced landslides, since the tail index of the landslide area distribution is smaller for higher rainfall intensities, which indicates a higher probability of large landslides.
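For an inverse-gamma size distribution, the tail index and rollover mentioned above follow directly from the shape and scale parameters (the mode sits at scale/(shape+1)); the parameter values below are illustrative, not fitted to the inventory:

```python
import math

def invgamma_pdf(x, alpha, beta):
    """Inverse-gamma density p(x) = beta**alpha / Gamma(alpha)
    * x**-(alpha + 1) * exp(-beta / x); alpha is the tail index
    (smaller alpha -> heavier tail -> more large landslides)."""
    return beta ** alpha / math.gamma(alpha) * x ** (-(alpha + 1)) * math.exp(-beta / x)

# Hypothetical parameters for two rainfall scenarios
alpha_storm, beta_storm = 1.4, 500.0   # storm / long-duration group
alpha_mod, beta_mod = 2.0, 500.0       # moderate-rainfall group

# The rollover (modal landslide area) of an inverse-gamma is beta / (alpha + 1)
rollover_storm = beta_storm / (alpha_storm + 1.0)
rollover_mod = beta_mod / (alpha_mod + 1.0)
```

With these numbers the storm group has both a larger rollover and, through its smaller alpha, a heavier tail, mirroring the two qualitative findings of the study.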
NASA Astrophysics Data System (ADS)
Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode
2018-06-01
The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles was performed to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that the cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of the bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in both bubble volume fraction and bubble size lead to higher attenuation by cavitation bubbles and hence a smaller cavitation depth. The current model offers a guideline for implementing UST, especially for structural refinement.
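The two empirical ingredients, a tanh dependence of void fraction on intensity and attenuation that shortens the cavitation depth, can be sketched as follows; the constants and the toy depth formula are assumptions, not the authors' wave-propagation model:

```python
import math

# Hypothetical constants illustrating the reported trends: bubble void
# fraction follows a tanh of the applied ultrasonic intensity, and higher
# bubble attenuation shortens the cavitation depth.
phi_max, I0 = 0.02, 50.0   # saturation void fraction, intensity scale

def void_fraction(intensity):
    """Void fraction saturating with applied intensity (tanh law)."""
    return phi_max * math.tanh(intensity / I0)

def cavitation_depth(intensity, k=0.5):
    """Toy depth estimate, inversely proportional to bubble attenuation
    (a stand-in for the full attenuation model, not the authors' equations)."""
    return 1.0 / (k * void_fraction(intensity) + 1e-9)
```

The sketch reproduces the qualitative behavior: void fraction saturates at high intensity, and cavitation depth decreases monotonically as attenuation grows.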
Christians, S; Schluender, S; van Treel, N D; Behr-Gross, M-E
2016-01-01
Molecular-size distribution by size-exclusion chromatography (SEC) [1] is used for the quantification of unwanted aggregated forms in therapeutic polyclonal antibodies, referred to as human immunoglobulins (Ig) in the European Pharmacopoeia. Considering not only the requirements of the monographs for human normal Ig (0338, 0918 and 2788) [2-4], but also the general chapter on chromatographic techniques (2.2.46) [5], several chromatographic column types are allowed for performing this test. Although the EDQM knowledge database gives only 2 examples of suitable columns as a guide for the user, these monographs permit the use of columns with different lengths and diameters, and do not prescribe either particle size or pore size, which are considered key characteristics of SEC columns. Therefore, the columns used may differ significantly from each other with regard to peak resolution, potentially resulting in ambiguous peak identity assignment. In some cases, this may even lead to situations where the manufacturer and the Official Medicines Control Laboratory (OMCL) in charge of Official Control Authority Batch Release (OCABR) have differing molecular-size distribution profiles for aggregates of the same batch of Ig, even though both laboratories follow the requirements of the relevant monograph. In the present study, several formally acceptable columns and the peak integration results obtained therewith were compared. A standard size-exclusion column with a length of 60 cm and a particle size of 10 µm typically detects only 3 Ig fractions, namely monomers, dimers and polymers. This column type was among the first reliable HPLC columns on the market for this test and very rapidly became the standard for many pharmaceutical manufacturers and OMCLs for batch release testing. Consequently, the distribution of monomers, dimers and polymers was established as the basis for the interpretation of the results of the molecular-size distribution test in the relevant monographs. 
However, modern columns with a smaller particle size provide better resolution and also reveal a class of components designated here as oligomers. This publication addresses the interpretation of the SEC test for Ig with respect to the following questions:
- how can molecular-size distribution tests benefit from the use of the most recent column technology without changing the sense of well-established quality parameters?
- is it possible to mathematically define a way to interpret chromatograms generated with various column types with the same fractionation range but different resolution power?
- how should oligomers be considered regarding compliance with compendial specifications?
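As an illustrative sketch (not from the publication), reporting peak areas as percentages of the total, and pooling a newly resolved oligomer peak back into one of the classical three fractions, can be expressed as follows. The peak areas and the pooling rule below are assumptions for illustration only; how oligomers should be pooled is exactly the question the paper raises.

```python
def area_percentages(peak_areas):
    """Normalize integrated peak areas to percent of total area."""
    total = sum(peak_areas.values())
    return {name: 100.0 * a / total for name, a in peak_areas.items()}

# A high-resolution column resolves an extra 'oligomer' peak (hypothetical areas)
high_res = {"polymers": 1.2, "oligomers": 0.8, "dimers": 8.0, "monomers": 90.0}

# One possible mapping back to the classical three-fraction report: pool
# oligomers with polymers. This is a conservative assumption, not the
# monographs' prescription.
classical = {
    "polymers": high_res["polymers"] + high_res["oligomers"],
    "dimers": high_res["dimers"],
    "monomers": high_res["monomers"],
}
pct = area_percentages(classical)
```

The pooling choice determines whether a batch near a compendial aggregate limit passes or fails, which is why a mathematically defined interpretation rule matters.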
Yoneyama, Takeshi; Watanabe, Tetsuyo; Kagawa, Hiroyuki; Hayashi, Yutaka; Nakada, Mitsutoshi
2017-03-01
In photodynamic diagnosis using 5-aminolevulinic acid (5-ALA), discrimination between tumor and normal tissue is very important for a precise resection. However, it is difficult to distinguish between infiltrating tumor and normal regions in the boundary area. In this study, fluorescence intensity and bright spot analyses using a confocal microscope are proposed for the precise discrimination between infiltrating tumor and normal regions. From the 5-ALA-resected brain tumor tissue, the red fluorescent and marginal regions were sliced for observation under a confocal microscope. Hematoxylin and eosin (H&E) staining was performed on serial slices of the same tissue. Guided by the pathological inspection of the H&E slides, the tumor, infiltrating, and normal regions on the confocal microscopy images were investigated. From the fluorescence intensity of the image pixels, a histogram of the number of pixels at each intensity level was obtained. The sizes and total number of fluorescent bright spots were compared between the marginal and normal regions. The fluorescence intensity distribution and average intensity in the tumor differed from those in the normal region. The probability of a difference from the dark background enhanced the distinction between the tumor and the normal region. The bright spot size and number in the infiltrating tumor differed from those in the normal region. Fluorescence intensity analysis is useful for distinguishing a tumor region, and bright spot analysis is useful for distinguishing between infiltrating tumor and normal regions. These methods will be important for the precise resection or photodynamic therapy of brain tumors. Copyright © 2016 Elsevier B.V. All rights reserved.
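As a minimal sketch (not from the paper), the pixel-intensity histogram and per-region statistics described above can be computed with NumPy. The image patches, dark-level percentile, and all numeric values below are hypothetical stand-ins for real confocal data.

```python
import numpy as np

def intensity_histogram(image, bins=256, value_range=(0, 255)):
    """Histogram of pixel counts per fluorescence intensity level."""
    counts, edges = np.histogram(image.ravel(), bins=bins, range=value_range)
    return counts, edges

def region_stats(image):
    """Mean intensity and fraction of pixels above a crude dark reference."""
    pixels = image.ravel().astype(float)
    dark_level = np.percentile(pixels, 10)   # assumed dark-background estimate
    return pixels.mean(), np.mean(pixels > dark_level)

# Synthetic stand-ins for confocal image patches (hypothetical data)
rng = np.random.default_rng(0)
tumor = rng.normal(120, 30, size=(64, 64)).clip(0, 255)
normal = rng.normal(60, 15, size=(64, 64)).clip(0, 255)

mean_t, frac_t = region_stats(tumor)
mean_n, frac_n = region_stats(normal)
```

With real data, the same histogram comparison would be applied to tumor, infiltrating, and normal regions identified from the H&E slides.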
On the scaling of the distribution of daily price fluctuations in the Mexican financial market index
NASA Astrophysics Data System (ADS)
Alfonso, Léster; Mansilla, Ricardo; Terrero-Escalante, César A.
2012-05-01
In this paper, a statistical analysis of log-return fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of daily data covering the period 04/09/2000-04/09/2010 was analyzed and fitted to different distributions. Goodness-of-fit tests were performed in order to quantitatively assess the quality of the estimation. Special attention was paid to the impact of the sample size on the estimated decay of the distribution's tail. In this study a forceful rejection of normality was obtained. On the other hand, the null hypothesis that the log-fluctuations follow an α-stable Lévy distribution cannot be rejected at the 5% significance level.
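A minimal sketch of the kind of normality test reported above, assuming SciPy is available. A Student-t sample stands in for heavy-tailed daily log-returns (the IPC data themselves are not reproduced here), and the Kolmogorov-Smirnov test with fitted parameters is only an approximate version of a formal goodness-of-fit procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-in for daily log-returns: Student-t(3) is heavy-tailed like market data
returns = 0.01 * rng.standard_t(df=3, size=2500)

# Fit a normal distribution and test the fit (KS with fitted parameters
# is approximate; the paper uses formal goodness-of-fit tests)
mu, sigma = stats.norm.fit(returns)
ks_stat, p_norm = stats.kstest(returns, "norm", args=(mu, sigma))

# Compare tail heaviness via excess kurtosis (0 for a normal distribution)
kurt = stats.kurtosis(returns)
```

For a heavy-tailed sample of this size the normal hypothesis is rejected decisively, mirroring the "forceful rejection of normality" the abstract reports.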
X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe
NASA Technical Reports Server (NTRS)
Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jermemiah P.; Stone, James M.
1994-01-01
A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box of size 85 h^-1 Mpc, with cell size 0.31 h^-1 Mpc, is followed in a simulation with 270^3 ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, σ_8 = 1.05, Ω_b = 0.06, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjunction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temperature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper), with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray-emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.
Characterization of an aluminum alloy hemispherical shell fabricated via direct metal laser melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holesinger, T. G.; Carpenter, J. S.; Lienert, T. J.; ...
2016-01-11
The ability of additive manufacturing to directly fabricate complex shapes provides characterization challenges for part qualification. The orientation of the microstructures produced by these processes will change relative to the surface normal of a complex part. In this work, the microscopy and x-ray tomography of an AlSi10Mg alloy hemispherical shell fabricated using powder bed metal additive manufacturing are used to illustrate some of these challenges. The shell was manufactured using an EOS M280 system in combination with EOS-specified powder and process parameters. The layer-by-layer process of building the shell with the powder bed additive manufacturing approach results in a position-dependent microstructure that continuously changes its orientation relative to the shell surface normal. X-ray tomography was utilized to examine the position-dependent size and distribution of porosity and surface roughness in the 98.6% dense part. Optical and electron microscopy were used to identify global and local position-dependent structures, grain morphologies, chemistry, and precipitate sizes and distributions. The rapid solidification processes within the fusion zone (FZ) after the laser transit results in a small dendrite size. Cell spacings taken from the structure in the middle of the FZ were used with published relationships to estimate a cooling rate of ~9 × 10^5 K/s. Uniformly-distributed, nanoscale Si precipitates were found within the primary α-Al grains. A thin, distinct boundary layer containing larger α-Al grains and extended regions of the nanocrystalline divorced eutectic material surrounds the FZ. Moreover, subtle differences in the composition between the latter layer and the interior of the FZ were noted with scanning transmission electron microscopy (STEM) spectral imaging.
Characterization of an Aluminum Alloy Hemispherical Shell Fabricated via Direct Metal Laser Melting
NASA Astrophysics Data System (ADS)
Holesinger, T. G.; Carpenter, J. S.; Lienert, T. J.; Patterson, B. M.; Papin, P. A.; Swenson, H.; Cordes, N. L.
2016-03-01
The ability of additive manufacturing to directly fabricate complex shapes provides characterization challenges for part qualification. The orientation of the microstructures produced by these processes will change relative to the surface normal of a complex part. In this work, the microscopy and x-ray tomography of an AlSi10Mg alloy hemispherical shell fabricated using powder bed metal additive manufacturing are used to illustrate some of these challenges. The shell was manufactured using an EOS M280 system in combination with EOS-specified powder and process parameters. The layer-by-layer process of building the shell with the powder bed additive manufacturing approach results in a position-dependent microstructure that continuously changes its orientation relative to the shell surface normal. X-ray tomography was utilized to examine the position-dependent size and distribution of porosity and surface roughness in the 98.6% dense part. Optical and electron microscopy were used to identify global and local position-dependent structures, grain morphologies, chemistry, and precipitate sizes and distributions. The rapid solidification processes within the fusion zone (FZ) after the laser transit results in a small dendrite size. Cell spacings taken from the structure in the middle of the FZ were used with published relationships to estimate a cooling rate of ~9 × 10^5 K/s. Uniformly-distributed, nanoscale Si precipitates were found within the primary α-Al grains. A thin, distinct boundary layer containing larger α-Al grains and extended regions of the nanocrystalline divorced eutectic material surrounds the FZ. Subtle differences in the composition between the latter layer and the interior of the FZ were noted with scanning transmission electron microscopy (STEM) spectral imaging.
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
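For reference, the (unmodified) Ergun equation discussed above, and the low-Reynolds permeability implied by its viscous term, can be sketched as follows. The fluid properties and bed parameters are illustrative values, not those of the paper.

```python
def ergun_pressure_gradient(u, d_p, eps, mu=1.0e-3, rho=1.0e3):
    """Pressure gradient (Pa/m) through a packed bed via the Ergun equation:
    a viscous (Blake-Kozeny) term plus an inertial (Burke-Plummer) term."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

def kozeny_carman_permeability(d_p, eps):
    """Low-Reynolds permeability implied by the Ergun viscous term."""
    return eps ** 3 * d_p ** 2 / (150.0 * (1 - eps) ** 2)

# Darcy check at low velocity: dP/dx should reduce to mu*u/k
u, d_p, eps, mu = 1e-6, 5e-4, 0.38, 1.0e-3   # SI units, illustrative values
gradient = ergun_pressure_gradient(u, d_p, eps, mu=mu)
k = kozeny_carman_permeability(d_p, eps)
```

At small Reynolds numbers the inertial term is negligible and the Ergun prediction collapses onto Darcy's law, which is the regime where the paper reports deviations and proposes a modified form.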
Quantal basis of vesicle growth and information content, a unified approach.
Nitzany, Eyal; Hammel, Ilan; Meilijson, Isaac
2010-09-07
Secretory vesicles express a periodic multimodal size distribution. The successive modes are integral multiples of the smallest mode (G(1)). The vesicle content ranges from macromolecules (proteins, mucopolysaccharides and hormones) to low-molecular-weight molecules (neurotransmitters). A steady-state model has been developed to emulate a mechanism in which vesicles of monomer size are introduced, grow by a unit-addition mechanism, G(1)+G(n)-->G(n+1), and are eliminated from the system at a later stage. We describe a model of growth and elimination transition rates which adequately illustrates the distributions of vesicle population size at steady state and upon elimination. Consequently, prediction of normal behavior and pathological perturbations is feasible. Careful analysis of spontaneous secretion, as compared to short burst-induced secretion, suggests that the basic character code for reliable communication should be a burst of only 8-10 vesicles, which may serve as a yes/no message. Copyright 2010 Elsevier Ltd. All rights reserved.
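A toy numerical version of the unit-addition scheme G(1)+G(n)-->G(n+1) can be integrated to a steady state; the rate constants and time step below are arbitrary illustrative choices, not the paper's fitted transition rates.

```python
import numpy as np

def steady_state_sizes(J=1.0, k=0.05, mu=0.02, n_max=30, dt=0.05, steps=20000):
    """Toy unit-addition model: monomer influx J, growth G(1)+G(n) -> G(n+1)
    at rate k*n1*n_i, first-order elimination mu; integrated to steady state."""
    n = np.zeros(n_max)                  # n[i] = population of size-(i+1) vesicles
    for _ in range(steps):
        growth = k * n[0] * n            # rate at which each size grows one unit
        dn = -mu * n - growth            # elimination plus loss to the next size
        dn[1:] += growth[:-1]            # gain of size i+1 from size i
        dn[0] += J - growth.sum()        # influx minus monomers consumed as partners
        n = np.maximum(n + dt * dn, 0.0)
    return n

pop = steady_state_sizes()
```

In this simplified model the steady-state populations decay geometrically with size; reproducing the observed multimodal structure requires the fuller growth/elimination rate model described in the paper.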
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The method selects wavelengths that make the measurement signals sensitive to wavelength and reduce the ill-conditioning of the coefficient matrix of the linear system, thereby enhancing the robustness of the retrieval to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
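SciPy exposes an LSQR implementation (`scipy.sparse.linalg.lsqr`) that can illustrate this style of non-parametric retrieval. The extinction kernel below is a hypothetical smooth stand-in for the ADA kernel, and the size and wavelength grids are arbitrary; only the inversion pattern follows the abstract.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Discretized diameters and an assumed log-normal "true" ASD
d = np.linspace(0.1, 2.0, 15)                      # particle diameter, μm
x_true = np.exp(-0.5 * ((np.log(d) - np.log(0.5)) / 0.4) ** 2)
x_true /= x_true.sum()

# Hypothetical smooth extinction kernel: one row per measurement wavelength
lam = np.linspace(0.4, 1.0, 30)                    # wavelengths, μm
A = np.array([[1.0 - np.cos(2 * np.pi * di / li) * np.exp(-di / li)
               for di in d] for li in lam])

b = A @ x_true                                     # noise-free synthetic signals
x_est = lsqr(A, b, atol=1e-12, btol=1e-12, conlim=1e12, iter_lim=10000)[0]
rel_residual = np.linalg.norm(A @ x_est - b) / np.linalg.norm(b)
```

With noisy signals and a genuinely ill-conditioned kernel, the recovered distribution degrades, which is why the paper pairs LSQR with a wavelength selection step that reduces the condition number.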
Zhang, Wei; Peng, Peng; Kuang, Yun; Yang, Jiaxin; Cao, Dongyan; You, Yan; Shen, Keng
2016-03-01
Cellular exosomes are involved in many disease processes and have the potential to be used for diagnosis and treatment. In this study, we compared the characteristics of exosomes derived from human ovarian epithelial cells (HOSEPiC) and three epithelial ovarian cancer cell lines (OVCAR3, IGROV1, and ES-2) to investigate the differences between exosomes originating from normal and malignant cells. Two established colloid-chemical methodologies, electron microscopy (EM) and dynamic light scattering (DLS), and a relatively new method, nanoparticle tracking analysis (NTA), were used to measure the size and size distribution of exosomes. The concentration and epithelial cellular adhesion molecule (EpCAM) expression of exosomes were measured by NTA. Quantum dots were conjugated with anti-EpCAM to label exosomes, and the labeled exosomes were detected by NTA in fluorescent mode. The normal-cell-derived exosomes were significantly larger than those derived from malignant cells, and exosomes were successfully labeled using anti-EpCAM-conjugated quantum dots. Exosomes from different cell lines may vary in size, and exosomes might be considered as potential diagnosis biomarkers. NTA can be considered a useful, efficient, and objective method for the study of different exosomes and their unique properties in ovarian cancer.
McCorkle, Ryan; Thomas, Brittany; Suffaletto, Heidi; Jehle, Dietrich
2010-11-01
To establish normative parameters of the spleen by ultrasonography in tall athletes. Prospective cohort observational study. University of Buffalo, Erie County Community College, University of Texas at Tyler, and Austin College. Sixty-six athletes enrolled and finished the study. Height requirements were at least 6 feet 2 inches for men and at least 5 feet 7 inches for women. Measurement of spleen size in tall athletes. Ultrasound measurements of spleen size in tall athletes were compared with "normal-sized" controls from the literature. Mean, SD, and variance were used to characterize the sample distribution, and a one-sample t test compared measurements in tall athletes with historical measurements in the average-height population. Statistical significance was defined as P < 0.05. Mean height was 192.26 cm (SD, ± 6.52) for men and 176.54 cm (SD, ± 5.19) for women. Mean splenic measurements for all subjects were 12.19 cm (SD, ± 1.45) for spleen length, 8.88 cm (SD, ± 0.96) for spleen width, and 5.55 cm (SD, ± 0.76) for spleen thickness. The study mean for spleen length was 12.192 cm (95% confidence interval, 11.835-12.549) and the population mean was 8.94 cm (two-tailed t test, P < 0.01). In this population of tall athletes, normal spleen size was significantly larger than the normal spleen size of an average individual. In the clinical arena, it can be difficult to know when tall athletes with splenomegaly from infectious mononucleosis can safely return to contact sports. Previously, there has not been a sufficient "norm" for this population, but this study helps to establish baseline values.
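The one-sample t test described above can be sketched with SciPy. The sample below is synthetic, generated to match the reported mean and SD (12.19 ± 1.45 cm, n = 66); it is not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical spleen-length sample matching the reported summary statistics (cm)
sample = rng.normal(12.19, 1.45, size=66)

# Two-tailed one-sample t test against the literature "average-height" mean
t_stat, p_value = stats.ttest_1samp(sample, popmean=8.94)
```

With a mean difference of over 3 cm against an SD of 1.45 cm, the test rejects overwhelmingly, consistent with the reported P < 0.01.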
Kinetic energy distribution of multiply charged ions in Coulomb explosion of Xe clusters.
Heidenreich, Andreas; Jortner, Joshua
2011-02-21
We report on the calculations of kinetic energy distribution (KED) functions of multiply charged, high-energy ions in Coulomb explosion (CE) of an assembly of elemental Xe(n) clusters (average size (n) = 200-2171) driven by ultra-intense, near-infrared, Gaussian laser fields (peak intensities 10^15-4 × 10^16 W cm^-2, pulse lengths 65-230 fs). In this cluster size and pulse parameter domain, outer ionization is incomplete/vertical, incomplete/nonvertical, or complete/nonvertical, with CE occurring in the presence of nanoplasma electrons. The KEDs were obtained from double averaging of single-trajectory molecular dynamics simulation ion kinetic energies. The KEDs were doubly averaged over a log-normal cluster size distribution and over the laser intensity distribution of a spatial Gaussian beam, which constitutes either a two-dimensional (2D) or a three-dimensional (3D) profile, with the 3D profile (when the cluster beam radius is larger than the Rayleigh length) usually being experimentally realized. The general features of the doubly averaged KEDs manifest the smearing out of the structure corresponding to the distribution of ion charges, a marked increase of the KEDs at very low energies due to the contribution from the persistent nanoplasma, a distortion of the KEDs and of the average energies toward lower energy values, and the appearance of long low-intensity high-energy tails caused by the admixture of contributions from large clusters by size averaging. The doubly averaged simulation results account reasonably well (within 30%) for the experimental data for the cluster-size dependence of the CE energetics and for its dependence on the laser pulse parameters, as well as for the anisotropy in the angular distribution of the energies of the Xe(q+) ions. Possible applications of this computational study include a control of the ion kinetic energies by the choice of the laser intensity profile (2D/3D) in the laser-cluster interaction volume.
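The size-averaging step described above can be sketched numerically: weight a per-cluster observable by a log-normal size distribution. The E ~ n^(2/3) scaling is a generic Coulomb-explosion estimate used purely for illustration, and the distribution parameters are invented; the paper's double averaging also includes the laser intensity profile, which is omitted here.

```python
import numpy as np

def lognormal_pdf(n, mean=901.0, sigma_g=1.5):
    """Log-normal cluster-size pdf with arithmetic mean `mean` and
    geometric standard deviation sigma_g (assumed parameters)."""
    s = np.log(sigma_g)
    mu = np.log(mean) - 0.5 * s * s
    return np.exp(-0.5 * ((np.log(n) - mu) / s) ** 2) / (n * s * np.sqrt(2 * np.pi))

# Illustrative per-cluster mean ion energy: E ~ n^(2/3) for pure Coulomb explosion
n = np.linspace(50.0, 10000.0, 5000)
dn = n[1] - n[0]
w = lognormal_pdf(n)
w /= w.sum() * dn                        # renormalize on the truncated grid
E_single = n ** (2.0 / 3.0)
E_avg = (w * E_single).sum() * dn        # size-averaged mean energy
E_at_mean = 901.0 ** (2.0 / 3.0)         # energy of a cluster of the mean size
```

Because n^(2/3) is concave, the size-averaged energy falls slightly below the single-cluster value at the mean size, while the log-normal tail contributes the long high-energy tails noted in the abstract.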
Polarized Optical Scattering Measurements of Metallic Nanoparticles on a Thin Film Silicon Wafer
NASA Astrophysics Data System (ADS)
Liu, Cheng-Yang; Liu, Tze-An; Fu, Wei-En
2009-09-01
Light scattering has shown powerful diagnostic capability for characterizing optical-quality surfaces. In this study, the theory of the bidirectional reflectance distribution function (BRDF) was used to analyze the sizes of metallic nanoparticles on wafer surfaces. The BRDF of a surface is defined as the angular distribution of radiance scattered by the surface normalized by the irradiance incident on the surface. A goniometric optical scatter instrument has been developed to perform polarized-light BRDF measurements on wafer surfaces for the diameter and distribution measurements of metallic nanoparticles. The designed optical scatter instrument is capable of distinguishing various types of optical scattering characteristics, which correspond to the diameters of the metallic nanoparticles near surfaces, by using the Mueller matrix calculation. The measured metallic nanoparticles have a diameter of 60 nm on 2-inch thin-film wafers. These measurement results demonstrate that the polarization of light scattered by metallic particles can be used to determine the size of metallic nanoparticles on silicon wafers.
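The BRDF definition quoted above (scattered radiance normalized by incident irradiance) implies an energy normalization that is easy to check numerically for the ideal diffuse (Lambertian) case. This is a standard textbook example, not the paper's metallic-particle BRDF.

```python
import numpy as np

def lambertian_brdf(albedo):
    """Ideal diffuse BRDF: radiance/irradiance = albedo/pi, angle-independent."""
    return albedo / np.pi

# Directional-hemispherical reflectance: integrate f * cos(theta) over the
# hemisphere; dOmega = sin(theta) dtheta dphi, and the phi integral gives 2*pi
theta = np.linspace(0.0, np.pi / 2, 2000)
dtheta = theta[1] - theta[0]
albedo = 0.6
f = lambertian_brdf(albedo)
reflectance = (2 * np.pi * f * np.cos(theta) * np.sin(theta) * dtheta).sum()
```

The cosine-weighted hemispherical integral recovers the albedo, confirming that the 1/pi factor makes the Lambertian BRDF energy-conserving.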
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
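A minimal sketch of the normal-setting optimization, assuming SciPy: the expected cost of a cutoff is written from the two class densities and minimized over a grid. The parameter values are invented, and the sampling-uncertainty term of the paper's cost function is omitted, so this shows only the decision-cost part.

```python
import numpy as np
from scipy import stats

def expected_cost(threshold, mu0, sd0, mu1, sd1, prev=0.5, c_fp=1.0, c_fn=1.0):
    """Expected misclassification cost when subjects above `threshold` are
    called diseased; classes are normal(mu0, sd0) and normal(mu1, sd1)."""
    fp = 1.0 - stats.norm.cdf(threshold, mu0, sd0)   # non-diseased above cutoff
    fn = stats.norm.cdf(threshold, mu1, sd1)         # diseased below cutoff
    return (1 - prev) * c_fp * fp + prev * c_fn * fn

# Grid search for the cost-minimizing threshold (assumed parameter values)
grid = np.linspace(-2, 6, 4001)
costs = [expected_cost(t, mu0=0.0, sd0=1.0, mu1=3.0, sd1=1.0) for t in grid]
t_opt = grid[int(np.argmin(costs))]
```

With equal costs, equal prevalences, and equal variances the optimum sits at the midpoint of the two means; unequal costs or prevalences shift it toward the cheaper error.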
NASA Astrophysics Data System (ADS)
Neto, José Antônio Baptista; Gingele, Franz Xaver; Leipe, Thomas; Brehme, Isa
2006-04-01
Ninety-two surface sediment samples were collected in Guanabara Bay, one of the most prominent urban bays in SE Brazil, to investigate the spatial distribution of anthropogenic pollutants. The concentrations of heavy metals, organic carbon and particle size were examined in all samples. Large spatial variations of heavy metals and particle size were observed. The highest concentrations of heavy metals were found in the muddy sediments from the north-western region of the bay near the main outlets of the most polluted rivers, municipal waste drainage systems and one of the major oil refineries. Another anomalous concentration of metals was found adjacent to Rio de Janeiro Harbour. The heavy metal concentrations decrease to the northeast, due to intact rivers and mangrove systems in this area, and to the south, where the sand fraction and open-marine processes dominate. The geochemical normalization of metal data to Li or Al has also demonstrated that the anthropogenic input of heavy metals has altered the natural sediment heavy metal distribution.
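The normalization to a conservative element (Li or Al) mentioned above is commonly expressed as an enrichment factor; a sketch follows. All concentrations below are hypothetical, chosen only to contrast a polluted and a clean sample.

```python
def normalized_enrichment(metal, ref, metal_background, ref_background):
    """Enrichment factor: (metal/ref in sample) / (metal/ref in background).
    Values well above 1 suggest anthropogenic input; near 1, natural levels."""
    return (metal / ref) / (metal_background / ref_background)

# Hypothetical Zn (mg/kg) and Al (mg/kg) concentrations for two sediment samples
ef_harbour = normalized_enrichment(metal=450.0, ref=6.0e4,
                                   metal_background=80.0, ref_background=7.0e4)
ef_marine = normalized_enrichment(metal=70.0, ref=6.5e4,
                                  metal_background=80.0, ref_background=7.0e4)
```

Normalizing to Al (or Li) corrects for grain-size effects, since fine muds naturally carry more metals than sands, so the enrichment factor isolates the anthropogenic signal.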
Kang, Seju; Jung, Jihyeun; Choe, Jong Kwon; Ok, Yong Sik; Choi, Yongju
2018-04-01
Particle size of biochar may strongly affect the kinetics of hydrophobic organic compound (HOC) sorption. However, challenges exist in characterizing the effect of biochar particle size on the sorption kinetics because of the wide size range of biochar. The present study suggests a novel method to determine a representative value that can be used to show the dependence of HOC sorption kinetics on biochar particle size on the basis of an intra-particle diffusion model. Biochars derived from three different feedstocks are ground and sieved to obtain three daughter products, each having a different size distribution. Phenanthrene sorption kinetics to the biochars are well described by the intra-particle diffusion model, with significantly greater sorption rates observed for finer-grained biochars. The time to reach 95% of equilibrium for phenanthrene sorption to biochar is reduced from 4.6-17.9 days for the original biochars to <1-4.6 days for the powdered biochars with <125 μm in size. A moderate linear correlation is found between the inverse square of the representative biochar particle radius, obtained from particle size distribution analysis, and the apparent phenanthrene sorption rates, determined from the sorption kinetics experiments and normalized to account for variation in the rate-determining factors other than the biochar particle radius. The results suggest that the representative biochar particle radius reasonably describes the dependence of HOC sorption rates on biochar particle size. Copyright © 2017 Elsevier B.V. All rights reserved.
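The intra-particle diffusion model underlying the correlation above implies that the time to reach 95% of equilibrium scales with the square of the representative particle radius. A sketch follows; the radii are invented and the 17.9-day figure is used only as an illustrative number consistent with the range quoted above.

```python
def scaled_t95(t95_ref, r_ref, r_new):
    """Intra-particle diffusion scaling: equilibration time t95 ~ r^2,
    so t95_new = t95_ref * (r_new / r_ref)^2."""
    return t95_ref * (r_new / r_ref) ** 2

# E.g. grinding from a 1000 μm to a 125 μm representative radius (hypothetical)
t_coarse = 17.9                          # days, illustrative value
t_fine = scaled_t95(t_coarse, r_ref=1000.0, r_new=125.0)
```

An eight-fold reduction in radius cuts the equilibration time by a factor of 64, which is why the powdered biochars equilibrate within days rather than weeks.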
NASA Astrophysics Data System (ADS)
Rodríguez-Climent, Sílvia; Alcaraz, Carles; Caiola, Nuno; Ibáñez, Carles; Nebra, Alfonso; Muñoz-Camarillo, Gloria; Casals, Frederic; Vinyoles, Dolors; de Sostoa, Adolfo
2012-12-01
Multimesh nylon gillnets were set in three Ebro Delta (North-East Spain) lagoons to determine mesh selectivity for the inhabiting fish community. Each gillnet consisted of a series of twelve panels of different mesh sizes (ranging from 5.0 to 55.0 mm bar length) randomly distributed. The SELECT method (Share Each Length's Catch Total) was used to estimate retention curves through five models: normal location, normal scale, gamma, log-normal and inverse Gaussian. Each model was fitted twice, under the assumptions of fishing effort equal and proportional to mesh size, but no differences were found between the approaches. A possible situation of overfishing in the lagoons, where artisanal fisheries operate with a low surveillance effort, was assessed using a vulnerable species inhabiting these brackish waters as a case study: the sand smelt, Atherina boyeri. The minimum size for its fishery has not been established, so it remains under uncontrolled exploitation. Therefore, a Minimum Landing Size (MLS) is proposed based on sexual maturity data. The importance of establishing an adequate MLS and regulating mesh sizes in order to respect natural maturation length is discussed, as well as other measures to improve A. boyeri fishery management.
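One of the SELECT selectivity models listed above, the normal-location model, assumes the modal retained length is proportional to mesh size with a spread common to all meshes. A sketch follows; the proportionality constant k1 and spread sigma are invented for illustration, whereas SELECT estimates them from the shared catch data.

```python
import numpy as np

def normal_location_retention(length, mesh, k1=4.0, sigma=2.0):
    """Normal-location selectivity curve: relative retention of a fish of
    given length in a mesh whose modal length is k1 * mesh."""
    return np.exp(-0.5 * ((length - k1 * mesh) / sigma) ** 2)

meshes = np.array([5.0, 10.0, 20.0, 40.0])      # bar lengths, mm (subset)
lengths = np.linspace(0.0, 200.0, 401)          # fish lengths, mm
curves = np.array([normal_location_retention(lengths, m) for m in meshes])
```

Each panel thus samples a different slice of the length distribution, which is what lets the SELECT likelihood disentangle population structure from gear selectivity.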
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
The Influence of Finite-size Sources in Acousto-ultrasonics
NASA Technical Reports Server (NTRS)
Pavlakovic, Brian N.; Rose, Joseph L.
1994-01-01
This work explores the effects that the finite normal axisymmetric traction loading of an infinite isotropic plate has on wave propagation in acousto-ultrasonics (AU), in which guided waves are created using two normal incidence transducers. Although the work also addresses the effects of the transducer pressure distribution and pulse shape, this thesis concentrates on two main questions: how does the transducer's diameter control the phase velocity and frequency spectrum of the response, and how does the plate thickness relate to the plate's excitability? The mathematics of the time-harmonic solution and the physical principles and the practical considerations for AU wave generation are explained. Transient sources are modeled by the linear superposition of the time-harmonic solutions found using the Hankel transform and they are then compared to experimental data to provide insight into the relation between the size of the transducer and the preferred phase velocity.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
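The contrast between the classical and a kernel-type ("tau") standard-error estimator can be sketched with a Bartlett-weighted estimator on an autocorrelated series. The paper's estimator is spatial; this one-dimensional AR(1) version, with invented parameters, only illustrates why the classical estimator understates the sample-mean variability under positive local dependence.

```python
import numpy as np

def classical_se(x):
    """Classical sample-mean standard error, valid for independent samples."""
    return x.std(ddof=1) / np.sqrt(len(x))

def kernel_se(x, tau):
    """Kernel ('tau'-style) standard error: adds autocovariances up to lag
    tau with Bartlett weights, consistent under local dependence."""
    n = len(x)
    xc = x - x.mean()
    var = xc @ xc / n
    for lag in range(1, tau + 1):
        w = 1.0 - lag / (tau + 1.0)                 # Bartlett taper
        var += 2.0 * w * (xc[:-lag] @ xc[lag:]) / n
    return np.sqrt(var / n)

# Positively autocorrelated AR(1) sample: dependence inflates the true SE
rng = np.random.default_rng(1)
n, rho = 4000, 0.8
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]

se_naive = classical_se(x)
se_tau = kernel_se(x, tau=20)
```

The kernel estimate is several times larger than the classical one here, so confidence intervals built from the classical estimator would be badly undersized.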
A continuum theory of grain size evolution and damage
NASA Astrophysics Data System (ADS)
Ricard, Y.; Bercovici, D.
2009-01-01
Lithospheric shear localization, as occurs in the formation of tectonic plate boundaries, is often associated with diminished grain size (e.g., mylonites). Grain size reduction is typically attributed to dynamic recrystallization; however, theoretical models of shear localization arising from this hypothesis are problematic because (1) they require the simultaneous action of two creep mechanisms (diffusion and dislocation creep) that occur in different deformation regimes (i.e., in different regions of grain size-stress space) and (2) the grain growth ("healing") laws employed by these models are derived from normal grain growth or coarsening theory, which is valid in the absence of deformation, although the shear localization setting itself requires deformation. Here we present a new first-principles grained-continuum theory, which accounts for both coarsening and damage-induced grain size reduction in a monomineralic assemblage undergoing irrecoverable deformation. Damage per se is the generic process for generation of microcracks, defects, dislocations (including recrystallization), subgrains, nuclei, and cataclastic breakdown of grains. The theory contains coupled macroscopic continuum mechanical and grain-scale statistical components. The continuum level of the theory considers standard mass, momentum, and energy conservation, as well as entropy production, on a statistically averaged grained continuum. The grain-scale element of the theory describes the evolution of the grain size distribution and mechanisms for both continuous grain growth and discontinuous grain fracture and coalescence.
The continuous and discontinuous processes of grain size variation are prescribed by nonequilibrium thermodynamics (in particular, the treatment of entropy production provides the phenomenological laws for grain growth and reduction); grain size evolution thus incorporates the free energy differences between grains, including both grain boundary surface energy (which controls coarsening) and the contribution of deformational work to these free energies (which controls damage). In the absence of deformation, only two mechanisms that increase the average grain size are allowed by the second law of thermodynamics. One mechanism, involving continuous diffusive mass transport from small to large grains, captures the essential components of the normal grain growth theories of Lifshitz-Slyozov and Hillert. The second mechanism involves the aggregation of grains and is described using a Smoluchowski formalism. With the inclusion of deformational work and damage, the theory predicts two mechanisms for which the thermodynamic requirement of entropy positivity always forces large grains to shrink and small ones to grow. The first such damage-driven mechanism, involving continuous mass transfer from large to small grains, tends to homogenize the distribution of grain size toward its initial mean grain size. The second damage mechanism favors the creation of small grains by discontinuous division of larger grains and reduces the mean grain size with time. When considered separately, most of these mechanisms allow for self-similar grain size distributions whose scales (i.e., statistical moments such as the mean, variance, and skewness) can all be described by a single grain scale, such as the mean or maximum. 
However, the combination of mechanisms, e.g., one that captures the competition between continuous coarsening and mean grain size reduction by breakage, does not generally permit a self-similar solution for the grain size distribution, which contradicts the classic assumption that grain growth laws allowing for both coarsening and recrystallization can be treated with a single grain scale such as the mean size.
Scattering from Rock and Rock Outcrops
2013-09-30
whose orientations and size distributions reflect the internal fault organization of the bedrock. A mathematical model of the leeward side of an...scattering from facets oriented close to normal incidence to the sonar system. Diffraction from sharp edges may also contribute strong scattering that is...collected in a recent field experiment and are currently being analyzed. Figure 5 shows PhD student Derek Olson alongside the photogrammetry system
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria
2012-01-01
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel
2018-02-08
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. 
Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
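A minimal numeric sketch of the value-of-information idea above: expected societal utility as a function of trial size, with a normal prior on the treatment effect and a fixed one-sided test. All names and parameter values (prior, costs, significance level) are invented for illustration and are not those of the haemophilia A application.

```python
import numpy as np

# Assumed illustrative values (not taken from the paper)
sigma = 1.0            # outcome standard deviation
mu0, tau0 = 0.2, 0.3   # normal prior on the true treatment effect
cost = 0.05            # utility cost per patient treated
z = 1.96               # critical value for a one-sided alpha of 0.025

def expected_net_gain(n, pop, draws=40000):
    """Monte Carlo expected utility of a trial with n patients per arm
    serving a future population of size pop (a sketch of the VOI idea)."""
    rng = np.random.default_rng(1)               # common random numbers
    delta = rng.normal(mu0, tau0, draws)         # prior draws of the effect
    se = sigma * np.sqrt(2.0 / n)                # SE of the trial estimate
    est = rng.normal(delta, se)                  # simulated trial result
    approve = est / se > z                       # one-sided significance test
    # benefit to future patients if approved, net of their treatment cost
    future = np.where(approve, pop * (delta - cost), 0.0)
    return float((future - 2 * n * cost).mean()) # subtract trial cost

def optimal_n(pop):
    """Grid search for the utility-maximizing sample size per arm."""
    return max(range(10, 500, 10), key=lambda n: expected_net_gain(n, pop))
```

Consistent with the abstract's conclusion, the optimal sample size shrinks with the target population in this toy setup, since the value of better evidence scales with the number of future patients while trial costs do not.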
Modeling of stress distributions on the microstructural level in Alloy 600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozaczek, K.J.; Petrovic, B.G.; Ruud, C.O.
1995-04-01
Stress distribution in a random polycrystalline material (Alloy 600) was studied using a topologically correct microstructural model. Distributions of von Mises and hydrostatic stresses at the grain vertices, which could be important in intergranular stress corrosion cracking, were analyzed as functions of microstructure, grain orientations and loading conditions. Grain size, shape, and orientation had a more pronounced effect on stress distribution than loading conditions. At grain vertices the stress concentration factor was higher for hydrostatic stress (1.7) than for von Mises stress (1.5). The stress/strain distribution in the volume (grain interiors) is a normal distribution and does not depend on the location of the studied material volume, i.e., surface vs. bulk. The analysis of stress distribution in the volume showed a von Mises stress concentration of 1.75 and a stress concentration of 2.2 for the hydrostatic pressure. The observed stress concentration is high enough to cause localized plastic microdeformation, even when the polycrystalline aggregate is in the macroscopic elastic regime. Modeling of stresses and strains in polycrystalline materials can identify the microstructures (grain size distributions, texture) intrinsically susceptible to stress/strain concentrations and justify the correctness of the applied stress state during stress corrosion cracking tests. Also, it supplies the information necessary to formulate local failure criteria and to interpret nondestructive stress measurements.
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. 
Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
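The effect of a log-normal albedo scatter on optically inferred debris sizes can be illustrated with a forward Monte Carlo: each object's estimated size is its true size times a log-normal factor, so individual sizes are unreliable while the ensemble statistics remain tractable. The albedo parameters and the power-law size population below are assumptions for illustration, not the LMT calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative parameters (not NASA's calibration)
mu_lnA, s_lnA = np.log(0.1), 0.5            # log-normal geometric albedo
d_true = rng.pareto(2.0, 50000) + 0.01      # true sizes, power-law-like (m)

A = rng.lognormal(mu_lnA, s_lnA, d_true.size)
# Reflected flux scales as A * d^2, so a size inferred with a fixed
# reference albedo A0 is d_est = d_true * sqrt(A / A0)
A0 = np.exp(mu_lnA + s_lnA**2 / 2)          # mean albedo as the reference
d_est = d_true * np.sqrt(A / A0)

# Per-object log-size error: normal with std s_lnA / 2, independent of size
log_err = np.log(d_est / d_true)
```

The individual errors are large (roughly 25% scatter in log size here), but because the error factor is independent of true size, the population-level size distribution can still be recovered statistically, which is the point made in the abstract.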
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper applies these ideas to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
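A rough sketch of a mean-and-skewness ("third moment") adjustment: choose an effective degrees of freedom so that a reference chi-square matches the statistic's observed skewness, then rescale to match the mean. The code uses the fact that a chi-square with df degrees of freedom has skewness sqrt(8/df); the function name and details are illustrative, not Lin and Bentler's exact derivation.

```python
import numpy as np
from scipy import stats

def mean_skew_adjust(T):
    """Sketch of a mean-and-skewness adjustment for a test statistic sample T:
    pick an effective df so a chi-square matches T's skewness (chi-square
    skewness is sqrt(8/df)), then scale T so its mean equals that df."""
    skew = stats.skew(T)
    df_adj = 8.0 / skew**2           # invert skewness = sqrt(8/df)
    scale = df_adj / np.mean(T)      # scaling leaves skewness unchanged
    return scale * np.asarray(T), df_adj

# Demo: a statistic that is really a scaled chi-square with 5 df
T = 1.7 * stats.chi2.rvs(5, size=200000, random_state=3)
T_adj, df_adj = mean_skew_adjust(T)
```

After adjustment the statistic has both the mean and the skewness of a chi-square with df_adj degrees of freedom, so chi-square critical values at df_adj are a better reference than the nominal ones.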
Time Dependence of Aerosol Light Scattering Downwind of Forest Fires
NASA Astrophysics Data System (ADS)
Kleinman, L. I.; Sedlacek, A. J., III; Wang, J.; Lewis, E. R.; Springston, S. R.; Chand, D.; Shilling, J.; Arnott, W. P.; Freedman, A.; Onasch, T. B.; Fortner, E.; Zhang, Q.; Yokelson, R. J.; Adachi, K.; Buseck, P. R.
2017-12-01
In the first phase of BBOP (Biomass Burn Observation Project), a Department of Energy (DOE) sponsored study, wildland fires in the Pacific Northwest were sampled from the G-1 aircraft via sequences of transects that encountered emissions whose age (time since emission) ranged from approximately 15 minutes to four hours. Comparisons between transects allowed us to determine the near-field time evolution of trace gases, aerosol particles, and optical properties. The fractional increase in aerosol concentration with plume age was typically less than a third of the fractional increase in light scattering. In some fires the increase in light scattering exceeded a factor of two. Two possible causes for the discrepancy between scattering and aerosol mass are i) the downwind formation of refractory tar balls that are not detected by the AMS and therefore contribute to scattering but not to aerosol mass and ii) changes to the aerosol size distribution. Both possibilities are considered. Our information on tar balls comes from an analysis of TEM grids. A direct determination of size changes is complicated by extremely high aerosol number concentrations that caused coincidence problems for the PCASP and UHSAS probes. We instead construct a set of plausible log-normal size distributions and for each member of the set do Mie calculations to determine mass scattering efficiency (MSE), Ångström exponents, and backscatter ratios. Best fit size distributions are selected by comparison with observed data derived from multi-wavelength scattering measurements, an extrapolated FIMS size distribution, and mass measurements from an SP-AMS. MSE at 550 nm varies from a typical near source value of 2-3 to about 4 in aged air.
Rufeil-Fiori, Elena; Banchio, Adolfo J
2018-03-07
In lipid monolayers with phase coexistence, domains of the liquid-condensed phase always present size polydispersity. However, very few theoretical works consider size distribution effects on the monolayer properties. Because of the difference in surface densities, domains have excess dipolar density with respect to the surrounding liquid expanded phase, giving rise to a dipolar inter-domain interaction. This interaction depends on the domain area, and hence the presence of a domain size distribution is associated with interaction polydispersity. Inter-domain interactions are fundamental to understanding the structure and dynamics of the monolayer. For this reason, it is expected that polydispersity significantly alters monolayer properties. By means of Brownian dynamics simulations, we study the radial distribution function (RDF), the average mean square displacement and the average time-dependent self-diffusion coefficient, D(t), of lipid monolayers with normally distributed domain sizes. For this purpose, we vary the relevant system parameters, polydispersity and interaction strength, within a range of experimental interest. We also analyze the consequences of using a monodisperse model to determine the interaction strength from an experimental RDF. We find that polydispersity strongly affects the value of the interaction strength, which is greatly underestimated if polydispersity is not considered. However, within a certain range of parameters, the RDF obtained from a polydisperse model can be well approximated by that of a monodisperse model, by suitably fitting the interaction strength, even for 40% polydispersities. For small interaction strengths or small polydispersities, the polydisperse systems obtained from fitting the experimental RDF have an average mean square displacement and D(t) in good agreement with that of the monodisperse system.
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Growth hormone receptor deficiency (Laron syndrome): clinical and genetic characteristics.
Guevara-Aguirre, J; Rosenbloom, A L; Vaccarello, M A; Fielder, P J; de la Vega, A; Diamond, F B; Rosenfeld, R G
1991-01-01
Approximately 60 cases of GHRD (Laron syndrome) were reported before 1990 and half of these were from Israel. We have described 47 additional patients from an inbred population of South Ecuador and have emphasized certain clinical features including: markedly advanced osseous maturation for height age; normal body proportions in childhood but child-like proportions in adults; much greater deviation of stature than head size, giving an appearance of large cranium and small facies; underweight in childhood despite the appearance of obesity and true obesity in adulthood; blue scleras; and limited elbow extension. The Ecuadorean patients differed markedly and most importantly from the other large concentration, in Israel, by being of normal or superior intelligence, suggesting a unique linkage in the Ecuadorean population. The Ecuadorean population also differed in that those patients coming from Loja province had a markedly skewed sex ratio (19 females: 2 males), while those from El Oro province had a normal sex distribution (14 females: 12 males). The phenotypic similarity between the El Oro and Loja patients indicates that this abnormal sex distribution is not a direct result of the GHRD.
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is a determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable to identify potential districts which comprise a high percentage of building lots with narrow frontage after subdivision and to reconsider the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and the density of buildings and roads have not been fully researched. In this paper, based on an empirical study in the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the coefficient of variation of building lot frontage and the ratio of the number of building lot frontages to the number of buildings are approximately 0.60 and 1.19, respectively.
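The model comparison described above can be sketched as follows: fit both a log-normal and a gamma distribution to a frontage sample by maximum likelihood and compare log-likelihoods. The synthetic sample below is drawn from a log-normal whose sigma of about 0.55 reproduces the reported coefficient of variation near 0.60; the parameter values are assumptions, not the Tokyo estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic "frontage" sample (meters); the truth here is log-normal
frontage = rng.lognormal(mean=np.log(5.0), sigma=0.55, size=2000)

# Maximum-likelihood fits of both candidates, location fixed at zero
ln_shape, _, ln_scale = stats.lognorm.fit(frontage, floc=0)
g_shape, _, g_scale = stats.gamma.fit(frontage, floc=0)

# Compare goodness of fit via total log-likelihood
ll_ln = stats.lognorm.logpdf(frontage, ln_shape, 0, ln_scale).sum()
ll_g = stats.gamma.logpdf(frontage, g_shape, 0, g_scale).sum()

# Coefficient of variation, to compare with the ~0.60 reported
cv = frontage.std() / frontage.mean()
```

On real data the same comparison (here by raw log-likelihood; AIC adds a constant penalty since both models have two free parameters) is how one would check the paper's finding (1).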
Vadlja, Denis; Koller, Martin; Novak, Mario; Braunegg, Gerhart; Horvat, Predrag
2016-12-01
Statistical distribution of cell and poly[3-(R)-hydroxybutyrate] (PHB) granule size and number of granules per cell are investigated for PHB production in a five-stage cascade (5CSTR). Electron microscopic pictures of cells from individual cascade stages (R1-R5) were converted to binary pictures to visualize footprint areas for polyhydroxyalkanoate (PHA) and non-PHA biomass. Results for each stage were correlated to the corresponding experimentally determined kinetics (specific growth rate μ and specific productivity π). Log-normal distribution describes PHA granule size dissimilarity, whereas for R1 and R4, gamma distribution best reflects the situation. R1, devoted to balanced biomass synthesis, predominately contains cells with rather small granules, whereas with increasing residence time τ, maximum and average granule sizes by trend increase, approaching an upper limit determined by the cell's geometry. Generally, an increase of intracellular PHA content and ratio of granule to cell area slow down along the cascade. Further, the number of granules per cell decreases with increasing τ. Data for μ and π obtained by binary picture analysis correlate well with the experimental results. The work describes long-term continuous PHA production under balanced, transient, and nutrient-deficient conditions, as well as their reflection on the granules size, granule number, and cell structure on the microscopic level.
Impact of dose size in single fraction spatially fractionated (grid) radiotherapy for melanoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu, E-mail: hualinzhang@yahoo.com; Zhong, Hualiang; Barth, Rolf F.
2014-02-15
Purpose: To evaluate the impact of dose size in single fraction, spatially fractionated (grid) radiotherapy for selectively killing infiltrated melanoma cancer cells of different tumor sizes, using different radiobiological models. Methods: A Monte Carlo technique was employed to calculate the 3D dose distribution of a commercially available megavoltage grid collimator in a 6 MV beam. The linear-quadratic (LQ) and modified linear quadratic (MLQ) models were used separately to evaluate the therapeutic outcome of a series of single fraction regimens that employed grid therapy to treat both acute and late responding melanomas of varying sizes. The dose prescription point was at the center of the tumor volume. Dose sizes ranging from 1 to 30 Gy at 100% dose line were modeled. Tumors were either touching the skin surface or having their centers at a depth of 3 cm. The equivalent uniform dose (EUD) to the melanoma cells and the therapeutic ratio (TR) were defined by comparing grid therapy with the traditional open debulking field. The clinical outcomes from recent reports were used to verify the authors’ model. Results: Dose profiles at different depths and 3D dose distributions in a series of 3D melanomas treated with grid therapy were obtained. The EUDs and TRs for all sizes of 3D tumors involved at different doses were derived through the LQ and MLQ models, and a practical equation was derived. The EUD was only one fifth of the prescribed dose. The TR was dependent on the prescribed dose and on the LQ parameters of both the interspersed cancer and normal tissue cells. The results from the LQ model were consistent with those of the MLQ model. At 20 Gy, the EUD and TR by the LQ model were 2.8% higher and 1% lower than by the MLQ, while at 10 Gy, the EUD and TR as defined by the LQ model were only 1.4% higher and 0.8% lower, respectively. 
The dose volume histograms of grid therapy for a 10 cm tumor showed different dosimetric characteristics from those of conventional radiotherapy. A significant portion of the tumor volume received a very large dose in grid therapy, which ensures significant tumor cell killing in these regions. Conversely, some areas received a relatively small dose, thereby sparing interspersed normal cells and increasing radiation tolerance. The radiobiology modeling results indicated that grid therapy could be useful for treating acutely responding melanomas infiltrating radiosensitive normal tissues. The theoretical model predictions were supported by the clinical outcomes. Conclusions: Grid therapy functions by selectively killing infiltrating tumor cells and concomitantly sparing interspersed normal cells. The TR depends on the radiosensitivity of the cell population, dose, tumor size, and location. Because the volumes of very high dose regions are small, the LQ model can be used safely to predict the clinical outcomes of grid therapy. When treating melanomas with a dose of 15 Gy or higher, single fraction grid therapy is clearly advantageous for sparing interspersed normal cells. The existence of a threshold fraction dose, which was found in the authors’ theoretical simulations, was confirmed by clinical observations.
Fractional Fourier transform of Lorentz-Gauss vortex beams
NASA Astrophysics Data System (ADS)
Zhou, GuoQuan; Wang, XiaoGang; Chu, XiuXiang
2013-08-01
An analytical expression for a Lorentz-Gauss vortex beam passing through a fractional Fourier transform (FRFT) system is derived. The influences of the order of the FRFT and the topological charge on the normalized intensity distribution, the phase distribution, and the orbital angular momentum density of a Lorentz-Gauss vortex beam in the FRFT plane are examined. The order of the FRFT controls the beam spot size, the orientation of the beam spot, the spiral direction of the phase distribution, the spatial orientation of the two peaks in the orbital angular momentum density distribution, and the magnitude of the orbital angular momentum density. The increase of the topological charge not only results in the dark-hollow region becoming large, but also brings about detail changes in the beam profile. The spatial orientation of the two peaks in the orbital angular momentum density distribution and the phase distribution also depend on the topological charge.
Measurement of stress distributions in truck tyre contact patch in real rolling conditions
NASA Astrophysics Data System (ADS)
Anghelache, Gabriel; Moisescu, Raluca
2012-12-01
Stress distributions on three orthogonal directions have been measured across the contact patch of truck tyres using the complex measuring system that contains a transducer assembly with 30 sensing elements placed in the road surface. The measurements have been performed in straight line, in real rolling conditions. Software applications for calibration, data acquisition, and data processing were developed. The influence of changes in inflation pressure and rolling speed on the shapes and sizes of truck tyre contact patch has been shown. The shapes and magnitudes of normal, longitudinal, and lateral stress distributions, measured at low speed, have been presented and commented. The effect of wheel toe-in and camber on the stress distribution results was observed. The paper highlights the impact of the longitudinal tread ribs on the shear stress distributions. The ratios of stress distributions in the truck tyre contact patch have been computed and discussed.
NASA Astrophysics Data System (ADS)
Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.
2017-12-01
A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as, hydraulic fracture and understanding induced seismicity (among others). Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial slip asperities along a nominally flat frictional interface. Shear behavior of our partial slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose location and sizes were directly measured using a pressure sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities that were randomly distributed in space. Synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. Traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to the theoretical estimates of multi-contact interfaces with non-interacting asperities.
Evaluation of Low-Gravity Smoke Particulate for Spacecraft Fire Detection
NASA Technical Reports Server (NTRS)
Urban, David; Ruff, Gary A.; Mulholland, George; Meyer, Marit; Yuan, Zeng guang; Cleary, Thomas; Yang, Jiann; Greenberg, Paul; Bryg, Victoria
2013-01-01
Tests were conducted on the International Space Station to evaluate the smoke particulate size from materials and conditions that are typical of those expected in spacecraft fires. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The effective transport time to the measurement instruments was varied from 11 to 800 seconds to simulate different smoke transport conditions in spacecraft. The resultant aerosol was evaluated by three instruments which measured different moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter and, by assuming a log-normal distribution, the geometric mean diameter and the geometric standard deviations were also calculated. Smoke particle samples were collected on TEM grids using a thermal precipitator for post flight analysis. The TEM grids were analyzed to determine the particle morphology and shape parameters. The different materials produced particles with significantly different morphologies. Overall the majority of the average smoke particle sizes were found to be in the 200 to 400 nanometer range, with the quiescent cases and the cases with increased transport time typically producing substantially larger particles. The results varied between materials but the smoke particles produced in low gravity were typically twice the size of particles produced in normal gravity. 
These results can be used to establish design requirements for future spacecraft smoke detectors.
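The moment-combination step described above has a closed form if the size distribution is assumed log-normal: the count mean diameter M1/M0 and the diameter of average mass (M3/M0)^(1/3) together determine the geometric mean diameter and geometric standard deviation (Hatch-Choate-type relations). The sketch below verifies this on a synthetic particle population; the specific sizes are illustrative, not the flight data.

```python
import numpy as np

def lognormal_from_moments(M0, M1, M3):
    """Recover geometric mean diameter and GSD from the zeroth, first, and
    third moments of a particle size distribution, assuming log-normality.
    For a log-normal, E[d^k] = gmd^k * exp(k^2 * s^2 / 2) with s = ln(GSD)."""
    cmd = M1 / M0                  # count mean diameter = gmd * exp(s^2/2)
    dam = (M3 / M0) ** (1 / 3)     # diameter of average mass = gmd * exp(3s^2/2)
    s2 = np.log(dam / cmd)         # ratio isolates exp(s^2)
    gmd = cmd * np.exp(-s2 / 2)    # geometric mean diameter
    gsd = np.exp(np.sqrt(s2))      # geometric standard deviation
    return gmd, gsd

# Demo: synthetic smoke with gmd = 300 nm, GSD = 1.8
rng = np.random.default_rng(5)
d = rng.lognormal(np.log(300.0), np.log(1.8), 200000)
M0, M1, M3 = len(d), d.sum(), (d**3).sum()
gmd, gsd = lognormal_from_moments(M0, M1, M3)
```

This is why three moment instruments suffice: two independent moment ratios pin down both parameters of the assumed log-normal.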
Oliveira, Marcos L S; Navarro, Orlando G; Crissien, Tito J; Tutikian, Bernardo F; da Boit, Kátia; Teixeira, Elba C; Cabello, Juan J; Agudelo-Castañeda, Dayana M; Silva, Luis F O
2017-10-01
Multiple factors determine coal fly ash geochemistry: (1) boiler and pollution control system design parameters, (2) temperature of the flue gas at the collection point, (3) the geochemistry of the feed coal and of other fuels such as petroleum coke, tires and biomass, and (4) homogeneity of the fuel feed particle size distribution, maintenance of pulverisers, etc. Even though there is a large number of hazardous element pollutants in the coal-processing industry, investigations of micrometer- and nanometer-sized particles, including their aqueous colloid formation reactions and their behaviour on entering the environment, are relatively few in number. X-ray diffraction (XRD), high-resolution transmission electron microscopy (HR-TEM) with energy dispersive spectroscopy (EDS) and selected-area electron diffraction (SAED), field emission scanning electron microscopy (FE-SEM)/EDS, and granulometric distribution analysis were used as an integrated characterization tool box to determine both the geochemistry and the nanomineralogy of coal fly ashes (CFAs) from Brazil's largest coal power plant. The ultrafine/nanoparticle size distribution of coal combustion emissions was estimated during the tests. In addition, iron and silicon together accounted for 54.6% of the 390 different particles observed by electron beam, indicating that minerals of these two elements dominate the particles normally released to the environment. These data may help future investigations assess human health effects related to nanoparticles. Copyright © 2017 Elsevier Inc. All rights reserved.
Stem cells are dispensable for lung homeostasis but restore airways after injury.
Giangreco, Adam; Arwert, Esther N; Rosewell, Ian R; Snyder, Joshua; Watt, Fiona M; Stripp, Barry R
2009-06-09
Local tissue stem cells have been described in airways of the lung, but their contribution to normal epithelial maintenance is currently unknown. We therefore developed aggregation chimera mice and a whole-lung imaging method to determine the relative contributions of progenitor (Clara) and bronchiolar stem cells to epithelial maintenance and repair. In normal and moderately injured airways, chimeric patches were small in size and not associated with previously described stem cell niches. This finding suggested that single, randomly distributed progenitor cells maintain normal epithelial homeostasis. In contrast, we found that repair following severe lung injury resulted in the generation of rare, large clonal cell patches that were associated with stem cell niches. This study provides evidence that epithelial stem cells are dispensable for normal airway homeostasis. We also demonstrate that stem cell activation and robust clonal cellular expansion occur only during repair from severe lung injury.
About normal distribution on SO(3) group in texture analysis
NASA Astrophysics Data System (ADS)
Savyolova, T. I.; Filatov, S. V.
2017-12-01
This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. Those NDs are: Fisher normal distribution (FND), Bunge normal distribution (BND), central normal distribution (CND) and wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on the SO(3) group. CND is a subcase of normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). WND is motivated by the CLT in R³ and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
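The WND construction (a normal draw in R³ mapped to SO(3)) lends itself to a short Monte Carlo sketch. The code below is an illustrative assumption of that scheme, not the authors' implementation: it draws a rotation vector from an isotropic N(0, σ²I) in R³ and exponentiates it onto SO(3) via the Rodrigues formula.

```python
import math
import random

def wrapped_normal_rotation(sigma):
    """Sample a rotation matrix from a wrapped-normal-style distribution
    on SO(3): draw a rotation vector from N(0, sigma**2 * I) in R^3 and
    map it to SO(3) with the exponential map (Rodrigues formula).
    sigma is the spread of the underlying Gaussian, in radians."""
    v = [random.gauss(0.0, sigma) for _ in range(3)]
    theta = math.sqrt(sum(x * x for x in v))
    if theta < 1e-12:  # degenerate draw: identity rotation
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (x / theta for x in v)  # unit rotation axis
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    # Rodrigues rotation matrix about axis k by angle theta
    return [
        [c + kx * kx * C,      kx * ky * C - kz * s, kx * kz * C + ky * s],
        [ky * kx * C + kz * s, c + ky * ky * C,      ky * kz * C - kx * s],
        [kz * kx * C - ky * s, kz * ky * C + kx * s, c + kz * kz * C],
    ]
```

Every sample is a proper rotation (orthogonal, determinant +1), and for small σ the samples concentrate near the identity, as a central distribution should.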
NASA Astrophysics Data System (ADS)
Leblanc, Sylvain G.
2002-12-01
A plant canopy gap-size analyzer, the Tracing Radiation and Architecture of Canopies (TRAC), developed by Chen and Cihlar [Appl. Opt. 34, 6211 (1995)] and commercialized by 3rd Wave Engineering (Nepean, Canada), has been used around the world to quantify the fraction of photosynthetically active radiation absorbed by plant canopies, the leaf area index (LAI), and canopy architectural parameters. The TRAC is walked under a canopy along transects to measure sunflecks that are converted into a gap-size distribution. A numerical gap-removal technique is performed to remove gaps that are not theoretically possible in a random canopy. The resulting reduced gap-size distribution is used to quantify the heterogeneity of the canopy and to improve LAI measurements. It is explicitly shown here that the original derivation of the clumping index was missing a normalization factor. For a very clumped canopy with a large gap fraction, the resulting LAI can be more than 100% smaller than previously estimated. A test case is used to demonstrate that the new clumping index derivation allows a more accurate change of LAI to be measured.
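The role of a clumping index in LAI retrieval can be illustrated with the standard Beer's-law gap-fraction relation. This is a generic textbook form, not the corrected TRAC derivation itself, and the leaf projection coefficient G = 0.5 (spherical leaf-angle distribution) is an assumption for illustration.

```python
import math

def lai_from_gap_fraction(gap_fraction, zenith_deg, clumping=1.0, G=0.5):
    """Estimate leaf area index from canopy gap fraction via Beer's law:
        LAI = -cos(theta) * ln(P) / (G * Omega)
    gap_fraction: probability of a gap (sunfleck), 0 < P <= 1
    zenith_deg:   view/solar zenith angle in degrees
    clumping:     clumping index Omega (1 for a random canopy; < 1 for a
                  clumped canopy, which therefore has a larger true LAI)
    G:            leaf projection coefficient (0.5 for a spherical
                  leaf angle distribution)."""
    theta = math.radians(zenith_deg)
    lai_effective = -math.cos(theta) * math.log(gap_fraction) / G
    return lai_effective / clumping
```

For the same measured gap fraction, a clumped canopy (Ω < 1) yields a proportionally larger LAI than the random-canopy estimate, which is why an error in the clumping index normalization propagates directly into LAI.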
Micro/Nano-pore Network Analysis of Gas Flow in Shale Matrix
Zhang, Pengwei; Hu, Liming; Meegoda, Jay N.; Gao, Shengyan
2015-01-01
The gas flow in shale matrix is of great research interest for optimized shale gas extraction. The gas flow in nano-scale pores may fall into flow regimes such as viscous flow, slip flow and Knudsen diffusion. A 3-dimensional nano-scale pore network model was developed to simulate dynamic gas flow and to describe the transient properties of flow regimes. The proposed pore network model accounts for the various size distributions and low connectivity of shale pores. The pore size, pore throat size and coordination number obey normal distributions, and the average values can be obtained from shale reservoir data. The gas flow regimes were simulated using an extracted pore network backbone. The numerical results show that apparent permeability is strongly dependent on pore pressure in the reservoir and on pore throat size, and is overestimated by low-pressure laboratory tests. As reservoir pressure decreases, viscous flow weakens, and slip flow and Knudsen diffusion gradually become the dominant flow regimes. The fingering phenomenon can be predicted by the micro/nano-pore network for gas flow, which provides an effective way to capture the heterogeneity of a shale gas reservoir. PMID:26310236
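The pressure dependence of the flow regime described above is conventionally captured by the Knudsen number Kn = λ/d, where λ is the gas mean free path (which grows as pore pressure drops) and d the pore throat diameter. The sketch below uses the standard kinetic-theory mean free path and the conventional regime boundaries; it is an illustration of the physics, not the authors' pore-network code, and the temperature and molecule diameter are assumed values typical of methane in a reservoir.

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def knudsen_number(pore_diameter_m, pressure_pa, temperature_k=350.0,
                   molecule_diameter_m=3.8e-10):
    """Knudsen number Kn = mean free path / pore diameter, with the
    kinetic-theory mean free path
        lambda = k*T / (sqrt(2) * pi * d_m**2 * p)."""
    mfp = BOLTZMANN * temperature_k / (
        math.sqrt(2.0) * math.pi * molecule_diameter_m ** 2 * pressure_pa)
    return mfp / pore_diameter_m

def flow_regime(kn):
    """Conventional flow-regime classification by Knudsen number."""
    if kn < 0.001:
        return "viscous"
    if kn < 0.1:
        return "slip"
    if kn < 10.0:
        return "transition"
    return "Knudsen diffusion"
```

For a 50 nm pore throat, dropping the pore pressure from 30 MPa toward 1 MPa raises Kn by more than an order of magnitude and moves the flow out of the viscous/slip range, consistent with the trend reported in the abstract.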
Preserved Network Metrics across Translated Texts
NASA Astrophysics Data System (ADS)
Cabatbat, Josephine Jill T.; Monsanto, Jica P.; Tapang, Giovanni A.
2014-09-01
Co-occurrence language networks based on Bible translations and the Universal Declaration of Human Rights (UDHR) translations in different languages were constructed and compared with random text networks. Among the considered network metrics, the network size, N, the normalized betweenness centrality (BC), and the average k-nearest neighbors, knn, were found to be the most preserved across translations. Moreover, similar frequency distributions of co-occurring network motifs were observed for translated texts networks.
Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P
2014-08-06
Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner poly(benzyl methacrylate) membrane surface, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.
Viscosity and transient electric birefringence study of clay colloidal aggregation.
Bakk, Audun; Fossum, Jon O; da Silva, Geraldo J; Adland, Hans M; Mikkelsen, Arne; Elgsaeter, Arnljot
2002-02-01
We study a synthetic clay suspension of laponite at different particle and NaCl concentrations by measuring stationary shear viscosity and transient electrically induced birefringence (TEB). On one hand, the viscosity data are consistent with the particles being spheres associated with a large amount of bound water. On the other hand, the viscosity data are also consistent with the particles being asymmetric, consistent with single laponite platelets associated with a very few monolayers of water. We analyze the TEB data by employing two different models of aggregate size (effective hydrodynamic radius) distribution: (1) a bidisperse model and (2) a log-normal distributed model. Both models fit, in the same manner, fairly well to the experimental TEB data, and they indicate that the suspension consists of polydisperse particles. The models also appear to confirm that the aggregates increase in size with increasing ionic strength. The smallest particles at low salt concentrations seem to be monomers and oligomers.
Ice recrystallization inhibition in ice cream by propylene glycol monostearate.
Aleong, J M; Frochot, S; Goff, H D
2008-11-01
The effectiveness of propylene glycol monostearate (PGMS) to inhibit ice recrystallization was evaluated in ice cream and frozen sucrose solutions. PGMS (0.3%) dramatically reduced ice crystal sizes in ice cream and in sucrose solutions frozen in a scraped-surface freezer before and after heat shock, but had no effect in quiescently frozen solutions. PGMS showed limited emulsifier properties by promoting smaller fat globule size distributions and enhanced partial coalescence in the mix and ice cream, respectively, but at a much lower level compared to conventional ice cream emulsifier. Low temperature scanning electron microscopy revealed highly irregular crystal morphology in both ice cream and sucrose solutions frozen in a scraped-surface freezer. There was strong evidence to suggest that PGMS directly interacts with ice crystals and interferes with normal surface propagation. Shear during freezing may be required for its distribution around the ice and sufficient surface coverage.
Estimation of Apollo Lunar Dust Transport using Optical Extinction Measurements
NASA Astrophysics Data System (ADS)
Lane, John E.; Metzger, Philip T.
2015-04-01
A technique to estimate mass erosion rate of surface soil during landing of the Apollo Lunar Module (LM) and total mass ejected due to the rocket plume interaction is proposed and tested. The erosion rate is proportional to the product of the second moment of the lofted particle size distribution N(D), and third moment of the normalized soil size distribution S(D), divided by the integral of S(D)·D²/v(D), where D is particle diameter and v(D) is the vertical component of particle velocity. The second moment of N(D) is estimated by optical extinction analysis of the Apollo cockpit video. Because of the similarity between mass erosion rate of soil as measured by optical extinction and rainfall rate as measured by radar reflectivity, traditional NWS radar/rainfall correlation methodology can be applied to the lunar soil case where various S(D) models are assumed corresponding to specific lunar sites.
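The erosion-rate expression above can be evaluated numerically once S(D) and v(D) are assumed. The sketch below is purely illustrative: it takes a hypothetical log-normal soil distribution S(D) and a caller-supplied velocity model v(D), then forms the ratio M2[N] · M3[S] / ∫ S(D)·D²/v(D) dD described in the abstract, treating the second moment of N(D) as a measured input and dropping the overall proportionality constant.

```python
import math

def lognormal_pdf(d, gmd, gsd):
    """Log-normal number distribution, used as a hypothetical soil
    size distribution S(D)."""
    s = math.log(gsd)
    return math.exp(-(math.log(d / gmd) ** 2) / (2.0 * s * s)) / (
        d * s * math.sqrt(2.0 * math.pi))

def _trapz(f, xs):
    """Trapezoidal quadrature of f over the grid xs."""
    return sum((xs[i + 1] - xs[i]) * (f(xs[i]) + f(xs[i + 1])) / 2.0
               for i in range(len(xs) - 1))

def erosion_rate_factor(m2_lofted, gmd, gsd, v, d_lo=1e-6, d_hi=1e-2, n=4000):
    """Evaluate  M2[N] * M3[S] / integral(S(D) * D**2 / v(D) dD)
    on a log-spaced diameter grid (diameters in metres).
    m2_lofted: measured second moment of the lofted distribution N(D)
    v:         callable v(D), vertical particle velocity."""
    xs = [d_lo * (d_hi / d_lo) ** (i / (n - 1)) for i in range(n)]
    m3 = _trapz(lambda d: lognormal_pdf(d, gmd, gsd) * d ** 3, xs)
    denom = _trapz(lambda d: lognormal_pdf(d, gmd, gsd) * d ** 2 / v(d), xs)
    return m2_lofted * m3 / denom
```

As a sanity check, with a constant v the ratio reduces analytically to v · M3/M2 of the soil distribution, which for a log-normal is GMD · exp(2.5 ln²GSD).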
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2015-04-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, and the radar estimated only 30% of the precipitation observed by rain gauges. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products.
Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
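The reflectivity-to-rain-rate step at the heart of this correction can be illustrated with a power-law Z-R relationship, Z = a·R^b. The coefficients below (a = 200, b = 1.6) are the classical Marshall-Palmer values, used here only as an example; the study derives event-specific coefficients from the normalized DSD.

```python
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert the power-law Z-R relation Z = a * R**b.
    dbz: radar reflectivity in dBZ, so Z = 10**(dbz/10) in mm^6 m^-3.
    Returns the rain rate R in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)
```

Because the relation is a power law, a fixed calibration bias in dBZ translates into a multiplicative bias in rain rate, which is why DSD-dependent (a, b) choices matter so much for event accumulations.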
Fire frequency, area burned, and severity: A quantitative approach to defining a normal fire year
Lutz, J.A.; Key, C.H.; Kolden, C.A.; Kane, J.T.; van Wagtendonk, J.W.
2011-01-01
Fire frequency, area burned, and fire severity are important attributes of a fire regime, but few studies have quantified the interrelationships among them in evaluating a fire year. Although area burned is often used to summarize a fire season, burned area may not be well correlated with either the number or ecological effect of fires. Using the Landsat data archive, we examined all 148 wildland fires (prescribed fires and wildfires) >40 ha from 1984 through 2009 for the portion of the Sierra Nevada centered on Yosemite National Park, California, USA. We calculated mean fire frequency and mean annual area burned from a combination of field- and satellite-derived data. We used the continuous probability distribution of the differenced Normalized Burn Ratio (dNBR) values to describe fire severity. For fires >40 ha, fire frequency, annual area burned, and cumulative severity were consistent in only 13 of 26 years (50%), but all pair-wise comparisons among these fire regime attributes were significant. Borrowing from long-established practice in climate science, we defined "fire normals" to be the 26 year means of fire frequency, annual area burned, and the area under the cumulative probability distribution of dNBR. Fire severity normals were significantly lower when they were aggregated by year compared to aggregation by area. Cumulative severity distributions for each year were best modeled with Weibull functions (all 26 years, r² ≥ 0.99; P < 0.001). Explicit modeling of the cumulative severity distributions may allow more comprehensive modeling of climate-severity and area-severity relationships. Together, the three metrics of number of fires, size of fires, and severity of fires provide land managers with a more comprehensive summary of a given fire year than any single metric.
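Fitting a Weibull function to a cumulative severity distribution, as done above for the dNBR data, can be sketched with a simple linearized least-squares fit (an illustrative method; the paper does not specify its fitting procedure). Taking ln(−ln(1−F)) = k·ln(x) − k·ln(λ) turns the two-parameter Weibull CDF into a straight line in (ln x, ln(−ln(1−F))) space.

```python
import math

def fit_weibull_cdf(xs, fs):
    """Least-squares fit of the Weibull CDF F(x) = 1 - exp(-(x/lam)**k)
    via the linearization ln(-ln(1 - F)) = k*ln(x) - k*ln(lam).
    xs: positive values; fs: empirical CDF values strictly in (0, 1).
    Returns (k, lam): the shape and scale parameters."""
    u = [math.log(x) for x in xs]
    w = [math.log(-math.log(1.0 - f)) for f in fs]
    n = len(xs)
    mu, mw = sum(u) / n, sum(w) / n
    k = (sum((ui - mu) * (wi - mw) for ui, wi in zip(u, w))
         / sum((ui - mu) ** 2 for ui in u))     # regression slope = shape
    lam = math.exp(mu - mw / k)                 # from the intercept
    return k, lam
```

On data generated exactly from a Weibull CDF the linearization is exact, so the fit recovers the true parameters; on empirical dNBR distributions it gives a quick starting estimate for a proper nonlinear fit.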
NASA Astrophysics Data System (ADS)
Biteau, J.; Giebels, B.
2012-12-01
Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
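The kinematic argument can be reproduced with a few lines of Monte Carlo. This is an illustrative sketch of the model class, with an assumed bulk Lorentz factor and boosting index: randomly oriented zones moving with Lorentz factor Γ have Doppler factor δ = 1/(Γ(1 − β cosθ)), and summing the boosted fluxes ∝ δ^p of many zones produces a strongly skewed, heavy-tailed total-flux distribution.

```python
import math
import random

def doppler_boosted_flux(gamma=10.0, p=3.0, n_zones=100):
    """Total flux from n_zones randomly oriented emitting regions.
    Each zone has unit intrinsic flux, boosted by delta**p with
        delta = 1 / (gamma * (1 - beta * cos(theta))),
    where cos(theta) is uniform on [-1, 1] (isotropic orientations)."""
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    total = 0.0
    for _ in range(n_zones):
        cos_t = random.uniform(-1.0, 1.0)
        delta = 1.0 / (gamma * (1.0 - beta * cos_t))
        total += delta ** p
    return total
```

A single zone's boosted flux follows a Pareto-like law (rare, nearly aligned zones dominate), so even the sum over many zones stays right-skewed, with the sample mean well above the median.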
Coverage dependent molecular assembly of anthraquinone on Au(111)
NASA Astrophysics Data System (ADS)
DeLoach, Andrew S.; Conrad, Brad R.; Einstein, T. L.; Dougherty, Daniel B.
2017-11-01
A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.
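The connection invoked here between multiplicative noise and log-normal statistics follows from the central limit theorem applied to the logarithm of a product. A minimal simulation (illustrative only, not the authors' pore-growth model):

```python
import math
import random

def multiplicative_growth(n_steps=200, seed_size=1.0):
    """Grow a quantity by independent random multiplicative factors.
    ln(size) is then a sum of i.i.d. terms, so by the central limit
    theorem the final size is asymptotically log-normally distributed."""
    size = seed_size
    for _ in range(n_steps):
        size *= random.uniform(0.9, 1.1)  # multiplicative noise
    return size
```

Over many realizations the logarithms of the final sizes are nearly symmetric (approximately normal) while the raw sizes are right-skewed, which is the signature used to diagnose a log-normal pore-area distribution.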
NASA Astrophysics Data System (ADS)
Ma, Xiaoping; Langelier, Brian; Gault, Baptiste; Subramanian, Sundaresa
2017-05-01
The role of Nb in normalized and tempered Ti-bearing 13Cr5Ni2Mo super martensitic stainless steel is investigated through in-depth characterization of the bimodal chemistry and size of Nb-rich precipitates/atomic clusters and Nb in solid solution. Transmission electron microscopy and atom probe tomography are used to analyze the samples and clarify precipitates/atom cluster interactions with dislocations and austenite grain boundaries. The effect of 0.1 wt pct Nb addition on the promotion of (Ti, Nb)N-Nb(C,N) composite precipitates, as well as the retention of Nb in solution after cooling to room temperature, are analyzed quantitatively. (Ti, Nb)N-Nb(C,N) composite precipitates with average diameters of approximately 24 ± 8 nm resulting from epitaxial growth of Nb(C,N) on pre-existing (Ti,Nb)N particles, with inter-particle spacing on the order of 205 ± 68 nm, are found to be associated with a mean austenite grain size of 28 ± 10 µm in the sample normalized at 1323 K (1050 °C). The calculated Zener limiting austenite grain size of 38 ± 13 µm is in agreement with the experimentally observed austenite grain size distribution. 0.08 wt pct Nb is retained in the as-normalized condition, which is able to promote Nb(C, N) atomic clusters at dislocations during tempering at 873 K (600 °C) for 2 hours, and increases the yield strength by 160 MPa, which is predicted to be close to the maximum achievable strengthening effect. Retention of solute Nb before tempering also leads to it preferentially combining with C and N to form Nb(C, N) atom clusters, which suppresses the occurrence of Cr- and Mo-rich carbides during tempering.
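The Zener limiting grain size quoted above can be illustrated with the classical pinning relation d_lim = 4r/(3f), where r is the mean pinning-particle radius and f their volume fraction. Both this generic coefficient and the volume fraction used in the example below are assumptions for illustration; the paper's exact calculation may use a refined form.

```python
def zener_limit_um(particle_diameter_nm, volume_fraction):
    """Classical Zener limiting grain size, d_lim = 4*r / (3*f),
    where r is the pinning-particle radius and f the particle volume
    fraction. Input diameter in nm; returns the limit in micrometres."""
    r_nm = particle_diameter_nm / 2.0
    return (4.0 * r_nm) / (3.0 * volume_fraction) / 1000.0
```

With the 24 nm composite precipitates reported above, an assumed volume fraction of roughly 4 × 10⁻⁴ reproduces a limiting grain size near the stated ~38 µm; increasing the pinning-phase fraction tightens the limit, which is the design lever of the Nb addition.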
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henager, Charles H.; Alvine, Kyle J.; Bliss, Mary
2014-10-01
A section of a vertical gradient freeze CZT boule approximately 2100 mm³ with a planar area of 300 mm² was prepared and examined using transmitted IR microscopy at various magnifications to determine the three-dimensional spatial and size distributions of Te-particles over large longitudinal and radial length scales. The boule section was approximately 50-mm wide by 60-mm in length by 7-mm thick and was doubly polished for TIR work. Te-particles were imaged through the thickness using extended focal imaging to locate the particles in thickness planes spaced 15-µm apart and then in the plane of the image using xy-coordinates of the particle center of mass, so that a true three-dimensional particle map was assembled for a 1-mm by 45-mm longitudinal strip and for a 1-mm by 50-mm radial strip. Te-particle density distributions were determined as a function of longitudinal and radial positions in these strips, and treating the particles as vertices of a network created a 3D image of the particle spatial distribution. Te-particles exhibited a multi-modal log-normal size density distribution that indicated a slight preference for increasing size with longitudinal growth time, while showing a pronounced cellular network structure throughout the boule that can be correlated to dislocation network sizes in CZT. Higher magnification images revealed a typical Rayleigh-instability pearl-string morphology with large and small satellite droplets. This study includes solidification experiments in small crucibles of 30:70 mixtures of Cd:Te to reduce the melting point below 1273 K (1000 °C). These solidification experiments were performed over a wide range of cooling rates and clearly demonstrated a growth instability with Te-particle capture that is suggested to be responsible for one of the peaks in the size distribution using size discrimination visualization.
The results are discussed with regard to a manifold Te-particle genesis history as 1) Te-particle direct capture from melt-solid growth instabilities, 2) Te-particle formation from dislocation core diffusion and the formation and breakup of Te-tubes, and 3) Te-particle formation due to classical nucleation and growth as precipitates.
Viscosity scaling in concentrated dispersions and its impact on colloidal aggregation.
Nicoud, Lucrèce; Lattuada, Marco; Lazzari, Stefano; Morbidelli, Massimo
2015-10-07
Gaining fundamental knowledge about diffusion in crowded environments is of great relevance in a variety of research fields, including reaction engineering, biology, pharmacy and colloid science. In this work, we determine the effective viscosity experienced by a spherical tracer particle immersed in a concentrated colloidal dispersion by means of Brownian dynamics simulations. We characterize how the effective viscosity increases from the solvent viscosity for small tracer particles to the macroscopic viscosity of the dispersion when large tracer particles are employed. Our results show that the crossover between these two regimes occurs at a tracer particle size comparable to the host particle size. In addition, it is found that data points obtained in various host dispersions collapse on one master curve when the normalized effective viscosity is plotted as a function of the ratio between the tracer particle size and the mean host particle size. In particular, this master curve was obtained by varying the volume fraction, the average size and the polydispersity of the host particle distribution. Finally, we extend these results to determine the size dependent effective viscosity experienced by a fractal cluster in a concentrated colloidal system undergoing aggregation. We include this scaling of the effective viscosity in classical aggregation kernels, and we quantify its impact on the kinetics of aggregate growth as well as on the shape of the aggregate distribution by means of population balance equation calculations.
Resilience-based optimal design of water distribution network
NASA Astrophysics Data System (ADS)
Suribabu, C. R.
2017-11-01
Optimal design of a water distribution network generally aims to minimize the capital cost of the investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but it may not be a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of the ability of a network to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the secondary objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two of the existing resilience indices and power efficiency are considered for optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to achieve enhanced resiliency indices.
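Normalizing incommensurate objectives before a weighted combination, as described above, can be sketched as follows. This is a generic min-max scheme under assumed bounds, not necessarily the paper's exact normalization: cost is minimized and resilience maximized, so resilience is flipped before the weighted sum, giving a single value where lower is better.

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, w_cost=0.5):
    """Scalarize a two-objective network design problem (minimize cost,
    maximize resilience) into one value to be minimized.
    cost_bounds, res_bounds: (min, max) ranges over candidate designs,
    used to min-max normalize both metrics to [0, 1].
    w_cost: weight on the cost objective; 1 - w_cost goes to resilience."""
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    c_norm = (cost - c_lo) / (c_hi - c_lo)            # 0 = cheapest
    r_norm = (resilience - r_lo) / (r_hi - r_lo)      # 1 = most resilient
    return w_cost * c_norm + (1.0 - w_cost) * (1.0 - r_norm)
```

Because both terms live on [0, 1], the weight w_cost directly expresses the designer's cost-versus-resilience trade-off, which is the point of normalizing objectives with distinct metrics before applying the weighted method.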
Experimental Rock-on-Rock Abrasive Wear Under Aqueous Conditions: its Role in Subglacial Abrasion
NASA Astrophysics Data System (ADS)
Rutter, E. H.; Lee, A. G.
2003-12-01
We have determined experimentally the rate of abrasive wear of rock on rock for a range of rock types as a function of normal stress and shear displacement. Unlike abrasive wear in fault zones, where wear products accumulate as a thickening gouge zone, in our experiments wear particles were removed by flowing water. The experiments are thus directly pertinent to one of the most important processes in subglacial erosion, and to some extent in river incision. Wear was produced between rotating discs machined from rock samples and measured from the progressive approach of the disc axes towards each other under various levels of normal load. Shear displacements of several km were produced. Optical and scanning electron microscopy were used to study the worn rock surfaces, and particle size distributions in wear products were characterized using a laser particle size analyzer. Rock types studied were sandstones of various porosities and cement characteristics, schists and a granite. In all cases abrasion rate decreased logarithmically with displacement by up to 2 orders of magnitude until a steady state was approached, but only after at least 1 km displacement. The more porous, less-well cemented rocks wore fastest. Amount of abrasion could be characterized quantitatively using an exponentially decaying plus a steady-state term. Wear rate increased non-linearly with normal contact stress, apparently to an asymptote defined by the unconfined compressive strength. Microstructural study showed that the well-cemented and/or lowest porosity rocks wore by progressive abrasion of grains without plucking, whereas whole grains were plucked out of weakly-cemented and/or more porous rocks. This difference in behavior was reflected in wear-product particle size distributions. Where whole-grain plucking was possible, wear products were dominated by particles of the original grain size rather than finer rock flour. 
Comparison of our results to glacier basal abrasive wear estimated from suspended sediment load (Findeln Glacier, Switzerland) showed that the steady-state experimental data seriously underestimate the natural wear rate. This suggests that continuous resetting of the subglacial surface occurs, so that wear remains continuously in the 'running-in' stage.
Veale, David; Miles, Sarah; Bramley, Sally; Muir, Gordon; Hodsoll, John
2015-06-01
To systematically review and create nomograms of flaccid and erect penile size measurements. Study key eligibility criteria: measurement of penis size by a health professional using a standard procedure; a minimum of 50 participants per sample. Exclusion criteria: samples with a congenital or acquired penile abnormality, previous surgery, or a complaint of small penis size or erectile dysfunction. Synthesis methods: calculation of a weighted mean and pooled standard deviation (SD) and simulation of 20,000 observations from the normal distribution to generate nomograms of penis size. Nomograms for flaccid pendulous length [n = 10,704, mean (SD) 9.16 (1.57) cm], stretched length [n = 14,160, mean (SD) 13.24 (1.89) cm], erect length [n = 692, mean (SD) 13.12 (1.66) cm], flaccid circumference [n = 9407, mean (SD) 9.31 (0.90) cm], and erect circumference [n = 381, mean (SD) 11.66 (1.10) cm] were constructed. The most consistent and strongest significant correlations were between height and flaccid stretched or erect length, ranging from r = 0.2 to 0.6. Relatively few erect measurements were conducted in a clinical setting, and the greatest variability between studies was seen with flaccid stretched length. Penis size nomograms may be useful in clinical and therapeutic settings to counsel men and for academic research. © 2014 The Authors. BJU International © 2014 BJU International.
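The nomogram-construction step described in this abstract can be sketched in a few lines: simulate 20,000 draws from a normal distribution with the pooled mean and SD (here the stretched-length values quoted above) and read off percentiles. The percentile choices are illustrative assumptions, not the paper's exact nomogram grid.

```python
import numpy as np

# Simulate 20,000 observations from a normal distribution with the pooled
# mean and SD for stretched flaccid length (13.24 cm, 1.89 cm, from the
# abstract), then tabulate percentiles to form the nomogram content.
rng = np.random.default_rng(0)
mean_cm, sd_cm = 13.24, 1.89
sample = rng.normal(mean_cm, sd_cm, size=20_000)

percentiles = [5, 25, 50, 75, 95]
table = {p: round(float(np.percentile(sample, p)), 2) for p in percentiles}
print(table)
```

With 20,000 draws the simulated percentiles track the analytic normal quantiles (mean ± z·SD) to within a few hundredths of a centimetre.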
Huang, Wenxia; Xu, Wangdong; Zhu, Ping; Yang, Hanwei; Su, Linchong; Tang, Huairong; Liu, Yi
2017-12-01
With socioeconomic growth and cultural changes in China, blood glucose levels may have changed in recent years. This study aims to characterize the blood glucose distribution in a large health-examination population. A total of 641,311 cases (360,259 males and 281,052 females) older than 18 years during 2007 to 2015 were recruited from the Health Examination Center at West China Hospital, Sichuan University. The percentage of cases with an abnormal glucose level and the mean glucose level increased significantly from 2007 to 2015 overall. The percentage of cases with an abnormal glucose level was significantly higher in males than in females every year, and higher in the older population than in the young population. In addition, among those with normal glucose levels, the mean glucose level was higher in the older population than in the young population, and higher in males than in females. The population showed an increasing level of blood glucose; preventive action should be adopted early, and more attention should be paid to these groups.
Brülle, Tine; Ju, Wenbo; Niedermayr, Philipp; Denisenko, Andrej; Paschos, Odysseas; Schneider, Oliver; Stimming, Ulrich
2011-12-06
Gold nanoparticles were prepared by electrochemical deposition on highly oriented pyrolytic graphite (HOPG) and boron-doped, epitaxial (100)-oriented diamond layers. Using a potentiostatic double-pulse technique, the average particle size was varied in the range from 5 nm to 30 nm on HOPG supports and between < 1 nm and 15 nm on diamond surfaces, while keeping the particle density constant. The distribution of particle sizes was very narrow, with standard deviations of around 20% on HOPG and around 30% on diamond. The electrocatalytic activity of these carbon-supported gold nanoparticles towards hydrogen evolution and oxygen reduction was investigated as a function of particle size using cyclic voltammetry. For oxygen reduction, the current density normalized to the gold surface (specific current density) increased with decreasing particle size. In contrast, the specific current density of hydrogen evolution showed no dependence on particle size. For both reactions, no effect of the different carbon supports on electrocatalytic activity was observed.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
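The kind of simulation check described in this abstract (does the nominal power match the empirical power for logistic regression with one normal covariate?) can be sketched directly. The effect size, baseline prevalence, and sample size below are illustrative assumptions, not the paper's values, and the Newton-Raphson fit is a standard textbook implementation rather than the authors' code.

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """Newton-Raphson fit of logit P(y=1) = b0 + b1*x; returns b1 and its SE."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])          # Fisher information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    cov = np.linalg.inv(H)
    return beta[1], np.sqrt(cov[1, 1])

def power(n, b0, b1, n_sim=400, seed=1):
    """Empirical power of the two-sided 5% Wald test for b1, by simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        y = (rng.random(n) < 1 / (1 + np.exp(-(b0 + b1 * x)))).astype(float)
        est, se = fit_logistic(x, y)
        hits += abs(est / se) > 1.96
    return hits / n_sim

# Example: log odds ratio 0.5 per SD of the covariate, baseline
# prevalence ~0.27 (b0 = -1), n = 200.
print(power(200, b0=-1.0, b1=0.5))
```

Comparing this empirical power against the value a closed-form sample-size formula predicts for the same inputs is exactly the style of check the abstract describes.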
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiotsuka, R.N.; Peck, R.W. Jr.; Drew, R.T.
1985-02-01
A fluidizing bed aerosol generator (FBG), designed for inhalation toxicity studies, was constructed and tested. A key design feature contributing to its operational stability was the partial masking of the screen supporting the bronze beads. This caused 20-80% of the bed to fluidize under normal operating conditions. The non-fluidizing areas functioned as reservoirs to feed the fluidizing areas. Using a bed volume of 1000 cc of bronze beads and 20 g of MnO₂ dust, the mass output rate ranged from 0.1 to 1.0 mg/min when operated at plenum pressures of 1.04 × 10² to 2.42 × 10² kPa (minimum fluidization pressure was approximately 82.8 kPa). During daily operation at three different output rates, the FBG produced aerosols with little change in particle size distributions or concentration when operated six hours/day for five days. Furthermore, when the FBG was operated at a fixed output rate for 15 days with two recharges of MnO₂ dust, the particle size distribution did not show any cumulative increase. Thus, long-term operation of this FBG should result in a reproducible range of concentration and particle size distribution.
Dou, Haiyang; Lee, Yong-Ju; Jung, Euo Chang; Lee, Byung-Chul; Lee, Seungho
2013-08-23
In field-flow fractionation (FFF), the 'steric transition' phenomenon occurs when the sample elution mode changes from the normal mode to the steric/hyperlayer mode. Accurate analysis by FFF requires an understanding of the steric transition phenomenon, particularly when the sample has a broad size distribution, for which the combined effect of different modes may become complicated to interpret. In this study, the steric transition phenomenon in asymmetrical flow FFF (AF4) was studied using polystyrene (PS) latex beads. The retention ratio (R) gradually decreases as the particle size increases (normal mode) and reaches a minimum (Ri) at a diameter around 0.5 μm, after which R increases with increasing diameter (steric/hyperlayer mode). It was found that the size-based selectivity (Sd) tends to increase as the channel thickness (w) increases. The retention behavior of cyclo-1,3,5-trimethylene-2,4,6-trinitramine (commonly called 'research department explosive' (RDX)) particles in AF4 was investigated by varying experimental parameters including w and flow rates. AF4 showed good reproducibility in size determination of RDX particles, with a relative standard deviation of 4.1%. The reliability of the separation obtained by AF4 was evaluated by transmission electron microscopy (TEM). Copyright © 2013 Elsevier B.V. All rights reserved.
Fractal Structures on Fe3O4 Ferrofluid: A Small-Angle Neutron Scattering Study
NASA Astrophysics Data System (ADS)
Giri Rachman Putra, Edy; Seong, Baek Seok; Shin, Eunjoo; Ikram, Abarrul; Ani, Sistin Ari; Darminto
2010-10-01
Small-angle neutron scattering (SANS), a powerful technique for revealing large-scale structures, was applied to investigate the fractal structures of a water-based Fe3O4 ferrofluid (magnetic fluid). The natural magnetite Fe3O4 from iron sand of several rivers in East Java Province, Indonesia, was extracted and purified using a magnetic separator. Four different ferrofluid concentrations, i.e. 0.5, 1.0, 2.0 and 3.0 Molar (M), were synthesized through a co-precipitation method and then dispersed in tetramethyl ammonium hydroxide (TMAH) as surfactant. The fractal aggregates in the ferrofluid samples were observed from their SANS scattering distributions, confirming the correlation with concentration. The mass fractal dimension changed from about 3 to 2 as the ferrofluid concentration increased, showing a change of slope in the intermediate scattering vector q range. The size of the primary magnetic particle as a building block was determined by fitting the scattering profiles with a log-normal sphere model calculation. The mean size of those magnetic particles is about 60-100 Å in diameter, with a particle size distribution σ = 0.5.
Jablonski, Paul D.; Larbalestier, David C.
1993-01-01
Superconductors formed by powder metallurgy have a matrix of niobium-titanium alloy with discrete pinning centers distributed therein which are formed of a compatible metal. The artificial pinning centers in the Nb-Ti matrix are reduced in size by processing steps to sizes on the order of the coherence length, typically in the range of 1 to 10 nm. To produce the superconductor, powders of body centered cubic Nb-Ti alloy and the second phase flux pinning material, such as Nb, are mixed in the desired percentages. The mixture is then isostatically pressed, sintered at a selected temperature and selected time to produce a cohesive structure having desired characteristics without undue chemical reaction, the sintered billet is reduced in size by deformation, such as by swaging, the swaged sample receives heat treatment and recrystallization and additional swaging, if necessary, and is then sheathed in a normal conducting sheath, and the sheathed material is drawn into a wire. The resulting superconducting wire has second phase flux pinning centers distributed therein which provide enhanced critical current density (J_c) due to the flux pinning effects.
3D brain tumor localization and parameter estimation using thermographic approach on GPU.
Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi
2018-01-01
The aim of this paper is to present a GPU parallel algorithm for brain tumor detection that estimates tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube containing a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved using a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) was used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function comparing measured surface temperatures to those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the bioheat transfer computation time and greatly accelerates the inverse identification of the brain tumor's thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, achieving a speedup of around 41 compared to the CPU. A performance analysis of the estimation as a function of tumor size inside the brain tissue is also presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1978-01-01
An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than did the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a Von Karman turbulence representation. These results showed that using the Von Karman representation generally resulted in a span averaging effect about 3 percent larger.
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings due to Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
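Property (1) above can be checked numerically by exact enumeration rather than by typing simulation: draw letter probabilities as spacings of the unit interval, enumerate all words up to a short length cutoff, and fit the log-log rank-probability slope. The alphabet size, space probability, and length cutoff below are illustrative assumptions.

```python
import numpy as np

# Monkey model: M letters with probabilities given by the spacings from a
# random division of [0,1]; a space (word boundary) occurs with probability
# q. A word's probability is proportional to the product of its letters'
# continuation weights (1-q)*p_i. We enumerate words up to length 4 and
# check that the rank-probability plot has log-log slope near -1.
rng = np.random.default_rng(7)
M, q = 26, 0.2

cuts = np.sort(rng.random(M - 1))
p_letters = np.diff(np.concatenate([[0.0], cuts, [1.0]]))  # spacings, sum to 1
c = (1 - q) * p_letters                                    # continuation weights

weights = [c]
for _ in range(3):                     # words of length 2, 3, 4
    weights.append(np.outer(weights[-1], c).ravel())
probs = np.sort(np.concatenate(weights))[::-1]

# Least-squares slope of log(probability) vs log(rank) over the top ranks,
# which are essentially unaffected by the length-4 truncation.
ranks = np.arange(1, 2001)
slope = np.polyfit(np.log(ranks), np.log(probs[:2000]), 1)[0]
print(round(slope, 2))  # near -1
```

The theoretical exponent for this finite alphabet is roughly -(1 + ln(1/(1-q))/ln M), which approaches -1 as M grows, consistent with property (1).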
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2014-05-01
A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Dong, Jing; Tearney, Guillermo J.; Pitris, Costas
2018-02-01
Catheter-based Optical Coherence Tomography (OCT) devices allow real-time and comprehensive imaging of the human esophagus. Hence, they provide the potential to overcome some of the limitations of endoscopy and biopsy, allowing earlier diagnosis and better prognosis for esophageal adenocarcinoma patients. However, the large number of images produced during every scan makes manual evaluation of the data exceedingly difficult. In this study, we propose a fully automated tissue characterization algorithm, capable of discriminating normal tissue from Barrett's Esophagus (BE) and dysplasia through entire three-dimensional (3D) data sets, acquired in vivo. The method is based on both the estimation of the scatterer size of the esophageal epithelial cells, using the bandwidth of the correlation of the derivative (COD) method, as well as intensity-based characteristics. The COD method can effectively estimate the scatterer size of the esophageal epithelium cells in good agreement with the literature. As expected, both the mean scatterer size and its standard deviation increase with increasing severity of disease (i.e. from normal to BE to dysplasia). The differences in the distribution of scatterer size for each tissue type are statistically significant, with a p value of < 0.0001. However, the scatterer size by itself cannot be used to accurately classify the various tissues. With the addition of intensity-based statistics the correct classification rates for all three tissue types range from 83 to 100% depending on the lesion size.
Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects
NASA Astrophysics Data System (ADS)
Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.
2013-08-01
In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described with log-normal modes within predetermined size ranges. Such a method requires numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction (ACI) coefficient, and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines, and a high-resolution sectional model that was considered the reference model. We analysed the relative differences of the parameters in different experiments that were designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes, and the subsequent numerical remapping of the distribution by reallocation, leads on average to underestimation of cloud droplet number concentration (by up to 100%) and overestimation of light extinction (by up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause on average around 10% overestimation of the ACI parameter. It is thus shown that reallocation affects a model's ability to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
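The log-normal mode at the heart of the modal approach, and the effect of cutting it at a fixed size-range boundary, can be illustrated numerically. This is a sketch, not the model code used in the paper; the mode parameters and the 100 nm boundary are assumptions chosen for illustration.

```python
import numpy as np

def dNdlnD(D, N, Dg, sigma_g):
    """Log-normal number size distribution density dN/dlnD in diameter D."""
    return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma_g)) ** 2))

def integrate(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Aitken-like mode: N = 1000 cm^-3, geometric mean diameter 50 nm,
# geometric standard deviation 1.8 (illustrative values).
D = np.logspace(0.0, 4.0, 20_000)          # 1 nm .. 10 um grid
lnD = np.log(D)
n = dNdlnD(D, 1000.0, 50.0, 1.8)

total = integrate(n, lnD)                  # recovers N (~1000 cm^-3)
below = integrate(np.where(D <= 100.0, n, 0.0), lnD)
print(round(total), round(100.0 * below / total, 1))
```

Roughly 12% of the mode's number lies beyond the 100 nm boundary here; a scheme that confines the mode to a predetermined size range must reallocate that tail, which is the kind of numerical remapping whose climatological consequences the paper quantifies.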
Using white-light spectroscopy for size determination of tissue phantoms
NASA Astrophysics Data System (ADS)
Vitol, Elina A.; Kurzweg, Timothy P.; Nabet, Bahram
2005-09-01
Along with breast and cervical cancer, esophageal adenocarcinoma is one of the most common types of cancers. The characteristic features of pre-cancerous tissues are the increase in cell proliferation rate and cell nuclei enlargement, which both take place in the epithelium of human body surfaces. However, in the early stages of cancer these changes are very small and difficult to detect, even for expert pathologists. The aim of our research is to develop an optical probe for in vivo detection of nuclear size changes using white light scattering from cell nuclei. The probe will be employed through an endoscope and will be used for the medical examination of the esophagus. The proposed method of examination will be noninvasive, cheap, and specific, compared to a biopsy. Before the construction of this probe, we have developed theory to determine the nuclei size from the reflection data. In this first stage of our research, we compare experimental and theoretical scattered light intensities. Our theoretical model includes the values of scatterer size from which we can extract the nuclei size value. We first performed the study of polystyrene microspheres, acting as a tissue phantom. Spectral and angular distributions of scattered white light from tissue phantoms were studied. Experimental results show significant differences between the spectra of microspheres of different sizes and demonstrate almost linear relation between the number of spectral oscillations and the size of microspheres. Best results were achieved when the scattered light spectrum was collected at 30° to the normal of the sample surface. We present these research results in this paper. In ongoing work, normal and cancerous mammalian cell studies are being performed in order to determine cell nuclei size correlation with the size of microspheres through the light scattering spectrum observation.
Influence of overconsolidated condition on permeability evolution in silica sand
NASA Astrophysics Data System (ADS)
Kimura, S.; Kaneko, H.; Ito, T.; Nishimura, O.; Minagawa, H.
2013-12-01
Permeability of sediments is an important factor for production of natural gas from gas hydrate-bearing layers. Methane hydrate is regarded as one of the potential resources of natural gas. Coring and logging results suggest the existence of a large amount of methane hydrate in the Nankai Trough, offshore central Japan, where many folds and faults have been observed. In the present study, we investigate the permeability of silica sand specimens forming an artificial fault zone after large-displacement shear in ring-shear tests under two conditions: normally consolidated and overconsolidated. No significant influence of the overconsolidation ratio (OCR) on permeability evolution is found. The permeability reduction is influenced a great deal by the magnitude of normal stress during large-displacement shearing. The grain size distribution and the structure of the shear zone after shearing at each normal stress level are analyzed by a laser-scattering particle size analyzer and scanning electron microscopy, respectively. It is indicated that grain size and porosity reduction due to particle crushing are the factors behind the permeability reduction. This study is financially supported by METI and the Research Consortium for Methane Hydrate Resources in Japan (the MH21 Research Consortium).
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
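The "step size between 0 and 2" condition has a simple one-parameter analogue that can be demonstrated directly. This toy example is not the mixture-model setting of the abstract: for the mean of a unit-variance normal sample, the scaled steepest-ascent iteration theta <- theta + (step/N)·sum(x - theta) = (1-step)·theta + step·x̄ contracts toward the MLE (the sample mean) exactly when 0 < step < 2.

```python
import numpy as np

# Steepest ascent on the normal log-likelihood for the mean, with the
# gradient scaled by 1/N. The error is multiplied by (1 - step) each
# iteration, so the scheme converges iff 0 < step < 2.
rng = np.random.default_rng(3)
x = rng.normal(1.5, 1.0, size=500)
xbar = x.mean()          # the maximum-likelihood estimate

def iterate(step, theta=0.0, iters=400):
    for _ in range(iters):
        theta = theta + (step / len(x)) * np.sum(x - theta)
    return theta

print(abs(iterate(0.5) - xbar) < 1e-8)   # inside (0, 2): converges
print(abs(iterate(1.9) - xbar) < 1e-6)   # inside (0, 2): converges, oscillating
print(abs(iterate(2.5) - xbar) > 1.0)    # outside (0, 2): diverges
```

Step sizes just below 2 overshoot and oscillate around the estimate but still contract, mirroring the abstract's claim that the optimal step size lies between 1 and 2.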
Anomalous quantum heat transport in a one-dimensional harmonic chain with random couplings.
Yan, Yonghong; Zhao, Hui
2012-07-11
We investigate quantum heat transport in a one-dimensional harmonic system with random couplings. In the presence of randomness, phonon modes may normally be classified as ballistic, diffusive or localized. We show that these modes can roughly be characterized by the local nearest-neighbor level spacing distribution, similarly to their electronic counterparts. We also show that the thermal conductance G(th) through the system decays rapidly with the system size (G(th) ∼ L(-α)). The exponent α strongly depends on the system size and can change from α < 1 to α > 1 with increasing system size, indicating that the system undergoes a transition from a heat conductor to a heat insulator. This result could be useful in thermal control of low-dimensional systems.
Counihan, T.D.; Miller, Allen I.; Parsley, M.J.
1999-01-01
The development of recruitment monitoring programs for age-0 white sturgeons Acipenser transmontanus is complicated by the statistical properties of catch-per-unit-effort (CPUE) data. We found that age-0 CPUE distributions from bottom trawl surveys violated assumptions of statistical procedures based on normal probability theory. Further, no single data transformation uniformly satisfied these assumptions because CPUE distribution properties varied with the sample mean (μ(CPUE)). Given these analytic problems, we propose that an additional index of age-0 white sturgeon relative abundance, the proportion of positive tows (Ep), be used to estimate sample sizes before conducting age-0 recruitment surveys and to evaluate statistical hypothesis tests comparing the relative abundance of age-0 white sturgeons among years. Monte Carlo simulations indicated that Ep was consistently more precise than μ(CPUE), and because Ep is binomially rather than normally distributed, surveys can be planned and analyzed without violating the assumptions of procedures based on normal probability theory. However, we show that Ep may underestimate changes in relative abundance at high levels and confound our ability to quantify responses to management actions if relative abundance is consistently high. If data suggest that most samples will contain age-0 white sturgeons, estimators of relative abundance other than Ep should be considered. Because Ep may also obscure correlations to climatic and hydrologic variables if high abundance levels are present in time series data, we recommend μ(CPUE) be used to describe relations to environmental variables. The use of both Ep and μ(CPUE) will facilitate the evaluation of hypothesis tests comparing relative abundance levels and correlations to variables affecting age-0 recruitment. 
Estimated sample sizes for surveys should therefore be based on detecting predetermined differences in Ep, but data necessary to calculate μ(CPUE) should also be collected.
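The precision comparison at the core of this abstract can be sketched with a small Monte Carlo experiment. The catch model (negative binomial) and all parameter values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Compare the relative precision (coefficient of variation across replicate
# surveys) of the proportion of positive tows (Ep) versus mean CPUE, for
# overdispersed catch counts.
rng = np.random.default_rng(11)
n_tows, n_surveys = 60, 2000

# Negative binomial catches: mean ~1 fish/tow, heavy overdispersion
# (small shape parameter k), a common model for trawl catch data.
k, mean = 0.3, 1.0
p = k / (k + mean)
catches = rng.negative_binomial(k, p, size=(n_surveys, n_tows))

ep = (catches > 0).mean(axis=1)          # proportion of positive tows
cpue = catches.mean(axis=1)              # mean catch per tow

cv = lambda a: a.std() / a.mean()
print(round(cv(ep), 3), round(cv(cpue), 3))  # Ep is the more precise index
```

Under this kind of overdispersion the binomial index Ep has a visibly smaller coefficient of variation than the mean CPUE, which is the behavior the abstract's simulations report.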
Coastal Benthic Boundary Layer (CBBL) Research Program
1998-09-01
…of gas volume and bubble size distribution on the basis of field seismo-acoustic signature remains. Indirect seismic evidence (large scale) of gas… regime was dominated by reversing tidal currents with typical speeds of 10 cm s⁻¹ or less. Maximum bed shear stresses remained too low to resuspend or… Waals attractive force are assumed to remain unchanged for separations less than the cut-off distance, and (2) the mechanical interparticle normal force
Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J
2011-06-01
Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure that affects this trait, and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes and sample sizes (using two-stage least squares regression on 10,000 data sets: Stage 1 is a regression of exposure on the variant; Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases but alleviates weak-IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
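The two-stage least squares design described above can be sketched in a minimal single-variant simulation. All effect sizes, the allele frequency, and the confounding structure are illustrative assumptions, not the paper's scenarios.

```python
import numpy as np

# One MR data set: biallelic variant G -> exposure X -> trait Y, with an
# unmeasured confounder U that biases the naive X-Y regression.
rng = np.random.default_rng(5)
n = 20_000
maf, bGX, bXY = 0.3, 0.3, 0.25           # allele freq, G->X and X->Y effects

G = rng.binomial(2, maf, size=n)         # genotype coded 0/1/2
U = rng.normal(size=n)                   # unmeasured confounder
X = bGX * G + U + rng.normal(size=n)     # exposure
Y = bXY * X + U + rng.normal(size=n)     # disease trait

# Stage 1: regress exposure on the variant; Stage 2: regress the trait on
# the fitted exposure.
Z = np.column_stack([np.ones(n), G])
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
b2sls = np.linalg.lstsq(np.column_stack([np.ones(n), Xhat]), Y,
                        rcond=None)[0][1]

# Naive (confounded) regression of Y on X, for comparison.
bnaive = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y,
                         rcond=None)[0][1]

# Approximate first-stage F statistic (instrument strength).
resid = X - Xhat
F = (np.var(X) - np.var(resid)) / (np.var(resid) / (n - 2))
print(round(b2sls, 2), round(bnaive, 2), round(F))
```

The 2SLS estimate recovers the true causal effect (0.25 here) while the naive regression is badly inflated by the confounder, and the first-stage F statistic quantifies the instrument strength that the abstract flags as critical.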
The effects of surface finish and grain size on the strength of sintered silicon carbide
NASA Technical Reports Server (NTRS)
You, Y. H.; Kim, Y. W.; Lee, J. G.; Kim, C. H.
1985-01-01
The effects of surface treatment and microstructure, especially abnormal grain growth, on the strength of sintered SiC were studied. The surfaces of sintered SiC were treated with 400, 800 and 1200 grit diamond wheels. Grain growth was induced by increasing the sintering time at 2050 C. The beta to alpha transformation occurred during the sintering of beta-phase starting materials and was often accompanied by abnormal grain growth. The overall strength distributions were established using Weibull statistics. The strength of the sintered SiC is limited by extrinsic surface flaws in normally sintered specimens: the finer the surface finish and grain size, the higher the strength. In abnormally sintered specimens, however, strength is limited by the abnormally grown large tabular grains. The Weibull modulus increases with decreasing grain size and decreasing grit size for grinding.
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization provides an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
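The binomial logic behind the 29-flaw demonstration is short enough to show in numbers. The specific POD values tried below are illustrative; the key fact is that if a procedure with true POD p detects all n flaws, that outcome has probability p**n, so observing it with p**n <= 0.05 demonstrates POD >= p at 95% confidence.

```python
# Why 29 flaws: a procedure with only 90% POD would pass a detect-all
# demonstration with probability 0.9**29, which just dips below 5%.
n = 29
print(round(0.90 ** n, 4))   # ~0.047 < 0.05, so 90% POD at 95% confidence

# Probability of passing the demonstration (PPD = all n flaws detected)
# for procedures better than the requirement:
for pod in (0.95, 0.98, 0.995):
    print(pod, round(pod ** n, 3))
```

This also shows why the optimization in the paper matters: even a procedure with a true POD of 95% passes a 29-of-29 demonstration only about a quarter of the time, so flaw sizes and acceptance criteria must be chosen to balance PPD against the demonstrated detection capability.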
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product-moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes, we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
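The first simulation result above (for normal data with a strong correlation, r_s is more variable than r_p) is easy to reproduce in miniature. The correlation, sample size, and replication count below are illustrative choices, not the paper's full grid.

```python
import numpy as np

# Sample bivariate normal data repeatedly and compare the sampling
# variability of Pearson r_p and Spearman r_s.
rng = np.random.default_rng(9)
rho, n, n_rep = 0.8, 50, 4000

def spearman(x, y):
    """Spearman r_s = Pearson correlation of the ranks (no ties here)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

cov = [[1.0, rho], [rho, 1.0]]
rp, rs = [], []
for _ in range(n_rep):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    rp.append(np.corrcoef(x, y)[0, 1])
    rs.append(spearman(x, y))

print(round(np.std(rp), 3), round(np.std(rs), 3))  # sd(r_s) > sd(r_p)
```

Repeating the exercise with heavy-tailed marginals (e.g. t-distributed data) reverses the ordering, matching the abstract's high-kurtosis finding.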
Prognostic significance of normal-sized ovary in advanced serous epithelial ovarian cancer.
Paik, E Sun; Kim, Ji Hye; Kim, Tae Joong; Lee, Jeong Won; Kim, Byoung Gie; Bae, Duk Soo; Choi, Chel Hun
2018-01-01
We compared survival outcomes of advanced serous-type epithelial ovarian cancer (EOC) patients with normal-sized ovaries and with enlarged ovarian tumors by propensity score matching analysis. The medical records of EOC patients treated at Samsung Medical Center between 2002 and 2015 were reviewed retrospectively. We investigated EOC patients with high-grade serous-type histology and International Federation of Gynecology and Obstetrics (FIGO) stage IIIB, IIIC, or IV who underwent primary debulking surgery (PDS) and adjuvant chemotherapy to identify patients with normal-sized ovaries. Propensity score matching was performed to compare patients with normal-sized ovaries to patients with enlarged ovarian tumors (ratio, 1:3) according to age, FIGO stage, initial cancer antigen (CA)-125 level, and residual disease status after PDS. Of the 419 EOC patients, 48 had normal-sized ovaries. Patients with enlarged ovarian tumors were younger (54.0±10.3 vs. 58.4±9.2 years, p=0.005) than those with normal-sized ovaries, and there was a statistically significant difference in residual disease status between the 2 groups. In the total cohort, with a median follow-up period of 43 months (range, 3-164 months), inferior overall survival (OS) was observed in the normal-sized ovary group (median OS, 71.2 vs. 41.4 months; p=0.003). After propensity score matching, the group with normal-sized ovaries showed inferior OS compared to the group with enlarged ovarian tumors (median OS, 72.1 vs. 41.4 months; p=0.031). In multivariate analysis for OS, normal-sized ovary remained a significant factor. Normal-sized ovary was associated with poor OS compared with the common presentation of enlarged ovaries in EOC, independent of CA-125 level or residual disease. Copyright © 2018. Asian Society of Gynecologic Oncology, Korean Society of Gynecologic Oncology
Analysis of Regolith Simulant Ejecta Distributions from Normal Incident Hypervelocity Impact
NASA Technical Reports Server (NTRS)
Edwards, David L.; Cooke, William; Suggs, Rob; Moser, Danielle E.
2008-01-01
The National Aeronautics and Space Administration (NASA) has established the Constellation Program, one of whose many goals is long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document NASA SP-8013, 'Meteoroid Environment Model Near Earth to Lunar Surface', was developed for the Apollo program in 1969 and contains the latest definition of the lunar ejecta environment. There is concern that NASA SP-8013 may overestimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant targets. Projectiles were accelerated to velocities between 2.45 and 5.18 km/s at normal incidence using the Ames Vertical Gun Range (AVGR). The ejected particles were detected by thin aluminum foil targets strategically placed around the impact site, and angular ejecta distributions were determined. The analysis assumes spherical symmetry of the ejecta resulting from normal impact and that all ejecta particles were of the mean target particle size. It produces a hemispherical flux density distribution of ejecta with sufficient velocity to penetrate the aluminum foil detectors.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaled statistic were developed under the presumption of non-normality in the factors and errors. This paper addresses the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods, and at very small sample sizes offers superior Type I error rates under a properly specified model. Data from Mardia, Kent, and Bibby's study of students tested for their ability in five content areas, with either open- or closed-book examinations, were used to illustrate the real-world performance of this statistic. PMID:23144511
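The idea of scaling the mean and adjusting the degrees of freedom can be illustrated by a rough moment-matching sketch (not the authors' exact statistic): match a scaled chi-square a*chi2(b) to the observed mean and skewness of a set of replicated test statistics, using the fact that chi2(b) has skewness sqrt(8/b). All numeric values below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew, chi2

def adjusted_reference(t_stats):
    """Match a scaled chi-square a*chi2(b) to the observed mean and
    skewness of a sample of test statistics (moment-matching sketch,
    not the paper's exact formula)."""
    g = skew(t_stats)
    b = 8.0 / g**2             # chi2(b) has skewness sqrt(8/b)
    a = np.mean(t_stats) / b   # scale so the mean matches a*b
    return a, b

rng = np.random.default_rng(1)
t = 1.7 * rng.chisquare(5, size=20000)   # synthetic skewed statistics
a, b = adjusted_reference(t)
print(a, b)   # should recover roughly scale 1.7 and df 5

# p-value for an observed statistic under the adjusted reference:
print(chi2.sf(12.0 / a, b))
```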
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
NASA Astrophysics Data System (ADS)
Zhang, Hui; Li, Zhifang; Li, Hui
2012-12-01
In order to study the scattering properties of normal and cancerous tissues from the human stomach, we collected images of human gastric specimens using a phase-contrast microscope. The images were processed by mathematical morphology, from which the equivalent particle size distribution of the tissues was obtained. Combined with Mie scattering theory, the scattering properties of the tissues can be calculated. Assuming that the scattering of light in biological tissue can be treated as separate scattering events by different particles, the total scattering properties can be taken as the sum of scattering by particles with different diameters. The results suggest that the scattering coefficient of cancerous tissue is significantly higher than that of normal tissue, and that the scattering phase function differs, especially in the backscattering region. These findings offer significant clinical benefit for diagnosing cancerous tissue.
NASA Astrophysics Data System (ADS)
Flynn, Ryan
2007-12-01
The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; it follows that optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes, contained within a base tumor region, were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size, proximity, and doses prescribed to the subvolumes, and the size of the phantom. IMXT and IMPT were also compared for a whole brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for a non-uniform dose prescription in a head and neck cancer patient based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT.
IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which resulted in the reduction of normal tissue integral dose by a factor of up to three relative to IMXT, and the complete sparing of organs at risk distal to the tumor region.
Substituting Normal and Waxy-Type Whole Wheat Flour on Dough and Baking Properties
Choi, Induck; Kang, Chun-Sik; Cheong, Young-Keun; Hyun, Jong-Nae; Kim, Kee-Jong
2012-01-01
Normal (cv. Keumkang, KK) and waxy-type (cv. Shinmichal, SMC) whole wheat flours were substituted at 20 and 40% for white wheat flour (WF) during bread dough formulation. The flour blends were subjected to dough and baking property measurements in terms of particle size distribution, dough mixing, bread loaf volume, and crumb firmness. The particle size of white wheat flour was the finest, with coarseness increasing as the level of whole wheat flour increased. Substitution of whole wheat flour decreased pasting viscosity, and all RVA parameters were lowest in the SMC40 composite flour. Water absorption was slightly higher with 40% whole wheat flour regardless of whether the wheat was normal or waxy. An increased mixing time was observed when higher levels of KK flour were substituted, but the opposite occurred when SMC flour was substituted at the same levels. Bread loaf volume was lower in breads containing a whole wheat flour substitution compared to bread containing only white wheat flour. No significant difference in bread loaf volume was observed between normal and waxy whole wheat flour, but bread crumb firmness was significantly lower in breads containing waxy flour. The results of these studies indicate that up to 40% whole wheat flour substitution could be considered a practical option with respect to functional qualities. Also, substituting waxy whole wheat flour rather than normal whole wheat flour has a positive effect on bread formulation in terms of improving softness and glutinous texture. PMID:24471084
Biró, L. P.; Kertész, K.; Horváth, E.; Márk, G. I.; Molnár, G.; Vértesy, Z.; Tsai, J.-F.; Kun, A.; Bálint, Zs.; Vigneron, J. P.
2010-01-01
An unusual, intercalated photonic nanoarchitecture was discovered in the elytra of Taiwanese Trigonophorus rothschildi varians beetles. It consists of a multilayer structure intercalated with a random distribution of cylindrical holes normal to the plane of the multilayer. The nanoarchitectures were characterized structurally by scanning electron microscopy and optically by normal incidence, integrated and goniometric reflectance measurements. They exhibit an unsaturated specular and saturated non-specular component of the reflected light. Bioinspired, artificial nanoarchitectures of similar structure and with similar properties were realized by drilling holes of submicron size in a multilayer structure, showing that such photonic nanoarchitectures of biological origin may constitute valuable blueprints for artificial photonic materials. PMID:19933221
Wei, Ta-Chen; Mack, Anne; Chen, Wu; Liu, Jia; Dittmann, Monika; Wang, Xiaoli; Barber, William E
2016-04-01
In recent years, superficially porous particles (SPPs) have drawn great interest because of their special particle characteristics and improvement in separation efficiency. Superficially porous particles are currently manufactured by adding silica nanoparticles onto solid cores using either a multistep multilayer process or a one-step coacervation process. The pore size is mainly controlled by the size of the silica nanoparticles, and the tortuous pore channel geometry is determined by how those nanoparticles randomly aggregate. Such tortuous pore structure is also similar to that of all totally porous particles used in HPLC today. In this article, we report on the development of a next-generation superficially porous particle with a unique pore structure that includes a thinner shell and ordered pore channels oriented normal to the particle surface. The method of making the new superficially porous particles is a process called pseudomorphic transformation (PMT), which is a form of micelle templating. Porosity is no longer controlled by randomly aggregated nanoparticles but rather by micelles that have an ordered liquid crystal structure. The new particle possesses many advantages, such as a narrower particle size distribution, a thinner porous layer with high surface area and, most importantly, highly ordered, non-tortuous pore channels oriented normal to the particle surface. This PMT process has been applied to make 1.8-5.1 μm SPPs with pore size controlled around 75 Å and surface area around 100 m2/g. All particles with different sizes show the same unique pore structure with tunable pore size and shell thickness. The impact of the novel pore structure on the performance of these particles is characterized by measuring van Deemter curves and constructing kinetic plots. Reduced plate heights as low as 1.0 have been achieved on conventional LC instruments.
This indicates higher efficiency of such particles compared to conventional totally porous and superficially porous particles. Copyright © 2016 Elsevier B.V. All rights reserved.
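The van Deemter characterization mentioned above can be sketched in reduced coordinates, h = A + B/v + C*v, where v is the reduced velocity. The coefficients below are illustrative assumptions, not measured values for these particles; a narrow particle size distribution lowers A and a thin shell lowers C, which is how SPPs reach low reduced plate heights.

```python
import numpy as np

def reduced_plate_height(v, A, B, C):
    """Van Deemter equation in reduced coordinates: h = A + B/v + C*v."""
    return A + B / v + C * v

# Illustrative coefficients (hypothetical, for the sketch only).
A, B, C = 0.6, 2.0, 0.05
v = np.linspace(0.5, 30.0, 500)
h = reduced_plate_height(v, A, B, C)

v_opt = np.sqrt(B / C)             # dh/dv = 0  ->  v = sqrt(B/C)
h_min = A + 2 * np.sqrt(B * C)     # minimum reduced plate height
print(v_opt, h_min)
```

The analytic minimum h_min = A + 2*sqrt(B*C) makes explicit why reducing both A and C together is needed to push reduced plate heights toward 1.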
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris D.
2010-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring large and/or intense convective systems. Convective systems were defined from TRMM data as a cluster of pixels with an 85 GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 km^2. The study database consisted of convective systems in West Africa from May-Sep 1998-2007 and in the western Pacific from May-Nov 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences among the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each group, there was no statistically significant interannual variability. Sub-setting the database revealed some sensitivity of distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implies that the probability that a convective system will attain a particular size or intensity does not change interannually. Variability in the number of convective events appears to be more important in determining whether a year is wetter or drier than normal.
Experimental study on infrared radiation temperature field of concrete under uniaxial compression
NASA Astrophysics Data System (ADS)
Lou, Quan; He, Xueqiu
2018-05-01
Infrared thermography, as a nondestructive, non-contact, real-time monitoring method, has great significance in assessing the stability of concrete structures and monitoring their failure. It is necessary to conduct an in-depth study of the mechanism and application of infrared radiation (IR) from concrete failure under loading. In this paper, concrete specimens of size 100 × 100 × 100 mm were subjected to uniaxial compression for the IR tests. The distribution of IR temperatures (IRTs), the surface topography of the IRT field, and the reconstructed IR images were studied. The results show that the IRT distribution follows a Gaussian distribution, and that the R2 of the Gaussian fit changes over the loading time. The anomalies of R2 and of the acoustic emission (AE) counts display opposite variation trends. The surface topography of the IRT field is similar to a hyperbolic paraboloid, which is related to the stress distribution in the sample. The R2 of the hyperbolic paraboloid fit trends upward prior to the fracture that significantly changes the IRT field, and drops sharply in response to this large destruction. Normalization images of the IRT field, including row and column normalization images, were proposed as auxiliary means of analyzing the IRT field. The row and column normalization images show, respectively, the transverse and longitudinal distributions of the IRT field, and they respond clearly to destruction occurring on the sample surface. The new methods and quantitative indices proposed in this paper have theoretical and instructive significance for analyzing the characteristics of the IRT field, as well as for monitoring instability and failure of concrete structures.
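The Gaussian fit and its R2 can be sketched as follows, using synthetic surface temperatures in place of real IRT data; the mean, spread, and bin count are hypothetical assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    """Unnormalized Gaussian profile for histogram fitting."""
    return amp * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

# Synthetic IRT sample: surface temperatures in degrees C (hypothetical).
rng = np.random.default_rng(2)
temps = rng.normal(24.0, 0.5, size=5000)
counts, edges = np.histogram(temps, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 24.0, 1.0])
resid = counts - gaussian(centers, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((counts - counts.mean()) ** 2)
print(popt, r2)
```

While the specimen remains intact the histogram stays near-Gaussian and R2 stays close to 1; a drop in R2 of this kind is the anomaly signal the abstract tracks against the AE counts.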
Hysteresis in suspended sediment to turbidity relations due to changing particle size distributions
Landers, Mark N.; Sturm, Terry W.
2013-01-01
Turbidity (T) is the most ubiquitous of surrogate technologies used to estimate suspended-sediment concentration (SSC). The effects of sediment size on turbidity are well documented; however, effects from changes in particle size distributions (PSD) are rarely evaluated. Hysteresis in relations of SSC-to-turbidity (SSC~T) for single stormflow events was observed and quantified for a data set of 195 concurrent measurements of SSC, turbidity, discharge, velocity, and volumetric PSD collected during five stormflows in 2009–2010 on Yellow River at Gees Mill Road in metropolitan Atlanta, Georgia. Regressions of SSC-normalized turbidity (T/SSC) on concurrently measured PSD percentiles show an inverse, exponential influence of particle size on turbidity that is not constant across the size range of the PSD. The majority of the influence of PSD on T/SSC is from particles of fine-silt and smaller sizes (finer than 16 microns). This study shows that small changes in the often assumed stability of the PSD are significant to SSC~T relations. Changes of only 5 microns in the fine silt and smaller size fractions of suspended sediment PSD can produce hysteresis in the SSC~T rating that can increase error and produce bias. Observed SSC~T hysteresis may be an indicator of changes in sediment properties during stormflows and of potential changes in sediment sources. Trends in the PSD time series indicate that sediment transport is capacity-limited for sand-sized sediment in the channel and supply-limited for fine silt and smaller sediment from the hillslope.
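The inverse, exponential influence of particle size on SSC-normalized turbidity can be sketched as a log-linear regression, T/SSC = a*exp(-b*d). The data pairs below are hypothetical, noise-free illustrations consistent with that functional form, not the Yellow River measurements.

```python
import numpy as np

# Hypothetical pairs: median particle size d (microns) and the
# SSC-normalized turbidity T/SSC (units illustrative only).
d = np.array([4.0, 6.0, 8.0, 12.0, 16.0, 24.0, 32.0])
t_over_ssc = 2.5 * np.exp(-0.12 * d)

# Log-linear least squares: ln(T/SSC) = ln(a) - b*d
slope, intercept = np.polyfit(d, np.log(t_over_ssc), 1)
a, b = np.exp(intercept), -slope
print(a, b)   # recovers a = 2.5, b = 0.12 on this noise-free sketch
```

On real data the residuals of such a fit, tracked through a storm hydrograph, are one way to quantify the SSC~T hysteresis the study describes.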
Zhou, Zhengzhen; Guo, Laodong
2015-06-19
Colloidal retention characteristics, recovery and size distribution of model macromolecules and natural dissolved organic matter (DOM) were systematically examined using an asymmetrical flow field-flow fractionation (AFlFFF) system under various membrane size cutoffs and carrier solutions. Polystyrene sulfonate (PSS) standards with known molecular weights (MW) were used to determine their permeation and recovery rates by membranes with different nominal MW cutoffs (NMWCO) within the AFlFFF system. Based on a ≥90% recovery rate for PSS standards by the AFlFFF system, the actual NMWCOs were determined to be 1.9 kDa for the 0.3 kDa membrane, 2.7 kDa for the 1 kDa membrane, and 33 kDa for the 10 kDa membrane, respectively. After membrane calibration, natural DOM samples were analyzed with the AFlFFF system to determine their colloidal size distribution and the influence from membrane NMWCOs and carrier solutions. Size partitioning of DOM samples showed a predominant colloidal size fraction in the <5 nm or <10 kDa size range, consistent with the size characteristics of humic substances as the main terrestrial DOM component. Recovery of DOM by the AFlFFF system, as determined by UV-absorbance at 254 nm, decreased significantly with increasing membrane NMWCO, from 45% by the 0.3 kDa membrane to 2-3% by the 10 kDa membrane. Since natural DOM is mostly composed of lower MW substances (<10 kDa) and the actual membrane cutoffs are normally larger than their manufacturer ratings, a 0.3 kDa membrane (with an actual NMWCO of 1.9 kDa) is highly recommended for colloidal size characterization of natural DOM. Among the three carrier solutions, borate buffer seemed to provide the highest recovery and optimal separation of DOM. Rigorous calibration with macromolecular standards and optimization of system conditions are a prerequisite for quantifying colloidal size distribution using the flow field-flow fractionation technique. 
In addition, the coupling of AFlFFF with fluorescence EEMs could provide new insights into DOM heterogeneity in different colloidal size fractions. Copyright © 2015 Elsevier B.V. All rights reserved.
Yuasa, H; Nakano, T; Kanaya, Y
1999-02-01
It has been reported that the degree of particle agglomeration in fluidized bed coating is greatly affected by the spray mist size of the coating solution. However, the mist size has generally been measured in open air, and few reports have described measurement of the mist size in the chamber of the fluidized bed, in which actual coating is carried out. Therefore, using hydroxypropylmethyl cellulose (HPMC) aqueous solution as a coating solution, the spray mist size in the chamber of the fluidized bed was measured under various coating conditions, such as the distance from the spray nozzle, fluidization air volume, inlet air temperature, and addition of sodium chloride (NaCl) to the coating solution. The mist size in the fluidized bed was compared with that in open air at various distances from the spray nozzle. Further, the relationship between the spray mist size and the degree of suppression of agglomeration at various NaCl concentrations during fluidized bed coating was studied. The mist size distribution followed a log-normal distribution both in the fluidized bed and in open air. The number-basis median diameter of the spray mist (D50) in the fluidized bed was smaller than that in open air. D50 increased with increasing distance from the spray nozzle in both cases. In the fluidized bed, D50 decreased with increasing fluidization air volume and inlet air temperature. The effect of NaCl concentration on the mist size was hardly observed, but the degree of suppression of agglomeration during coating increased with increasing NaCl concentration in the coating solution.
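A small sketch of the log-normal mist-size analysis, with hypothetical parameters rather than the paper's measured values. For a log-normal distribution, the number-basis median D50 equals exp(mu), i.e., the geometric mean diameter.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical mist diameters (microns) following a log-normal law,
# as reported for both the fluidized bed and open air.
mu, sigma = np.log(12.0), 0.4
diam = rng.lognormal(mu, sigma, size=10000)

# Number-basis median diameter D50 of a log-normal is exp(mu).
d50_est = np.median(diam)
print(d50_est)   # close to 12 microns

# Geometric mean and geometric SD, the natural log-normal parameters:
print(np.exp(np.log(diam).mean()), np.exp(np.log(diam).std()))
```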
Percolation Pore Network Study on the Residue Gas Saturation of Dry Reservoir Rocks
NASA Astrophysics Data System (ADS)
Cheng, T.; Tang, Y. B.; Zou, G. Y.; Jiang, K.; Li, M.
2014-12-01
We model the effect of pore size heterogeneity and pore connectivity on the residue gas saturation of dry gas reservoir rocks. If snap-off does not exist and only piston displacement takes place in all pores of the same size during the imbibition process, then in the extreme case the residue gas saturation will be equal to zero. Thus we can suppose that the residue gas saturation of dry rocks is mainly controlled by the pore size distribution. To verify this assumption, percolation pore networks (three-dimensional simple cubic (SC) and body-centered cubic (BCC)) were used in the study. The connectivity and the pore size distribution in the percolation pore network could be changed randomly. The concepts of water phase connectivity zw (i.e., water coordination number) and gas phase connectivity zg (i.e., gas coordination number) are introduced here. zw and zg change during simulation and can be estimated numerically from simulations of networks gradually saturated by water. The simulation results show that when zg is less than or equal to 1.5 during quasi-static water imbibition, gas becomes trapped in the rock pores. Network simulation results also show that the residue gas saturation Srg follows a power-law relationship (i.e., Srg ∝ σr^α, where σr is the normalized standard deviation of the pore radius distribution and the exponent α is a function of coordination number). This indicates that, contrary to what previous studies would suggest, the residue gas saturation has no explicit relationship with porosity and permeability; the pore radius distribution is the principal factor determining the residue gas saturation of dry reservoir rocks.
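The reported power law Srg ∝ σr^α is naturally fitted on log-log axes. The data below are hypothetical values consistent with the stated form, not the network simulation results; c and alpha are assumed for the sketch.

```python
import numpy as np

# Hypothetical network results: normalized standard deviation of the
# pore radius distribution vs. residue gas saturation, Srg = c*sigma_r**alpha.
sigma_r = np.array([0.1, 0.2, 0.3, 0.5, 0.8])
srg = 0.6 * sigma_r ** 0.9

# Fit the exponent on log-log axes: ln Srg = ln c + alpha * ln sigma_r
alpha, ln_c = np.polyfit(np.log(sigma_r), np.log(srg), 1)
print(alpha, np.exp(ln_c))   # recovers alpha = 0.9, c = 0.6
```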
STUDIES OF MEIOSIS IN LUZULA PURPUREA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordenskiold, H.
1962-01-01
In Luzula purpurea, which has diffuse centromeres and only six chromosomes, a study was made of the separation of chromatids during the first meiotic division, the pairing of the free chromatids at interkinesis, and the chromosomes of the first mitosis of the pollen tetrads. Two strains were used: the normal type of L. purpurea with 2n = 6, and certain plants selected among the x-irradiated survivors. The normal plants had six somatic chromosomes of equal size that could not be distinguished from each other. The x-irradiated plants originated from material treated with 2500 or 1000 r as seedlings. Mitotic chromosome patterns were examined in progenies of the treated plants. Several of the irradiated plants were found to have 2n = 7, with five normal-sized chromosomes and one chromosome fragmented into two pieces of about equal size. Pairing of chromosomes during meiosis in these irradiated plants was compared with that in the normal L. purpurea with 2n = 6, and the distribution of large and small chromosomes among the four cells of the pollen tetrads was examined. Five of the original six chromosomes were unaffected by the x-ray treatment. A study of meiosis verified the postulated type of fragmentation of the sixth. One heteromorphic association is formed in each cell of meiosis, originating from the pairing between the two fragments and their homologous unbroken partner. The association is open at metaphase and separates equationally during first anaphase. In the tetrad, the two fragments regularly substitute for the broken chromosome. The plants behave cytologically in the same way as hybrids between diploid strains and naturally occurring endonuclear polyploids with half-sized chromosomes. In the next generation, plants homozygous for the fragmented chromosome were found, showing a regular meiosis with two large and two small bivalents.
The origin of the single fragmented chromosome in the irradiated material is difficult to explain; however, it was found earlier that chromosomes broken by x rays may persist as fragments in L. purpurea. It is noteworthy that the result of the fragmentation corresponds to the naturally occurring changes of chromosome pattern found in the genus Luzula. The pairing behavior of the chromosomes during meiosis in the heterozygote is the same as that described for corresponding hybrids of L. campestris, as is the distribution of the large and small chromosomes among the tetrad cells, where two half-sized chromosomes always substitute for one large one. In such a case, the progeny plants homozygous for the fragmented chromosome can give rise to a population that may be considered an artificially produced endonuclear aneuploid strain, similar to the ones found naturally in the genus Luzula.
Santuy, Andrea; Rodríguez, José-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Angel
2018-01-01
Changes in the size of the synaptic junction are thought to have significant functional consequences. We used focused ion beam milling and scanning electron microscopy (FIB/SEM) to obtain stacks of serial sections from the six layers of the rat somatosensory cortex. We have segmented in 3D a large number of synapses (n = 6891) to analyze the size and shape of excitatory (asymmetric) and inhibitory (symmetric) synapses, using dedicated software. This study provided three main findings. Firstly, the mean synaptic sizes were smaller for asymmetric than for symmetric synapses in all cortical layers. In all cases, synaptic junction sizes followed a log-normal distribution. Secondly, most cortical synapses had disc-shaped postsynaptic densities (PSDs; 93%). A few were perforated (4.5%), while a smaller proportion (2.5%) showed a tortuous horseshoe-shaped perimeter. Thirdly, the curvature was larger for symmetric than for asymmetric synapses in all layers. However, there was no correlation between synaptic area and curvature. PMID:29387782
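The log-normal finding can be reproduced on synthetic data; the parameters below are illustrative assumptions, not the paper's fitted values. A log-normal fit with the location fixed at zero recovers the underlying mu and sigma of the log-sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical synaptic junction areas; scale and spread are
# illustrative, not measured values from the study.
areas = rng.lognormal(mean=9.0, sigma=0.6, size=6891)

# Fit a log-normal with location fixed at 0 (two-parameter form):
shape, _, scale = stats.lognorm.fit(areas, floc=0)
mu, sigma = np.log(scale), shape
print(mu, sigma)

# A log-normal implies log-sizes are normal: compare moments directly.
print(np.log(areas).mean(), np.log(areas).std())
```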
Sample size for estimating mean and coefficient of variation in species of crotalarias.
Toebe, Marcos; Machado, Letícia N; Tartaglia, Francieli L; Carvalho, Juliana O DE; Bandeira, Cirineu T; Cargnelutti Filho, Alberto
2018-04-16
The objective of this study was to determine the sample size necessary to estimate the mean and coefficient of variation in four species of crotalarias (C. juncea, C. spectabilis, C. breviflora and C. ochroleuca). An experiment was carried out for each species during the 2014/15 season. At harvest, 1,000 pods of each species were randomly collected. For each pod the following were measured: mass of the pod with and without seeds; length, width, and height of the pod; number and mass of seeds per pod; and hundred-seed mass. Measures of central tendency, variability, and distribution were calculated, and normality was verified. The sample size necessary to estimate the mean and coefficient of variation with amplitudes of the 95% confidence interval (ACI95%) of 2%, 4%, ..., 20% was determined by resampling with replacement. The sample size varies among species and characters, a larger sample being necessary to estimate the mean than to estimate the coefficient of variation.
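A sketch of the resampling-with-replacement procedure, under assumed normally distributed pod masses with illustrative parameters; the 95% CI amplitude is expressed as a percentage of the mean, in the spirit of the ACI95% criterion, and the step size and target are assumptions, not the paper's settings.

```python
import numpy as np

def sample_size_for_mean(data, target_pct, reps=2000, seed=5):
    """Smallest n (in steps of 10) whose bootstrap 95% confidence
    interval amplitude for the mean is at most target_pct percent of
    the mean. Resampling with replacement, illustrative settings."""
    rng = np.random.default_rng(seed)
    for n in range(10, len(data) + 1, 10):
        means = [rng.choice(data, size=n, replace=True).mean()
                 for _ in range(reps)]
        lo, hi = np.percentile(means, [2.5, 97.5])
        if (hi - lo) / data.mean() * 100 <= target_pct:
            return n
    return len(data)

# Hypothetical pod masses (mg); normality is assumed for the sketch.
pods = np.random.default_rng(6).normal(50.0, 8.0, size=1000)
n_req = sample_size_for_mean(pods, target_pct=12)
print(n_req)
```

Tightening target_pct increases the required n roughly with the inverse square of the amplitude, which is why the paper reports sample sizes for a whole grid of ACI95% values.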
Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method
Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao
2016-01-01
This paper presents a clump model based on the Discrete Element Method. The clump model approximates a real particle more closely than a single sphere does. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall were performed, using five clump models with different sphericities. By comparing the simulated normal force on the rigid wall with the experimental results, the clump model with the best sphericity was selected for the subsequent analysis and discussion. The calculated normal forces agreed well with the experimental results, verifying the effectiveness of the clump model. The total normal force and bending moment on the rigid wall and the motion of the granular flow were then analyzed further. Finally, numerical simulations using the clump model with different grain compositions were compared. By examining the normal force on the rigid wall and the particle-size distribution at its front in the final state, the effect of grain composition on the force exerted on the wall was revealed: as particle size increases, the peak force on the retaining wall also increases. These results can provide a basis for research on related disasters and for the design of protective structures. PMID:27513661
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques for deriving several statistical wind models from the properties of the multivariate normal probability function are presented. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions follow from the five sample parameters of the bivariate normal distribution of wind. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shear given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
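Property (2) above holds exactly for zero-mean, equal-variance, uncorrelated components; a quick Monte Carlo check of that special case (the value of sigma and sample size are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 5.0                          # common std dev of the components (m/s), hypothetical
u = rng.normal(0.0, sigma, 100_000)  # zonal wind component
v = rng.normal(0.0, sigma, 100_000)  # meridional wind component
speed = np.hypot(u, v)               # wind speed = modulus of the vector wind

# Compare empirical speeds with the Rayleigh(sigma) distribution via a K-S statistic.
ks = stats.kstest(speed, stats.rayleigh(scale=sigma).cdf)
print(f"KS statistic = {ks.statistic:.4f}")  # small value => consistent with Rayleigh
```

The sample mean of `speed` should also approach the Rayleigh mean, sigma * sqrt(pi/2).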
Yamada, Shigeki; Ishikawa, Masatsune; Iwamuro, Yasushi; Yamamoto, Kazuo
2016-01-01
To clarify the pathogenesis of two different types of adult-onset normal-pressure hydrocephalus (NPH), we investigated cerebrospinal fluid distribution on the high-field three-dimensional MRI. The subarachnoid spaces in secondary NPH were smaller than those in the controls, whereas those in idiopathic NPH were of similar size to the controls. In idiopathic NPH, however, the basal cistern and Sylvian fissure were enlarged in concurrence with ventricular enlargement towards the z-direction, but the convexity subarachnoid space was severely diminished. In this article, we provide evidence that the key cause of the disproportionate cerebrospinal fluid distribution in idiopathic NPH is the compensatory direct CSF communication between the inferior horn of the lateral ventricles and the ambient cistern at the choroidal fissure. In contrast, all parts of the subarachnoid spaces were equally and severely decreased in secondary NPH. Blockage of CSF drainage from the subarachnoid spaces could cause the omnidirectional ventricular enlargement in secondary NPH. PMID:27941913
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Banerjee, Anjishnu
2015-01-01
Derive lower leg injury risk functions using survival analysis and determine injury reference values (IRV) applicable to human mid-size male and small-size female anthropometries by conducting a meta-analysis of experimental data from different studies under axial impact loading to the foot-ankle-leg complex. Specimen-specific dynamic peak force, age, total body mass, and injury data were obtained from tests conducted by applying the external load to the dorsal surface of the foot of postmortem human subject (PMHS) foot-ankle-leg preparations. Calcaneus and/or tibia injuries, alone or in combination and with/without involvement of adjacent articular complexes, were included in the injury group. Injury and noninjury tests were included. Maximum axial loads recorded by a load cell attached to the proximal end of the preparation were used. Data were analyzed by treating force as the primary variable. Age was considered as the covariate. Data were censored based on the number of tests conducted on each specimen and whether it remained intact or sustained injury; that is, right, left, and interval censoring. The best fits from different distributions were based on the Akaike information criterion; mean and plus and minus 95% confidence intervals were obtained; and normalized confidence interval sizes (quality indices) were determined at 5, 10, 25, and 50% risk levels. The normalization was based on the mean curve. Using human-equivalent age as 45 years, data were normalized and risk curves were developed for the 50th and 5th percentile human size of the dummies. Out of the available 114 tests (76 fracture and 38 no injury) from 5 groups of experiments, survival analysis was carried out using 3 groups consisting of 62 tests (35 fracture and 27 no injury). 
Peak forces associated with 4 specific risk levels at 25, 45, and 65 years of age are given along with probability curves (mean and plus and minus 95% confidence intervals) for PMHS and normalized data applicable to male and female dummies. Quality indices increased (less tightness-of-fit) with decreasing age and risk level for all age groups and these data are given for all chosen risk levels. These PMHS-based probability distributions at different ages using information from different groups of researchers constituting the largest body of data can be used as human tolerances to lower leg injury from axial loading. Decreasing quality indices (increasing index value) at lower probabilities suggest the need for additional tests. The anthropometry-specific mid-size male and small-size female mean human risk curves along with plus and minus 95% confidence intervals from survival analysis and associated IRV data can be used as a first step in studies aimed at advancing occupant safety in automotive and other environments.
Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.
Kobayashi, Katsumi
2005-05-01
The data obtained from toxicity studies are examined for homogeneity of variance but usually not for normality of distribution. In this study, I examined the measured items of a carcinogenicity/chronic toxicity study in rats for both homogeneity of variance and normal distribution. Many hematology and biochemistry items showed a non-normal distribution. To test the normality of data obtained from toxicity studies, the data of the concurrent control group may be examined; for data showing a non-normal distribution, robust non-parametric tests may be applied.
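One common way to carry out the suggested normality screen of a concurrent control group is the Shapiro-Wilk test; a sketch with synthetic data (the variable names, distributions, and group size are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical concurrent-control values: body weight roughly normal,
# a liver-enzyme activity roughly log-normal (right-skewed).
body_weight = rng.normal(350.0, 25.0, size=200)
enzyme = rng.lognormal(mean=3.0, sigma=0.8, size=200)

for name, values in [("body weight", body_weight), ("enzyme", enzyme)]:
    w, p = stats.shapiro(values)
    verdict = "looks normal" if p >= 0.05 else "non-normal -> prefer a robust non-parametric test"
    print(f"{name:11s}: Shapiro-Wilk p = {p:.2e} ({verdict})")
```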
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions of parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is acknowledged by many that the normality assumption is often not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter lambda = -1 leads to the best results.
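Assuming "parameter lambda = -1" refers to the reciprocal power transformation, its normalizing effect can be illustrated on simulated Ex-Gaussian reaction times (the mu, sigma, and tau values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated reaction times (ms): Ex-Gaussian = Normal(mu, sigma) + Exponential(tau).
mu, sigma, tau = 400.0, 40.0, 200.0
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# Power transform with lambda = -1 (reciprocal), negated so larger RTs stay larger.
rt_recip = -1.0 / rt

print(f"skewness before: {stats.skew(rt):+.2f}")
print(f"skewness after : {stats.skew(rt_recip):+.2f}")
```

The transform compresses the long right tail, pulling the sample skewness toward zero.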
Percent area coverage through image analysis
NASA Astrophysics Data System (ADS)
Wong, Chung M.; Hong, Sung M.; Liu, De-Ling
2016-09-01
The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has conventionally been calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving this set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and may therefore introduce error in the PAC calculation. Other errors may also be introduced by the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and the cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC was calculated through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results show that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
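The conventional binned calculation can be sketched as below. The 0.926 coefficient and per-0.1 m² normalization reflect the commonly quoted MIL-STD-1246C-style count relation, but the constant, bin edges, disc-shaped particle assumption, and cleanliness level here should all be treated as illustrative assumptions:

```python
import numpy as np

def particle_density(x_um, level):
    """Cumulative particle count per 0.1 m^2 with size >= x_um for a given
    cleanliness level, using a MIL-STD-1246C-style log-normal relation:
    log10 N = 0.926 * (log10(level)^2 - log10(x)^2).  (Constant assumed.)"""
    return 10.0 ** (0.926 * (np.log10(level) ** 2 - np.log10(x_um) ** 2))

def pac_percent(level, bins_um):
    """Percent area coverage: sum over bins of (count in bin) * (disc area at midpoint)."""
    counts = particle_density(bins_um[:-1], level) - particle_density(bins_um[1:], level)
    mid = np.sqrt(bins_um[:-1] * bins_um[1:])    # geometric bin midpoints (um)
    area_um2 = np.pi * (mid / 2.0) ** 2          # disc cross-section per particle
    covered = np.sum(counts * area_um2)          # um^2 covered per 0.1 m^2 = 1e11 um^2
    return 100.0 * covered / 1e11

bins = np.array([1.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0, 300.0])
print(f"PAC at cleanliness level 300: {pac_percent(300.0, bins):.4f} %")
```

Replacing the assumed disc area with per-particle areas measured from microscope images gives the image-analysis PAC the paper compares against.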
Sonographic evaluation of polycystic ovaries.
Zhu, Ruo-Yan; Wong, Yee-Chee; Yong, Eu-Leong
2016-11-01
The morphological features of the ovaries in women with polycystic ovary syndrome (PCOS) have been well described by ultrasound imaging technology. These include enlarged ovary size, multiple small follicles of similar size, increased ovarian stromal volume and echogenicity, peripheral distribution of the follicles, and higher stromal blood flow. Ultrasound identification of polycystic ovarian morphology (PCOM) has been recognized as a component of PCOS diagnosis. With the advance of ultrasound technology, a new definition has recently been proposed. There is, however, a paucity of data on ovarian morphology in normal and PCOS adolescents. Magnetic resonance imaging has the potential to be an alternative imaging modality for diagnosing PCOM in adolescence. Copyright © 2016. Published by Elsevier Ltd.
Synthesis of zinc ultrafine powders via the Guen–Miller flow-levitation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jigatch, A. N., E-mail: jan@chph.ras.ru; Leipunskii, I. O.; Kuskov, M. L.
2015-12-15
Zinc ultrafine powders (UFPs) with average particle sizes of 0.175 to 1.24 μm are synthesized via the flow-levitation method. The peculiarities of the formation of zinc UFPs are considered with respect to the carrier gas properties (heat capacity, thermal conductivity, and diffusion coefficient), as well as the gas flow parameters (pressure and flow rate). The obtained zinc particles are studied via scanning electron microscopy and X-ray diffraction. The factors determining the crystal structure of zinc particles and their size distribution are discussed as well. Data on the oxidation of zinc stored in unsealed containers under normal conditions are also presented.
Cressoni, Massimo; Chiumello, Davide; Chiurazzi, Chiara; Brioni, Matteo; Algieri, Ilaria; Gotti, Miriam; Nikolla, Klodiana; Massari, Dario; Cammaroto, Antonio; Colombo, Andrea; Cadringher, Paolo; Carlesso, Eleonora; Benti, Riccardo; Casati, Rosangela; Zito, Felicia; Gattinoni, Luciano
2016-01-01
The aim of the study was to determine the size and location of homogeneous inflamed/noninflamed and inhomogeneous inflamed/noninflamed lung compartments and their association with acute respiratory distress syndrome (ARDS) severity. In total, 20 ARDS patients underwent 5 and 45 cmH2O computed tomography (CT) scans to measure lung recruitability. [(18)F]2-fluoro-2-deoxy-d-glucose ([(18)F]FDG) uptake and lung inhomogeneities were quantified with a positron emission tomography-CT scan at 10 cmH2O. We defined four compartments with normal/abnormal [(18)F]FDG uptake and lung homogeneity. The homogeneous compartment with normal [(18)F]FDG uptake was primarily composed of well-inflated tissue (80±16%), double-sized in nondependent lung (32±27% versus 16±17%, p<0.0001) and decreased in size from mild, moderate to severe ARDS (33±14%, 26±20% and 5±9% of the total lung volume, respectively, p=0.05). The homogeneous compartment with high [(18)F]FDG uptake was similarly distributed between the dependent and nondependent lung. The inhomogeneous compartment with normal [(18)F]FDG uptake represented 4% of the lung volume. The inhomogeneous compartment with high [(18)F]FDG uptake was preferentially located in the dependent lung (21±10% versus 12±10%, p<0.0001), mostly at the open/closed interfaces and related to recruitability (r(2)=0.53, p<0.001). The homogeneous lung compartment with normal inflation and [(18)F]FDG uptake decreases with ARDS severity, while the inhomogeneous poorly/not inflated compartment increases. Most of the lung inhomogeneities are inflamed. A minor fraction of healthy tissue remains in severe ARDS. Copyright ©ERS 2016.
NASA Astrophysics Data System (ADS)
Campbell, Kirby R.; Tilbury, Karissa B.; Campagnola, Paul J.
2015-03-01
Here, we examine ovarian cancer extracellular matrix (ECM) modification by measuring the wavelength dependence of optical scattering and quantitative second-harmonic generation (SHG) imaging metrics in the range of 800-1100 nm, in order to determine fibrillar changes in ex vivo normal ovary and type I and type II ovarian cancer. The mass fractal dimension of the collagen fiber structure is analyzed based on a power-law correlation function using spectral measurements of the reduced scattering coefficient μs', where the mass fractal dimension is related to the power. Values of μs' are measured using independent determinations of μs and g: by on-axis attenuation measurements using the Beer-Lambert law, and by fitting the angular distribution of scattering to the Henyey-Greenstein phase function, respectively. Quantitative spectral SHG imaging on the same tissues determines FSHG/BSHG creation ratios related to fibril size and harmonophore distributions. Both techniques probe fibril packing order, but optical scattering probes structures of sizes from about 50-2000 nm, whereas SHG imaging, although only able to resolve individual fibers, builds contrast from the assembly of fibrils. Our findings suggest that type I ovarian tumors have the most ordered collagen fibers, followed by normal ovary, with type II tumors showing the least order.
Abnormal grain growth in AISI 304L stainless steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirdel, M., E-mail: mshirdel1989@ut.ac.ir; Mirzadeh, H., E-mail: hmirzadeh@ut.ac.ir; Advanced Metalforming and Thermomechanical Processing Laboratory, School of Metallurgy and Materials Engineering, University of Tehran, Tehran
2014-11-15
The microstructural evolution during abnormal grain growth (secondary recrystallization) in 304L stainless steel was studied over a wide range of annealing temperatures and times. At relatively low temperatures, the grain growth mode was identified as normal. However, at homologous temperatures between 0.65 (850 °C) and 0.7 (900 °C), a transition in grain growth mode from normal to abnormal, also evident from bimodality in the grain size distribution histograms, was detected and attributed to the dissolution/coarsening of carbides. Microstructural features such as dispersed carbides were characterized by optical metallography, X-ray diffraction, scanning electron microscopy, energy dispersive X-ray analysis, and microhardness. Continued annealing for long times led to the completion of secondary recrystallization and the subsequent reappearance of the normal growth mode. Another instance of abnormal grain growth was observed at homologous temperatures higher than 0.8, which may be attributed to the grain boundary faceting/defaceting phenomenon. It was also found that once the size of abnormal grains reached a critical value, it changed little thereafter and grain growth became practically stagnant. - Highlights: • Abnormal grain growth (secondary recrystallization) in AISI 304L stainless steel • Exaggerated grain growth due to dissolution/coarsening of carbides • The enrichment of carbide particles by titanium • Abnormal grain growth due to grain boundary faceting at very high temperatures • The stagnancy of abnormal grain growth by annealing beyond a critical time.
Rock-avalanche Deposits Record Quantitative Information On Internal Deformation During Runout
NASA Astrophysics Data System (ADS)
McSaveney, M. J.; Zhang, M.
2016-12-01
The rock avalanche deposit at Wenjiagou Creek, China, shows grain-size changes with distance from source and with depth below the surface. To see what quantitative information on internal deformation might be inferred from such data, we conducted a series of laboratory tests using a conventional ring-shear apparatus (Torshear Model 27-WF2202) at GNS Science, Lower Hutt, NZ. Lacking ready access to the limestone of the Wenjiagou Creek deposit, we used locally sourced 0.5-2 mm sand sieved from the greywacke-derived gravel bed of the Hutt River. To keep within the reliable operating limits of the apparatus, we conducted 38 dry tests using the combinations of normal stress, shear rate and shear displacement listed in Table 1. Size distributions were determined over the range 0.1-2000 µm using a laser sizer. Results showed that the number of grain breakages increased systematically with increasing normal stress and shear displacement, while shear rate had no significant influence. We concluded that, if calibrated using appropriate materials, we would be able to quantify amounts of internal shear deformation in a rock avalanche by analyzing grain-size variations in the deposit.
Table 1. Ring-shear test program:
Normal stress 200 kPa: shear rates 100, 74.2, 37.1 mm/min; shear displacements 0, 100, 200, 500, 1000, 3000 mm
Normal stress 400 kPa: shear rates 100, 74.2, 37.1 mm/min; shear displacements 0, 100, 200, 500, 1000 mm
Normal stress 600 kPa: shear rates 100, 74.2 mm/min; shear displacements 0, 100, 200, 500, 1000 mm
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.
Bishara, Anthony J; Hittner, James B
2015-10-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
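A rankit-style correlation of the kind compared above can be sketched as follows; the Blom offsets (0.375, 0.25) are one common convention and an assumption here, as are the heavy-tailed synthetic data:

```python
import numpy as np
from scipy import stats

def rankit(x):
    """Rank-based inverse normal (rankit) transform of a 1-D sample (Blom offsets assumed)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.375) / (len(x) + 0.25))

rng = np.random.default_rng(4)
n = 500
x = rng.standard_t(df=3, size=n)            # heavy-tailed predictor
y = 0.5 * x + rng.standard_t(df=3, size=n)  # correlated outcome with heavy-tailed noise

r_pearson = stats.pearsonr(x, y)[0]                    # raw Pearson (inflation-prone)
r_rankit = stats.pearsonr(rankit(x), rankit(y))[0]     # Pearson on rankit-transformed data
print(f"Pearson on raw data: {r_pearson:+.3f}")
print(f"Pearson on rankits : {r_rankit:+.3f}")
```

The transform replaces each observation with the normal score of its rank, so the subsequent Pearson computation operates on approximately normal marginals.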
NASA Astrophysics Data System (ADS)
Borhan, M. Z.; Ahmad, R.; Rusop, M.; Abdullah, S.
2012-11-01
Centella asiatica (C. asiatica) contains asiaticoside as a bioactive constituent that can potentially be used in skin healing. Unfortunately, conventional powders are difficult for the body to absorb effectively. To improve their usefulness, nano C. asiatica powder was prepared. Milling times of 0.5, 2, 4, 6, 8 and 10 hours were investigated. The effect of ball milling at different times was characterized using particle size analysis and FTIR spectroscopy. The fineness of the ground product was evaluated by recording the z-average size (nm), undersize distribution and polydispersity index (PdI). The results show that the smallest mean particle size is 233 nm, while the FTIR spectra show no change in the major components of the C. asiatica powders with milling time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barron, L.H.; Rae, A.; Brock, D.J.H.
1994-09-01
The CCG-rich sequence immediately 3′ to the CAG repeat that is expanded in Huntington's disease (HD) has recently been shown to be polymorphic, with at least 5 alleles differing by multiples of 3 bp found in the normal population. We have studied the allele distribution in 200 Scottish HD families and have found very strong evidence for almost complete disequilibrium in this population. For all the families, phase was unambiguously determined, and 196 were shown to have a CCG repeat allele of 176 bp cosegregating with the HD chromosome. This observation differs significantly from the normal population distribution, in which 31% of people have an allele of 185 bp. This overrepresentation of the 176 bp allele is also seen in the normal population on chromosomes with greater than 26 CAG repeats. The DNA sequence across the CAG and CCG repeats has been obtained for the four HD patients that do not have a 176 bp CCG repeat allele and will be presented. We present strong evidence of genetic heterogeneity in the Scottish HD population, making a founder effect very unlikely. These data suggest that we may have identified a region of the IT15 gene that is critical in the mechanism of CAG expansion in Huntington's disease.
Nune, K C; Kumar, A; Misra, R D K; Li, S J; Hao, Y L; Yang, R
2017-02-01
We elucidate here the osteoblasts functions and cellular activity in 3D printed interconnected porous architecture of functionally gradient Ti-6Al-4V alloy mesh structures in terms of cell proliferation and growth, distribution of cell nuclei, synthesis of proteins (actin, vinculin, and fibronectin), and calcium deposition. Cell culture studies with pre-osteoblasts indicated that the interconnected porous architecture of functionally gradient mesh arrays was conducive to osteoblast functions. However, there were statistically significant differences in the cellular response depending on the pore size in the functionally gradient structure. The interconnected porous architecture contributed to the distribution of cells from the large pore size (G1) to the small pore size (G3), with consequent synthesis of extracellular matrix and calcium precipitation. The gradient mesh structure significantly impacted cell adhesion and influenced the proliferation stage, such that there was high distribution of cells on struts of the gradient mesh structure. Actin and vinculin showed a significant difference in normalized expression level of protein per cell, which was absent in the case of fibronectin. Osteoblasts present on mesh struts formed a confluent sheet, bridging the pores through numerous cytoplasmic extensions. The gradient mesh structure fabricated by electron beam melting was explored to obtain fundamental insights on cellular activity with respect to osteoblast functions. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Cutten, D. R.; Pueschel, R. F.; Srivastava, V.; Clarke, A. D.; Rothermel, J.; Spinhirne, J. D.; Menzies, R. T.
1996-01-01
Aerosol concentrations and size distributions in the middle and upper troposphere over the remote Pacific Ocean were measured with a forward scattering spectrometer probe (FSSP) on the NASA DC-8 aircraft during NASA's Global Backscatter Experiment (GLOBE) in May-June 1990. The FSSP size channels were recalibrated based on refractive index estimates from flight-level aerosol volatility measurements with a collocated laser optical particle counter (LOPC). The recalibrated FSSP size distributions were averaged over 100-s intervals, fitted with log-normal distributions and used to calculate aerosol backscatter coefficients at selected wavelengths. The FSSP-derived backscatter estimates were averaged over 300-s intervals to reduce large random fluctuations. The smoothed FSSP aerosol backscatter coefficients were then compared with LOPC-derived backscatter values and with backscatter measured at or near flight level from four lidar systems operating at 0.53, 1.06, 9.11, 9.25, and 10.59 micrometers. Agreement between FSSP-derived and lidar-measured backscatter was generally best at flight level in homogeneous aerosol fields and at high backscatter values. FSSP data often underestimated low backscatter values, especially at the longer wavelengths, due to poor counting statistics for larger particles (greater than 0.8 micrometers diameter) that usually dominate aerosol backscatter at these wavelengths. FSSP data also underestimated backscatter at shorter wavelengths when particles smaller than the FSSP lower cutoff diameter (0.35 micrometers) made significant contributions to the total backscatter.
Wang, Ying-Fang; Tsai, Perng-Jy; Chen, Chun-Wan; Chen, Da-Ren; Dai, Yu-Tung
2011-12-30
The aims of the present study were to measure size distributions and estimate workers' exposure concentrations of oil mist nanoparticles in three selected workplaces (the forming, threading, and heat treating areas) of a fastener manufacturing plant using a modified electrical aerosol detector (MEAD). The results were further compared with those obtained simultaneously from a nanoparticle surface area monitor (NSAM) and a scanning mobility particle sizer (SMPS) for validation purposes. Results show that oil mist nanoparticles in the three selected process areas were formed mainly through evaporation and condensation. The measured size distributions of nanoparticles were consistently unimodal. The estimated fraction of nanoparticles deposited in the alveolar (AV) region was consistently much higher than in the head airway (HD) and tracheobronchial (TB) regions, in both number and surface area concentration terms. However, a significant difference was found in the estimated fraction of nanoparticles deposited in each region when different exposure metrics were used. Comparable results were obtained from the NSAM and the MEAD. After normalization, no significant difference was found between the results obtained from the SMPS and the MEAD. It is concluded that the MEAD results are suitable for assessing oil mist nanoparticle exposures. Copyright © 2011 Elsevier B.V. All rights reserved.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we apply the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
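A Monte Carlo estimate of VaR and CVaR from a fitted two-component normal mixture can be sketched as below; the mixture weights, means, and standard deviations are illustrative placeholders, not the fitted FBMKLCI values:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical two-component normal mixture for weekly returns:
# a "calm" and a "turbulent" regime (weights, means, std devs are illustrative).
w = np.array([0.8, 0.2])
mu = np.array([0.004, -0.01])
sd = np.array([0.015, 0.05])

# Draw from the mixture: pick a component, then sample that component's normal.
comp = rng.choice(2, size=200_000, p=w)
returns = rng.normal(mu[comp], sd[comp])

alpha = 0.05
var95 = -np.quantile(returns, alpha)          # value at risk (positive = loss)
cvar95 = -returns[returns <= -var95].mean()   # expected loss beyond the VaR level
print(f"VaR 95%  = {var95:.4f}")
print(f"CVaR 95% = {cvar95:.4f}")
```

Because CVaR averages the losses beyond the VaR threshold, it always exceeds VaR for a continuous return distribution.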
Adding muscle where you need it: non-uniform hypertrophy patterns in elite sprinters.
Handsfield, G G; Knaus, K R; Fiorentino, N M; Meyer, C H; Hart, J M; Blemker, S S
2017-10-01
Sprint runners achieve much higher gait velocities and accelerations than average humans, due in part to large forces generated by their lower limb muscles. Various factors have been explored in the past to understand sprint biomechanics, but the distribution of muscle volumes in the lower limb has not been investigated in elite sprinters. In this study, we used non-Cartesian MRI to determine muscle sizes in vivo in a group of 15 NCAA Division I sprinters. Normalizing muscle sizes by body size, we compared sprinter muscles to non-sprinter muscles, calculated Z-scores to determine non-uniformly large muscles in sprinters, assessed bilateral symmetry, and assessed gender differences in sprinters' muscles. While limb musculature per height-mass was 22% greater in sprinters than in non-sprinters, individual muscles were not all uniformly larger. Hip- and knee-crossing muscles were significantly larger among sprinters (mean difference: 30%, range: 19-54%) but only one ankle-crossing muscle was significantly larger (tibialis posterior, 28%). Population-wide asymmetry was not significant in the sprint population but individual muscle asymmetries exceeded 15%. Gender differences in normalized muscle sizes were not significant. The results of this study suggest that non-uniform hypertrophy patterns, particularly large hip and knee flexors and extensors, are advantageous for fast sprinting. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Nerantzaki, Sofia; Papalexiou, Simon Michael
2017-04-01
Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of a tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can generally be categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study builds on the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
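A minimal sketch of the empirical MEF and its slope, assuming only that the mean excess e(u) = E[X − u | X > u] is estimated over a grid of thresholds; the sample sizes and distribution parameters are illustrative, and the Pareto shape is chosen so its variance is finite.

```python
import numpy as np

def mean_excess(x, thresholds):
    # empirical e(u) = E[X - u | X > u] at each threshold u
    return np.array([(x[x > u] - u).mean() for u in thresholds])

rng = np.random.default_rng(1)
light = rng.exponential(scale=2.0, size=50_000)        # light (exponential) tail
heavy = (rng.pareto(3.0, size=50_000) + 1.0) * 2.0     # heavy (Pareto, shape 3) tail

u_light = np.quantile(light, np.linspace(0.5, 0.95, 20))
u_heavy = np.quantile(heavy, np.linspace(0.5, 0.95, 20))

# slope of a straight line fitted to (u, e(u)):
# ~0 for the exponential tail, clearly positive for the Pareto tail
slope_light = np.polyfit(u_light, mean_excess(light, u_light), 1)[0]
slope_heavy = np.polyfit(u_heavy, mean_excess(heavy, u_heavy), 1)[0]
```

A near-zero slope is consistent with a light (exponential-type) tail, while a positive slope signals sub-exponential, heavy-tailed behavior, which is the diagnostic the study formalizes with sample-size-dependent confidence intervals.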
The Structure of the Young Star Cluster NGC 6231. II. Structure, Formation, and Fate
NASA Astrophysics Data System (ADS)
Kuhn, Michael A.; Getman, Konstantin V.; Feigelson, Eric D.; Sills, Alison; Gromadzki, Mariusz; Medina, Nicolás; Borissova, Jordanka; Kurtev, Radostin
2017-12-01
The young cluster NGC 6231 (stellar ages ˜2-7 Myr) is observed shortly after star formation activity has ceased. Using the catalog of 2148 probable cluster members obtained from Chandra, VVV, and optical surveys (Paper I), we examine the cluster’s spatial structure and dynamical state. The spatial distribution of stars is remarkably well fit by an isothermal sphere with moderate elongation, while other commonly used models like Plummer spheres, multivariate normal distributions, or power-law models are poor fits. The cluster has a core radius of 1.2 ± 0.1 pc and a central density of ˜200 stars pc-3. The distribution of stars is mildly mass segregated. However, there is no radial stratification of the stars by age. Although most of the stars belong to a single cluster, a small subcluster of stars is found superimposed on the main cluster, and there are clumpy non-isotropic distributions of stars outside ˜4 core radii. When the size, mass, and age of NGC 6231 are compared to other young star clusters and subclusters in nearby active star-forming regions, it lies at the high-mass end of the distribution but along the same trend line. This could result from similar formation processes, possibly hierarchical cluster assembly. We argue that NGC 6231 has expanded from its initial size but that it remains gravitationally bound.
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ∼ L^0.5 and is consistent with the scaling of the localization length ξ ∼ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
Are there laws of genome evolution?
Koonin, Eugene V
2011-08-01
Research in quantitative evolutionary genomics and systems biology led to the discovery of several universal regularities connecting genomic and molecular phenomic variables. These universals include the log-normal distribution of the evolutionary rates of orthologous genes; the power law-like distributions of paralogous family size and node degree in various biological networks; the negative correlation between a gene's sequence evolution rate and expression level; and differential scaling of functional classes of genes with genome size. The universals of genome evolution can be accounted for by simple mathematical models similar to those used in statistical physics, such as the birth-death-innovation model. These models do not explicitly incorporate selection; therefore, the observed universal regularities do not appear to be shaped by selection but rather are emergent properties of gene ensembles. Although a complete physical theory of evolutionary biology is inconceivable, the universals of genome evolution might qualify as "laws of evolutionary genomics" in the same sense "law" is understood in modern physics.
Synchrotron quantification of ultrasound cavitation and bubble dynamics in Al-10Cu melts.
Xu, W W; Tzanakis, I; Srirangam, P; Mirihanage, W U; Eskin, D G; Bodey, A J; Lee, P D
2016-07-01
Knowledge of the kinetics of gas bubble formation and evolution under cavitation conditions in molten alloys is important for the control of casting defects such as porosity and dissolved hydrogen. Using in situ synchrotron X-ray radiography, we studied the dynamic behaviour of ultrasonic cavitation gas bubbles in a molten Al-10 wt%Cu alloy. The size distribution, average radius and growth rate of cavitation gas bubbles were quantified under an acoustic intensity of 800 W/cm² and a maximum acoustic pressure of 4.5 MPa (45 atm). Bubbles exhibited a log-normal size distribution with an average radius of 15.3 ± 0.5 μm. Under the applied sonication conditions the growth of the bubble radius, R(t), followed a power law of the form R(t) = αt^β, with α = 0.0021 and β = 0.89. The observed tendencies are discussed in relation to bubble growth mechanisms in Al alloy melts. Copyright © 2016 Elsevier B.V. All rights reserved.
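A growth law of this form can be recovered from radius-time data by a linear fit in log-log space; the synthetic data below are a stand-in for the radiography measurements, generated under the assumption of the reported α ≈ 0.0021 and β ≈ 0.89 with small multiplicative noise.

```python
import numpy as np

# Synthetic R(t) = a * t**b data with log-normal noise
# (a, b echo the reported values; time grid is illustrative).
a_true, b_true = 0.0021, 0.89
rng = np.random.default_rng(2)
t = np.linspace(0.01, 1.0, 200)
R = a_true * t**b_true * np.exp(rng.normal(0.0, 0.02, t.size))

# ln R = ln a + b * ln t, so a straight-line fit in log-log
# space recovers the exponent (slope) and prefactor (intercept)
b_fit, log_a_fit = np.polyfit(np.log(t), np.log(R), 1)
a_fit = np.exp(log_a_fit)
```

The log-log regression is the standard way to estimate power-law growth parameters, though with real measurements one would also inspect residuals to confirm the power-law form holds over the full time range.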
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is often not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing it. Specifically, transformation with parameter λ = -1 leads to the best results. PMID:25709588
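Interpreting the λ = -1 transformation as the reciprocal member of the power-transform family, a simulation sketch (with hypothetical RT-like parameters μ = 400, σ = 40, τ = 150 ms, not the paper's settings) illustrates how it reduces the skewness of an Ex-Gaussian sample.

```python
import numpy as np

rng = np.random.default_rng(3)
# Ex-Gaussian sample: sum of a normal (mu, sigma) and an
# exponential (tau) component, mimicking RT data in ms
mu, sigma, tau = 400.0, 40.0, 150.0
rt = rng.normal(mu, sigma, 20_000) + rng.exponential(tau, 20_000)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# lambda = -1 power transform: reciprocal, negated so that
# the ordering of the observations is preserved
transformed = -1.0 / rt

before, after = skewness(rt), skewness(transformed)
```

The raw sample is strongly positively skewed (theoretical Ex-Gaussian skewness 2τ³/(σ² + τ²)^{3/2} ≈ 1.8 for these parameters), while the transformed sample is much closer to symmetric.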
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
Statistical properties of the ice particle distribution in stratiform clouds
NASA Astrophysics Data System (ADS)
Delanoe, J.; Tinel, C.; Testud, J.
2003-04-01
This paper presents an extensive analysis of several microphysical databases (CEPEX, EUCREX, CLARE and CARL) to determine statistical properties of the Particle Size Distribution (PSD). The databases cover different types of stratiform clouds: tropical cirrus (CEPEX), mid-latitude cirrus (EUCREX) and mid-latitude cirrus and stratus (CARL, CLARE). The approach for analysis uses the concept of normalisation of the PSD developed by Testud et al. (2001). The normalization aims at isolating three independent characteristics of the PSD: its "intrinsic" shape, the "average size" of the spectrum and the ice water content (IWC); by "average size" is meant the mean mass-weighted diameter D_m. It is shown that concentration should be normalized by N_0^* proportional to IWC/D_m^4. The "intrinsic" shape is defined as F(D_eq/D_m) = N(D_eq)/N_0^*, where D_eq is the equivalent melted diameter. The "intrinsic" shape is found to be very stable.
Effects of inspired CO2, hyperventilation, and time on VA/Q inequality in the dog
NASA Technical Reports Server (NTRS)
Tsukimoto, K.; Arcos, J. P.; Schaffartzik, W.; Wagner, P. D.; West, J. B.
1992-01-01
In a recent study by Tsukimoto et al. (J. Appl. Physiol. 68: 2488-2493, 1990), CO2 inhalation appeared to reduce the size of the high ventilation-perfusion ratio (VA/Q) mode commonly observed in anesthetized mechanically air-ventilated dogs. In that study, large tidal volumes (VT) were used during CO2 inhalation to preserve normocapnia. To separate the influences of CO2 and high VT on the VA/Q distribution in the present study, we examined the effect of inspired CO2 on the high VA/Q mode using eight mechanically ventilated dogs (4 given CO2, 4 controls). The VA/Q distribution was measured first with normal VT and then with increased VT. In the CO2 group at high VT, data were collected before, during, and after CO2 inhalation. With normal VT, there was no difference in the size of the high VA/Q mode between groups [10.5 +/- 3.5% (SE) of ventilation in the CO2 group, 11.8 +/- 5.2% in the control group]. Unexpectedly, the size of the high VA/Q mode decreased similarly in both groups over time, independently of the inspired PCO2, at a rate similar to the fall in cardiac output over time. The reduction in the high VA/Q mode together with a simultaneous increase in alveolar dead space (estimated by the difference between inert gas dead space and Fowler dead space) suggests that poorly perfused high VA/Q areas became unperfused over time. A possible mechanism is that elevated alveolar pressure and decreased cardiac output eliminate blood flow from corner vessels in nondependent high VA/Q regions.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability in many materials, particularly brittle and quasibrittle materials; the coefficient of variation can reach 100% or even more. Its distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
Bimodal and multimodal plant biomass particle mixtures
Dooley, James H.
2013-07-09
An industrial feedstock of plant biomass particles having fibers aligned in a grain, wherein the particles are individually characterized by a length dimension (L) aligned substantially parallel to the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L, wherein the L.times.H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W.times.H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L.times.W dimensions define a pair of substantially parallel top and bottom surfaces, and wherein the particles in the feedstock are collectively characterized by having a bimodal or multimodal size distribution.
Normal CAG and CCG repeats in the Huntington's disease genes of Parkinson's disease patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubinsztein, D.C.; Leggo, J.; Barton, D.E.
1995-04-24
The clinical features of Parkinson's disease, particularly rigidity and bradykinesia and occasionally tremor, are seen in juvenile-onset Huntington's disease. Therefore, the CAG and CCG repeats in the Huntington's disease gene were investigated in 45 Parkinson's disease patients and compared to 40 control individuals. All of the Parkinson's disease chromosomes fell within the normal size ranges. In addition, the distributions of the two repeats in the Parkinson's disease patients did not differ significantly from those of the control population. Therefore, abnormalities of these trinucleotide repeats in the Huntington's disease gene are not likely to contribute to the pathogenesis of Parkinson's disease.
NASA Astrophysics Data System (ADS)
Jacobson, S.; Scheeres, D.; Rossi, A.; Marzari, F.; Davis, D.
2014-07-01
From the results of a comprehensive asteroid-population-evolution model, we conclude that the YORP-induced rotational-fission hypothesis has strong repercussions for the small size end of the main-belt asteroid size-frequency distribution and is consistent with observed asteroid-population statistics and with the observed sub-populations of binary asteroids, asteroid pairs and contact binaries. The foundation of this model is the asteroid-rotation model of Marzari et al. (2011) and Rossi et al. (2009), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis (i.e. when the rotation rate exceeds a critical value, erosion and binary formation occur; Scheeres 2007) and binary-asteroid evolution (Jacobson & Scheeres, 2011). The YORP-effect timescale for large asteroids with diameters D > ˜ 6 km is longer than the collision timescale in the main belt, thus the frequency of large asteroids is determined by a collisional equilibrium (e.g. Bottke 2005), but for small asteroids with diameters D < ˜ 6 km, the asteroid-population evolution model confirms that YORP-induced rotational fission destroys small asteroids more frequently than collisions. Therefore, the frequency of these small asteroids is determined by an equilibrium between the creation of new asteroids out of the impact debris of larger asteroids and the destruction of these asteroids by YORP-induced rotational fission. By introducing a new source of destruction that varies strongly with size, YORP-induced rotational fission alters the slope of the size-frequency distribution. Using the outputs of the asteroid-population evolution model and a 1-D collision evolution model, we can generate this new size-frequency distribution and it matches the change in slope observed by the SKADS survey (Gladman 2009). 
This agreement is achieved with both an accretional power law and a truncated "Asteroids were Born Big" size-frequency distribution (Weidenschilling 2010, Morbidelli 2009). The binary-asteroid evolution model is highly constrained by the modeling done in Jacobson & Scheeres, and therefore the asteroid-population evolution model has only two significant free parameters: the ratio of low- to high-mass-ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. Using this model, we successfully reproduce the observed small-asteroid sub-populations, which orthogonally constrain the two free parameters. We find that the outcome of rotational fission most likely produces an initial mass-ratio fraction that is four to eight times as likely to produce high-mass-ratio systems as low-mass-ratio systems, which is consistent with rotational fission creating binary systems in a flat distribution with respect to mass ratio. We also find that the mean of the log-normal BYORP coefficient distribution is B ≈ 10^{-2}.
Grassberger, Clemens; Dowdell, Stephen; Lomax, Antony; Sharp, Greg; Shackleford, James; Choi, Noah; Willers, Henning; Paganetti, Harald
2013-01-01
Purpose: Quantify the impact of respiratory motion on the treatment of lung tumors with spot scanning proton therapy. Methods and Materials: 4D Monte Carlo simulations were used to assess the interplay effect, which results from relative motion of the tumor and the proton beam, on the dose distribution in the patient. Ten patients with varying tumor sizes (2.6-82.3 cc) and motion amplitudes (3-30 mm) were included in the study. We investigated the impact of the spot size, which varies between proton facilities, and studied single fractions and conventionally fractionated treatments. The following metrics were used in the analysis: minimum/maximum/mean dose, target dose homogeneity and 2-year local control rate (2y-LC). Results: Respiratory motion reduces the target dose homogeneity, with the largest effects observed for the highest motion amplitudes. Smaller spot sizes (σ ≈ 3 mm) are inherently more sensitive to motion, decreasing target dose homogeneity on average by a factor of ~2.8 compared to a larger spot size (σ ≈ 13 mm). Using a smaller spot size to treat a tumor with 30 mm motion amplitude reduces the minimum dose to 44.7% of the prescribed dose, decreasing modeled 2y-LC from 87.0% to 2.7%, assuming a single fraction. Conventional fractionation partly mitigates this reduction, yielding a 2y-LC of 71.6%. For the large spot size, conventional fractionation increases target dose homogeneity and prevents a deterioration of 2y-LC for all patients. No correlation with tumor volume is observed. The effect on the normal lung dose distribution is minimal: observed changes in mean lung dose and lung V20 are <0.6 Gy(RBE) and <1.7%, respectively. Conclusions: For the patients in this study, 2y-LC could be preserved in the presence of interplay using a large spot size and conventional fractionation. For treatments employing smaller spot sizes and/or in the delivery of single fractions, interplay effects can lead to significant deterioration of the dose distribution and lower 2y-LC.
PMID:23462423
NASA Astrophysics Data System (ADS)
Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.
2014-09-01
We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlation. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between each urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.
Size exclusion deep bed filtration: Experimental and modelling uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser
A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass-bead-formed porous media enabled verification of the two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads is an indication of their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as: the inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered as an additional contributor to the weighted mean particle radius and the corresponding weighted mean standard deviation. The weighted mean particle radius and the log-normal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for the experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius excellently predicts normalised suspended particle concentrations for the whole range of jamming ratios.
The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
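The truncated-Taylor-series propagation used here can be sketched generically: the combined standard uncertainty of a derived quantity is the root sum of squares of the partial derivatives times the input CSUs. The particle and pore radii and their CSUs below are assumed values for illustration, and the jamming ratio is taken as the particle-to-pore radius ratio.

```python
import numpy as np

def combined_uncertainty(f, x, u_x, eps=1e-6):
    # first-order (truncated Taylor series) propagation:
    # u_f^2 = sum_i (df/dx_i)^2 * u_i^2, derivatives taken numerically
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grads.append((f(*xp) - f(*x)) / eps)
    return float(np.sqrt(sum((g * u) ** 2 for g, u in zip(grads, u_x))))

# Assumed values (not the paper's measurements): weighted mean particle
# radius and log-normal mean pore-throat radius, each with its CSU (μm)
r_particle, u_particle = 1.55, 0.05
r_pore, u_pore = 5.10, 0.30

jamming_ratio = lambda rp, rt: rp / rt   # dimensionless particle size
u_jam = combined_uncertainty(jamming_ratio,
                             [r_particle, r_pore],
                             [u_particle, u_pore])
```

For the ratio, the numerical result matches the analytic expression u_j = sqrt((u_p/r_t)² + (r_p·u_t/r_t²)²) ≈ 0.020, showing how the pore-radius CSU dominates the jamming-ratio uncertainty, as the abstract reports.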
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Mesoporosity as a new parameter for understanding tension stress generation in trees.
Chang, Shan-Shan; Clair, Bruno; Ruelle, Julien; Beauchêne, Jacques; Di Renzo, Francesco; Quignard, Françoise; Zhao, Guang-Jie; Yamamoto, Hiroyuki; Gril, Joseph
2009-01-01
The mechanism of tree orientation in angiosperms is based on the production of high tensile stress on the upper side of the inclined axis. In many species, the stress level is strongly related to the presence of a peculiar layer, called the G-layer, in the fibre cell wall. The structure of the G-layer has recently been described as a hydrogel, thanks to N2 adsorption-desorption isotherms of supercritically dried samples showing a high mesoporosity (pore sizes from 2 to 50 nm). This led us to revisit the concept of the G-layer, which had until now been described only from anatomical observation. Adsorption isotherms of both normal wood and tension wood have been measured on six tropical species. Measurements show that mesoporosity is high in tension wood with a typical thick G-layer, while it is much lower with a thinner G-layer, sometimes no higher than in normal wood. The mesoporosity of tension wood in species without a G-layer is as low as in normal wood. Regardless of the amount of pores, the pore size distribution is always centred around 6-12 nm. These results suggest that, among species producing fibres with a G-layer, large structural differences of the G-layer exist between species.
Understanding a Normal Distribution of Data.
Maltenfort, Mitchell G
2015-12-01
Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?
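As a quick sketch of the kind of check the article discusses, sample skewness and excess kurtosis (both near zero for normal data) separate a normal sample from a clearly non-normal one; the distribution parameters are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)
normal_data = rng.normal(10.0, 2.0, 5_000)        # plausibly normal
skewed_data = rng.lognormal(0.0, 0.8, 5_000)      # clearly non-normal

def moments(x):
    # standardized third and fourth moments:
    # skewness and excess kurtosis, both ~0 for a normal distribution
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

s_norm, k_norm = moments(normal_data)
s_log, k_log = moments(skewed_data)
```

In practice such moment checks complement formal tests (e.g. Shapiro-Wilk) and quantile-quantile plots; when the data are clearly non-normal, transformations or non-parametric methods are the usual remedies the article points to.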
QEEN Workshop: "Quantifying Exposure to Engineered Nano ...
The measurement and characterization of nanomaterials in biological tissues is complicated by a number of factors including: the sensitivity of the assay to small-sized particles or low concentrations of materials; the ability to distinguish different forms and transformations of the materials related to the biological matrix; distinguishing exogenous nanomaterials, which may be composed of biologically common elements such as carbon, from normal biological tissues; differentiating particle from ionic phases for materials that dissolve; localization of sparsely distributed materials in a complex substrate (the
NASA Technical Reports Server (NTRS)
Witte, Larry C.
1994-01-01
The development of instrumentation for the support of research in two-phase flow in simulated microgravity conditions was performed. The funds were expended in the development of a technique for characterizing the motion and size distribution of small liquid droplets dispersed in a flowing gas. Phenomena like this occur in both microgravity and normal earth gravity situations inside of conduits that are carrying liquid-vapor mixtures at high flow rates. Some effort to develop a conductance probe for the measurement of liquid film thickness was also expended.
Asghari, Fateme; Jahanshahi, Mohsen
2012-09-28
Expanded bed adsorption (EBA), a promising and practical separation technique for the adsorption of nanobioproducts/bioproducts, has been widely studied in the past two decades. The development of adsorbents specially designed for expanded bed processes is a challenging task. To reduce the costs of adsorbent preparation, fine zinc powder was used as an inexpensive densifier. A series of matrices named Ag-Zn were prepared by the water-in-oil emulsification method. The structure and morphology of the prepared matrix were studied by optical microscopy (OM) and scanning electron microscopy (SEM). The physical properties as a function of the ratio of zinc powder to agarose slurry were measured. The prepared matrices had a regular spherical shape and followed a log-normal size distribution, with a size range of 75-330 μm, mean diameter of 140.54-191.11 μm, wet density of 1.33-2.01 g/ml, water content of 0.45-0.75, porosity of 0.86-0.97 and pore size of about 40-90 nm. The bed expansion factor was examined in the range of 2-3. The results indicated that the expansion factor decreased with increasing matrix density. In addition, it was found that matrices with large particle size were suitable for high operational flow rates. The hydrodynamic properties were determined in the expanded bed by the residence time distribution (RTD) method. The effects of flow velocity, expansion factor and matrix density on the hydrodynamic properties were also investigated. Moreover, the influence of particle size distribution on the performance of the expanded bed was studied; to this end, three different particle size fractions (65-140, 215-280 and 65-280 μm) were assessed. The results indicated that dispersion in liquid-solid expanded beds increased with increasing flow rate and expansion factor, and that a matrix with a wide particle size distribution led to reduced axial dispersion compared to matrices with a narrow size distribution.
The axial dispersion coefficient also increased with increasing matrix density. Flow rate was found to be the most important factor affecting the hydrodynamic characteristics of the bed. For all the prepared matrices, the values of the axial mixing coefficient (D(axl)) were smaller than 1.0 × 10⁻⁵ m²/s when flow velocities in the expanded bed were less than 700 cm/h. All the results indicate that the prepared matrices show good expansion and stability in the expanded bed and are suitable for expanded bed processes as an economical adsorbent. Copyright © 2012 Elsevier B.V. All rights reserved.
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
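The normality checks described above (a Shapiro-Wilk test plus a bootstrapped confidence interval for kurtosis) can be sketched as follows. The MD values and their mean and spread are invented stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical retest series: 40 mean-deviation (MD) values in dB for one
# observer (the study analysed 42 fields minus the first two).
md_values = rng.normal(loc=-1.0, scale=0.5, size=40)

# Shapiro-Wilk test: a large p-value means normality cannot be rejected.
w_stat, p_value = stats.shapiro(md_values)

# Bootstrapped 95% CI for kurtosis (Fisher definition: 0 for a normal).
boot_kurt = [stats.kurtosis(rng.choice(md_values, size=md_values.size, replace=True))
             for _ in range(2000)]
ci_low, ci_high = np.percentile(boot_kurt, [2.5, 97.5])

print(f"Shapiro-Wilk p = {p_value:.3f}")
print(f"kurtosis 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
```

If the confidence interval brackets zero, the sample's kurtosis is consistent with a normal distribution, mirroring the paper's criterion.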
Lab-scale ash production by abrasion and collision experiments of porous volcanic samples
NASA Astrophysics Data System (ADS)
Mueller, S. B.; Lane, S. J.; Kueppers, U.
2015-09-01
In the course of explosive eruptions, magma is fragmented into smaller pieces by a plethora of processes before and during deposition. Volcanic ash, fragments smaller than 2 mm, has near-volcano effects (e.g. increasing mobility of PDCs, threat to human infrastructure) but may also cause various problems over long durations and/or far away from the source (human health and aviation matters). We quantify the efficiency of ash generation during experimental fracturing of pumiceous and scoriaceous samples subjected to shear and normal stress fields. Experiments were designed to produce ash by overcoming the yield strength of samples from Tenerife (Canary Islands, Spain), Sicily and the Lipari Islands (Italy), with this study having particular interest in the < 355 μm fraction. Fracturing within volcanic conduits, plumes and pyroclastic density currents (PDCs) was simulated through a series of abrasion (shear) and collision (normal) experiments. An understanding of these processes is crucial as they are capable of producing very fine ash (< 10 μm). These particles can remain in the atmosphere for several days and may travel large distances (~1000s of km). This poses a threat to the aviation industry and human health. From the experiments we establish that abrasion produced the finest-grained material, and up to 50% of the generated ash was smaller than 10 μm. In comparison, the collision experiments that applied mainly normal stress fields produced coarser grain sizes. Results were compared to established grain size distributions for natural fall and PDC deposits and good correlation was found. Energies involved in collision and abrasion experiments were calculated and showed an exponential correlation with ash production rate. Projecting these experimental results into the volcanic environment, the greatest amounts of ash are produced in the most energetic and turbulent regions of volcanic flows, which are proximal to the vent.
Finest grain sizes are produced in PDCs and can be observed as co-ignimbrite clouds above density currents. Finally, a significant dependency was found between material density and the mass of fines produced, also observable in the total particle size distribution: higher values of open porosity promote the generation of finer-grained particles and overall greater ratios of ash. While this paper draws on numerous previous studies of particle comminution processes, it is the first to analyze and compare results of several comminution experiments with each other in order to characterize these mechanisms.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
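The dependence of log-normal skewness on the coefficient of variation can be made concrete with the closed-form relation skew = CV³ + 3·CV (which follows from CV² = exp(σ²) − 1). The CV(e) values below are illustrative placeholders, not the paper's figures:

```python
def lognormal_skewness(cv):
    # For a log-normal distribution the skewness depends only on the
    # coefficient of variation: skew = cv**3 + 3*cv.
    return cv**3 + 3 * cv

# Small biological variation (order of plasma sodium, CV_e ~ 1%):
print(f"{lognormal_skewness(0.01):.3f}")  # near zero: visually Gaussian

# Larger biological variation (order of an enzyme like AST, CV_e ~ 30%):
print(f"{lognormal_skewness(0.30):.3f}")  # clearly right-skewed
```

This is why a log-normal with a small CV(e) is practically indistinguishable from a Gaussian, while a large CV(e) produces a visibly asymmetric distribution.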
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na + ), potassium (K + ), chloride (Cl - ) and bicarbonate [Formula: see text] distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na + distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl - distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K + and [Formula: see text] distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that K + and [Formula: see text] distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
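A test of sample skewness against the zero skew of a normal distribution, as used above, can be sketched with D'Agostino's skewness test. The potassium values below are synthetic stand-ins for the 237-patient panel, generated with a mild right skew:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical plasma potassium values (mmol/L) with a mild right skew,
# standing in for the study's 237 measurements.
k_plasma = rng.lognormal(mean=np.log(4.2), sigma=0.08, size=237)

# D'Agostino skewness test: H0 is that the population skewness is zero.
z_stat, p_value = stats.skewtest(k_plasma)
print(f"sample skewness = {stats.skew(k_plasma):.3f}, p = {p_value:.4f}")
```

When such a test rejects zero skew, nonparametric procedures (e.g. Mann-Whitney rather than a t test) are the appropriate choice, as the abstract concludes.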
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris
2009-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring large and/or intense convective systems. Convective systems were defined from Tropical Rainfall Measuring Mission (TRMM) data as a cluster of pixels with an 85-GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 square kilometers. The study database consisted of convective systems in West Africa from May to September 1998-2007, and in the western Pacific from May to November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences between the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is either wetter or drier than normal.
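Comparing annual cumulative frequency distributions for significant differences, as done above, is commonly handled with a two-sample Kolmogorov-Smirnov test; the abstract does not name its exact test, so this is an illustrative sketch on invented brightness-temperature samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical minimum-brightness-temperature samples (K) for two years,
# drawn here from the same underlying population (illustrative only).
year_a = rng.normal(220.0, 15.0, 400)
year_b = rng.normal(220.0, 15.0, 350)

# The two-sample KS test compares the annual cumulative frequency
# distributions; a large p-value means no detectable interannual shift.
d_stat, p_value = stats.ks_2samp(year_a, year_b)
print(f"KS D = {d_stat:.3f}, p = {p_value:.3f}")
```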
Sedimentological Signatures of Transient Depositional Events in the Cariaco Basin, Venezuela
NASA Astrophysics Data System (ADS)
Elmore, A. C.; Thunell, R. C.; Black, D. E.; Murray, R. W.; Martinez, N. C.
2004-12-01
The varved sediments that have accumulated in the Cariaco Basin throughout the Holocene provide a detailed archive of the region's climatic history, and act as a historical record for the occurrence of phenomena such as earthquakes and coastal flooding. In this study we compare the sedimentological characteristics of lithogenic material collected from the water column during transient depositional events to those of normal hemipelagic sedimentation in the basin. Specifically, we have examined the clay mineralogy and grain size distribution of detrital material delivered to the basin by the July 9, 1997 earthquake near Cumana, Venezuela and the coastal flooding of Venezuela in late 1999. The sample material used in our study was collected as part of an ongoing sediment trap time series in the Cariaco Basin. The sedimentological signatures associated with these two events are distinctive from the typical lithogenic input to the basin. Preliminary data for biweekly samples collected from 1997-1999 show a tri-modal particle size distribution, with peaks at 3, 22, and 80 μm. However, material collected from the deep basin immediately following the 1997 earthquake is characterized by particle diameter peaks at 6 and 22 μm with a smaller than normal peak at 80 μm; this variance suggests an alternate source of material was delivered to the basin via a turbidity flow induced by the earthquake. Supporting this theory, the clay mineralogy of the same sediment trap samples shows a higher than average ratio of kaolinite to quartz for sediments delivered to the basin following both the earthquake and flooding. We hope to extend the use of these sedimentological methods to identify past transient depositional events in Cariaco Basin cores.
Optimists or realists? How ants allocate resources in making reproductive investments.
Enzmann, Brittany L; Nonacs, Peter
2018-04-24
Parents often face an investment trade-off between either producing many small or fewer large offspring. When environments vary predictably, the fittest parental solution matches available resources by varying only the number of offspring and never the optimal individual size. However, when mismatches occur often between parental expectations and true resource levels, dynamic models like multifaceted parental investment (MFPI) and parental optimism (PO) both predict that offspring size can vary significantly. MFPI is a "realist" strategy: parents assume future environments of average richness. When resources exceed expectations and it is too late to add more offspring, the best-case solution increases investment per individual. Brood size distributions therefore track the degree of mismatch from right-skewed around an optimal size (slight underestimation of resources) to left-skewed around a maximal size (gross underestimation). Conversely, PO is an "optimist" strategy: parents assume maximally good resource futures and match numbers to that situation. Normal or lean years do not affect "core" brood as costs primarily fall on excess "marginal" siblings who die or experience stunted growth (producing left-skewed distributions). Investment patterns supportive of both MFPI and PO models have been observed in nature, but studies that directly manipulate food resources to test predictions are lacking. Ant colonies produce many offspring per reproductive cycle and are amenable to experimental manipulation in ways that can differentiate between MFPI and PO investment strategies. Colonies in a natural population of a harvester ant (Pogonomyrmex salinus) were protein-supplemented over 2 years, and mature sexual offspring were collected annually prior to their nuptial flight. Several results support either MFPI or PO in terms of patterns in offspring size distributions and how protein differentially affected male and female production.
Unpredicted by either model, however, is that supplementation affected distributions more strongly across years than within (e.g., small females are significantly rarer in the year after colonies receive protein). Parental investment strategies in P. salinus vary dynamically across years and conditions. Finding that past conditions can more strongly affect reproductive decisions than current ones, however, is not addressed by models of parental investment. © 2018 The Authors. Journal of Animal Ecology © 2018 British Ecological Society.
NASA Astrophysics Data System (ADS)
Reza Barati, Mohammad; Selomulya, Cordelia; Suzuki, Kiyonori
2014-05-01
Magnetic nanoparticles with narrow size distributions have successfully been synthesized by an ultrasonic assisted co-precipitation method. The effects of particle size on magnetic properties, heat generation by AC fields, and the cell cytotoxicity were investigated for MgFe2O4 nanoparticles with mean diameters varying from 7 ± 0.5 nm to 29 ± 1 nm. The critical size for superparamagnetic to ferrimagnetic transition (DS→F) of MgFe2O4 was determined to be about 13 ± 0.5 nm at 300 K. The specific absorption rate (SAR) of MgFe2O4 nanoparticles was strongly size dependent; it showed a maximum value of 19 W/g when the particle size was 10 ± 0.5 nm at which the Néel and Brownian relaxations are the major cause of heating. The SAR value was suppressed dramatically by 46% with increasing particle size from 10 ± 0.5 nm to 13 ± 0.5 nm, where Néel relaxation slows down and SAR results primarily from Brownian relaxation loss. A further reduction in SAR value was evident when the size was increased from 13 ± 0.5 nm to 16 ± 1 nm, where the superparamagnetic to ferromagnetic transition occurs. However, SAR showed a tendency to increase with particle size again above 16 ± 1 nm where hysteresis loss becomes the dominant mechanism of heat generation. The particle size dependence of SAR in the superparamagnetic region was well described by considering the effective relaxation time estimated based on a log-normal size distribution. The clear size dependence of SAR is attributable to the high degree of monodispersity of particles synthesized here. The high SAR value of water-based MgFe2O4 magnetic suspension combined with low cell cytotoxicity suggests a great potential of MgFe2O4 nanoparticles for magnetic hyperthermia therapy applications.
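The size-averaged effective relaxation time mentioned above can be sketched by combining Néel and Brownian relaxation in parallel (1/τ = 1/τ_N + 1/τ_B) and averaging over a log-normal diameter distribution. All material parameters here (anisotropy constant, attempt time, hydrodynamic shell thickness, distribution width) are assumed for illustration and are not the paper's fitted values:

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K
K_eff = 3.0e3       # effective anisotropy constant, J/m^3 (assumed)
tau0 = 1.0e-9       # Néel attempt time, s (assumed)
eta = 1.0e-3        # viscosity of water, Pa*s
shell = 2.0e-9      # hydrodynamic shell thickness, m (assumed)

def effective_tau(d_core):
    """Néel and Brownian relaxation acting in parallel: 1/tau = 1/tau_N + 1/tau_B."""
    v_core = np.pi * d_core**3 / 6.0
    d_hydro = d_core + 2.0 * shell
    v_hydro = np.pi * d_hydro**3 / 6.0
    tau_n = tau0 * np.exp(K_eff * v_core / (kB * T))   # Néel relaxation
    tau_b = 3.0 * eta * v_hydro / (kB * T)             # Brownian relaxation
    return tau_n * tau_b / (tau_n + tau_b)

# Average the relaxation rate over a log-normal diameter distribution
# with a 10 nm median (assumed), then invert to an effective time.
rng = np.random.default_rng(0)
d = rng.lognormal(np.log(10e-9), 0.1, size=50_000)
tau_eff = 1.0 / np.mean(1.0 / effective_tau(d))
print(f"effective relaxation time ~ {tau_eff:.2e} s")
```

Because τ_N grows exponentially with core volume while τ_B grows only linearly, the faster mechanism dominates at each size, which is what makes the SAR so sharply size-dependent.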
Xu, Zhenqiang; Yao, Maosheng
2013-05-01
Increasing evidence shows that inhalation of indoor bioaerosols has caused numerous adverse health effects and diseases. However, the bioaerosol size distribution, composition, and concentration level, representing different inhalation risks, could vary with different living environments. The six-stage Andersen sampler is designed to simulate the sampling of different human lung regions. Here, the sampler was used to investigate the bioaerosol exposure in six different environments (student dorm, hospital, laboratory, hotel room, dining hall, and outdoor environment) in Beijing. During the sampling, the Andersen sampler was operated for 30 min for each sample, and three independent experiments were performed for each of the environments. The air samples collected onto each of the six stages of the sampler were incubated on agar plates directly at 26 °C, and the colony forming units (CFU) were manually counted and statistically corrected. In addition, the developed CFUs were washed off the agar plates and subjected to polymerase chain reaction (PCR)-denaturing gradient gel electrophoresis (DGGE) for diversity analysis. Results revealed that for most environments investigated, the culturable bacterial aerosol concentrations were higher than those of culturable fungal aerosols. The culturable bacterial and fungal aerosol fractions, concentration, size distribution, and diversity were shown to vary significantly with the sampling environments. PCR-DGGE analysis indicated that different environments had different culturable bacterial aerosol compositions as revealed by distinct gel band patterns. For most environments tested, larger (>3 μm) culturable bacterial aerosols with a skewed size distribution were shown to prevail, accounting for more than 60%, while for culturable fungal aerosols with a normal size distribution, those 2.1-4.7 μm dominated, accounting for 20-40%.
Alternaria, Cladosporium, Chaetomium, and Aspergillus were found abundant in most environments studied here. Viable microbial load per unit of particulate matter was also shown to vary significantly with the sampling environments. The results from this study suggested that different environments even with similar levels of total microbial culturable aerosol concentrations could present different inhalation risks due to different bioaerosol particle size distribution and composition. This work fills literature gaps regarding bioaerosol size and composition-based exposure risks in different human dwellings in contrast to a vast body of total bioaerosol levels.
Are CO Observations of Interstellar Clouds Tracing the H2?
NASA Astrophysics Data System (ADS)
Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.
2010-01-01
Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), the most abundant molecule in molecular clouds, is only about 10⁻⁴. This raises the important question of whether the observed CO emission is actually tracing the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion-size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, however, with a significant scatter due to turbulent mixing and self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well-described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion-size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.
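The distinction drawn above, a log-normal H2 PDF versus a CO PDF with a low-density wing, can be illustrated by checking the skewness of the logarithm of column density: near zero for a log-normal, strongly negative when a low-density tail is present. The toy column densities below are invented and do not reproduce the simulations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-ins: H2 column densities drawn log-normally, and CO column
# densities given an extra broad low-density component (illustrative only).
n_h2 = rng.lognormal(mean=21 * np.log(10), sigma=0.5, size=20_000)
n_co = np.concatenate([rng.lognormal(17 * np.log(10), 0.5, 15_000),
                       rng.lognormal(15 * np.log(10), 1.5, 5_000)])

def log_skew(x):
    # Skewness of log(column density): ~0 for a log-normal PDF,
    # strongly negative when a low-density wing is present.
    lx = np.log(x)
    z = (lx - lx.mean()) / lx.std()
    return float(np.mean(z**3))

print(f"H2 log-skew ~ {log_skew(n_h2):+.2f}")
print(f"CO log-skew ~ {log_skew(n_co):+.2f}")
```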
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best fitting distribution p(k) with the least degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e. a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors, such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification and soil respiration, have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with the main environmental factors and is thus likely more correlated with chemical composition and/or ecology. The results indicate the possibility that the distribution in rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507 (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
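The central idea above, that a continuous distribution of decay rates produces an apparent slowdown of decay with age, can be sketched as a superposition of first-order decays, m(t) = E[exp(−k·t)], over a log-normal p(k). The mean and variance of log k below are illustrative, not fitted LIDET values:

```python
import numpy as np

rng = np.random.default_rng(0)

# A log-normal distribution of decay rates k (1/yr); parameters are
# illustrative placeholders, not fitted values from the LIDET data.
mu_logk, sigma_logk = np.log(0.3), 0.8
k = rng.lognormal(mu_logk, sigma_logk, size=100_000)

def remaining_mass(t):
    # Superposition of exponential decays over the rate distribution,
    # m(t) = E[exp(-k t)], approximated by Monte Carlo averaging.
    return float(np.exp(-k * t).mean())

# The mass-weighted decay slows with litter age: fast-decaying components
# disappear first, leaving ever more recalcitrant material behind.
for t in (1.0, 5.0, 20.0):
    print(f"t = {t:5.1f} yr  mass fraction = {remaining_mass(t):.3f}")
```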
Effect of MUF/Epoxy Microcapsules on Mechanical Properties and Fractography of Epoxy Materials
NASA Astrophysics Data System (ADS)
Ni, Zhuo; Lin, Yuhao; Du, Xuexiao
2017-12-01
Melamine-urea-formaldehyde (MUF) microcapsules were synthesized, and their morphology, shell thickness, average diameter and interface morphology were studied by scanning electron microscopy (SEM). The spherical MUF microcapsules follow a normal size distribution without adhesion or accumulation; the shells are compact, rough and uneven, with a thickness of 3.2 μm and a core content of approximately 70%. A latent imidazole was used as the curing agent for the cross-linking chemical reaction in crack repairing. A good dispersion of MUF microcapsules and good interfacial bonding are obtained. The effects of MUF microcapsule size and content on bending properties and dynamic mechanical properties were investigated. Both the bending strength and storage modulus of the composite are considerably reduced with increasing addition of the microcapsules, whereas the glass transition temperatures are almost unaffected. Significant toughening effects of the MUF microcapsules on the epoxy composites are observed for different microcapsule contents and sizes, especially at low microcapsule contents and small microcapsule sizes.
Hydrodynamic fractionation of finite size gold nanoparticle clusters.
Tsai, De-Hao; Cho, Tae Joon; DelRio, Frank W; Taurozzi, Julian; Zachariah, Michael R; Hackley, Vincent A
2011-06-15
We demonstrate a high-resolution in situ experimental method for performing simultaneous size classification and characterization of functional gold nanoparticle clusters (GNCs) based on asymmetric-flow field flow fractionation (AFFF). Field emission scanning electron microscopy, atomic force microscopy, multi-angle light scattering (MALS), and in situ ultraviolet-visible optical spectroscopy provide complementary data and imagery confirming the cluster state (e.g., dimer, trimer, tetramer), packing structure, and purity of fractionated populations. An orthogonal analysis of GNC size distributions is obtained using electrospray-differential mobility analysis (ES-DMA). We find a linear correlation between the normalized MALS intensity (measured during AFFF elution) and the corresponding number concentration (measured by ES-DMA), establishing the capacity for AFFF to quantify the absolute number concentration of GNCs. The results and corresponding methodology summarized here provide the proof of concept for general applications involving the formation, isolation, and in situ analysis of both functional and adventitious nanoparticle clusters of finite size. © 2011 American Chemical Society
NASA Astrophysics Data System (ADS)
Pu, X.; An, R.; Li, R.; Huang, W.; Li, J.
2017-12-01
The objectives of the current study are to investigate the spatial and temporal variation of phosphorus (P) fractions in the middle reaches of the Yarlung Zangbo River of China. Samples were collected in April (dry season), August (wet season), and October (normal season) along the middle reaches from the Lazi site to the Nuxia site, a stretch about 1000 km long. Sequential extraction was applied to determine the forms of phosphorus in suspended particles and to assess the potential bioavailability of particulate P. The results indicated that the distribution of suspended particle size influenced not only the total phosphorus concentration, but also the proportions of different forms of phosphorus. Exchangeable phosphorus (Ex-P), Fe-bound P, and Ca-bound P were the most abundant forms and accounted for the highest proportions of total P. The total P concentrations were closely related to the concentration of suspended particles. According to the characteristics of suspended particles in the Yarlung Zangbo River, the relationship between suspended particle size and phosphorus species was established through statistical analysis. Ex-P increased with decreasing suspended particulate size. The content of bioavailable particulate phosphorus varied greatly with the proportions of particle sizes. In general, the higher the proportion of smaller particle sizes, the higher the content of bioavailable phosphorus. The main factors affecting phosphorus transport in the Yarlung Zangbo River are also discussed.
Effects of sample size on kernel home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
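A fixed-kernel 95% home range estimate of the kind evaluated above can be sketched as follows: build a kernel utilization distribution from relocation points and take the smallest set of grid cells containing 95% of the probability. Note that `scipy.stats.gaussian_kde` uses Scott's rule for bandwidth by default, not the LSCV smoothing the paper recommends, and the relocations below are simulated from a two-component bivariate normal mixture purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated animal relocations: a mixture of two bivariate normals,
# echoing the home-range shapes used in the paper's simulations.
pts = np.vstack([rng.normal([0.0, 0.0], 1.0, (30, 2)),
                 rng.normal([4.0, 0.0], 0.8, (20, 2))])

kde = stats.gaussian_kde(pts.T)  # fixed kernel, Scott's-rule bandwidth

# Evaluate the utilization distribution on a grid and find the 95%
# isopleth: the smallest set of cells holding 95% of the total volume.
xs = np.linspace(-5, 9, 200)
ys = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xs, ys)
dens = kde(np.vstack([X.ravel(), Y.ravel()]))
cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
order = np.sort(dens)[::-1]            # densest cells first
cum = np.cumsum(order) * cell          # cumulative probability
n95 = np.searchsorted(cum, 0.95) + 1
area95 = n95 * cell
print(f"95% home-range area ~ {area95:.2f} (map units squared)")
```

With only 50 relocations, as here, the estimate sits near the sample-size threshold the paper identifies; the bias of the area estimate shrinks rapidly up to about 50 observations per animal.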
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silosky, M; Marsh, R
Purpose: Localizer projection radiographs acquired prior to CT scans are used to estimate patient size, affecting the function of Automatic Tube Current Modulation (ATCM) and hence CTDIvol and SSDE. Due to geometric effects, the projected patient size varies with scanner table height and with the orientation of the localizer (AP versus PA). This study sought to determine if patient size estimates made from localizer scans are affected by variations in fat distribution, specifically when the widest part of the patient is not at the geometric center of the patient. Methods: Lipid gel bolus material was wrapped around an anthropomorphic phantom to simulate two different body mass distributions. The first represented a patient with fairly rigid fat and had a generally oval shape. The second was bell-shaped, representing corpulent patients more susceptible to gravity's lustful tug. Each phantom configuration was imaged using an AP localizer and then a PA localizer. This was repeated at various scanner table heights. The width of the phantom was measured from the localizer and diagnostic images using in-house software. Results: 1) The projected phantom width varied up to 39% as table height changed. 2) At some table heights, the width of the phantom, designed to represent larger patients, exceeded the localizer field of view, resulting in an underestimation of the phantom width. 3) The oval-shaped phantom approached a normalized phantom width of 1 at a table height several centimeters lower (AP localizer) or higher (PA localizer) than did the bell-shaped phantom. Conclusion: Accurate estimation of patient size from localizer scans is dependent on patient positioning with respect to scanner isocenter and is limited in large patients. Further, patient size is more accurately measured on projection images if the widest part of the patient, rather than the geometric center of the patient, is positioned at scanner isocenter.
NASA Astrophysics Data System (ADS)
Dierks, Karsten; Dieckmann, Matthias; Niederstrasser, Dirk; Schwartz, R.; Wegener, Alfred R.
1995-01-01
Laser light source powers of our instrument LEPOSIAR are minimized so that dynamic light scattering (DLS) measurements can be conducted at the lowest intensity levels on human eye lenses (3.2 mW/cm2) within measurement times of 3 to 5 seconds. We describe an extension of DLS and an eye lens characterization along the optical axis (OA), revealing the molecular size ranges together with the distributions found in lens regions parallel to the OA. The microstructures of various lens regions are separated by the analyzed radius distributions, reflecting the visco-elastic properties of the eye lens. Detailed analysis and the applied statistical categorization of results are described. The data obtained by DLS allow for an objective interpretation of opacity occurrences at the molecular size range, which are related to refraction anomalies. Apart from changes in color and fluorescence properties, the refractive anomalies can be assumed to be the origin of the cataract. The bimodal radius distributions are characterized as a function of patient age, varying from 9 to 85 years. Our clinical study on 42 subjects shows a size decrease of the monomeric fraction with age, whereby their relative frequency of occurrence decreases. The larger polymeric radius fractions, which are detected in the lenses of all subjects, increase with age. An increase of protein polymer size is likely to be linked to the decrease of the γ-crystallin fraction in old eye lens nuclei. Our preliminary analysis of clinical results corresponds to the chemical gradients parallel to the OA of the lenses reported by O. Hockwin after examination of extracted eye lenses. The normalization of the lens densitometric data derived after Scheimpflug photography against the protein size fraction analysis by DLS is performed. As a means of comparison between both techniques on identical eye lenses, an absorption-corrected densitometry is conducted for the first time.
NASA Astrophysics Data System (ADS)
Roda-Boluda, Duna; D'Arcy, Mitch; Whittaker, Alex; McDonald, Jordan
2017-04-01
Sediment supply from hillslopes, including volumes, rates and grain size distributions, controls the sediment fluxes from upland areas and modulates how landscapes respond to tectonics. Here, we present new field data from tectonically active areas in southern Italy that quantify how lithology and rock-mass strength control the delivery processes and grain size distributions of sediment supplied from hillslopes. We evaluate the influence of landslides on sediment supply along 8 normal faults with excellent tectonic constraints. Frequency-area analysis of the landslide inventory, and a new field-calibrated area-volume scaling relationship, reveal that landsliding in the area is not dominated by large landslides (β ≈ 2), with 83% of landslides being < 0.1 km² and shallower than 3 m. Based on volumetric estimates and published erosion rates, we infer that our inventory likely represents the integrated record of landsliding over 1-3 kyr, implying minimum sediment fluxes between 6.90 × 10² and 2.07 × 10³ m³/yr. We demonstrate that outcrop-scale rock-mass strength controls both landslide occurrence and the grain sizes supplied by bedrock weathering, for different lithologies. Comparisons of particle size distributions from bedrock weathering with those measured on landslide deposits demonstrate that landslides supply systematically coarser material, with lithology influencing the degree of coarsening. Finally, we evaluate the effect of landslide supply on fluvial sediment export, and show that the D84 grain size increases by ~6 mm for each 100-m increment in incision depth, due to the combination of enhanced landsliding and transport capacity in more incised catchments. Our results reveal a dual control of lithology and rock-mass strength on both the sediment volumes and grain sizes supplied to the fluvial system, which we demonstrate has a significant impact on sediment export from upland areas.
This study provides a uniquely detailed field data set for studying how tectonics and lithology control hillslope erosion and sediment characteristics.
Cathepsin D in normal and neoplastic thyroid tissues.
Kraimps, J L; Métayé, T; Millet, C; Margerit, D; Ingrand, P; Goujon, J M; Levillain, P; Babin, P; Begon, F; Barbier, J
1995-12-01
Cathepsin D is a widely distributed lysosomal acidic endopeptidase. It is an estrogen-regulated protein that is a prognostic factor in breast cancer. The aim of this study was to measure cathepsin D concentrations in thyroid tissues and to correlate these concentrations with clinical and pathologic parameters. Cathepsin D and thyroglobulin concentrations were measured in the cytosol of normal thyroid tissues (n = 14), benign nodules (n = 6), and thyroid carcinomas (n = 32) with an immunoradiometric assay. Statistical analysis was based on the Kruskal-Wallis and Wilcoxon tests and on the Spearman rank correlation coefficient. The mean level of cathepsin D, expressed as picomoles per milligram protein minus thyroglobulin, was higher in the 32 carcinomas, 29.1 +/- 15.5, than in the 14 normal thyroid tissues, 8.4 +/- 2.5 (p < 0.001) or in the 6 benign nodules, 11.2 +/- 7.3 (p = 0.003). Cathepsin D concentrations correlated with tumor size; Spearman rank correlation coefficient was rs = 0.44 (p = 0.012). No significant difference was found regarding histologic type. Cathepsin D concentrations were inversely correlated with the thyroglobulin level in the tumor; Spearman rank correlation coefficient was rs = -0.60 (p < 0.001). Cathepsin D concentration is higher in thyroid carcinoma than in normal thyroid tissue. Increased cathepsin D concentrations correlate with thyroid tumor size but not with histologic type. Further studies should be done to confirm the potential prognostic value of cathepsin D in patients with thyroid carcinomas.
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase in the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
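The mechanism can be illustrated with a small Monte Carlo sketch. The parameter values below are illustrative, and simple-regression estimates stand in for the full mediation model: as the a path grows with b held fixed, the Sobel standard error of ab increases.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_and_se(x, y):
    """OLS slope of y on x and its standard error."""
    coef = np.polyfit(x, y, 1)
    resid = y - np.polyval(coef, x)
    s2 = np.sum(resid ** 2) / (len(x) - 2)
    return coef[0], np.sqrt(s2 / np.sum((x - x.mean()) ** 2))

def mean_sobel_se(a_true, b_true=0.3, n=100, reps=300):
    """Average Sobel SE of the indirect effect ab over simulated datasets."""
    ses = []
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a_true * x + rng.standard_normal(n)   # mediator model
        y = b_true * m + rng.standard_normal(n)   # outcome model
        a, sa = slope_and_se(x, m)
        b, sb = slope_and_se(m, y)
        ses.append(np.sqrt(b**2 * sa**2 + a**2 * sb**2))  # Sobel SE of ab
    return float(np.mean(ses))

se_small_a = mean_sobel_se(0.2)
se_large_a = mean_sobel_se(0.8)   # larger a path inflates the SE of ab
```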
Zero Gravity Aircraft Testing of a Prototype Portable Fire Extinguisher for Use in Spacecraft
NASA Astrophysics Data System (ADS)
Butz, J.; Carriere, T.; Abbud-Madrid, A.; Easton, J.
2012-01-01
For the past five years ADA Technologies has been developing a portable fire extinguisher (PFE) for use in microgravity environments. This technology uses fine water mist (FWM) to effectively and efficiently extinguish fires representative of spacecraft hazards. Recently the FWM PFE was flown on a Zero-G (reduced gravity) aircraft to validate the performance of the technology in a microgravity environment. Test results demonstrated that droplet size distributions generated in the reduced-gravity environment were in the same size range as data collected during normal-gravity (1-g) discharges from the prototype PFE. Data taken in an obscured test configuration showed that the mist behind the obstacle was denser in the low-g environment than in 1-g discharges. The mist behind the obstacle tended toward smaller droplet sizes in both the low-g and 1-g test conditions.
The role of peripheral vision in saccade planning: learning from people with tunnel vision.
Luo, Gang; Vargas-Martin, Fernando; Peli, Eli
2008-12-22
Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7 degrees-16 degrees) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n = 9). In the walking experiment, the patients (n = 5) and normal controls (n = 3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the large extent of the top-down mechanism influence on eye movement control.
Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis
NASA Astrophysics Data System (ADS)
Das, Samiran
2018-04-01
The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness of fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely the Modified Anderson-Darling (MAD) test, is also considered, and its performance is assessed against the other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression-equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
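The Monte Carlo approach to critical values can be sketched as follows. Since the three-parameter GNO is not in common Python libraries, this illustration substitutes a two-parameter log-normal with log-moment estimation for GNO with L-moments; the essential point, re-estimating the parameters inside each replicate before computing the EDF statistic, is unchanged.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def lognorm_cdf(x, mu, sigma):
    """CDF of the log-normal with log-mean mu and log-sd sigma."""
    return np.array([0.5 * (1 + erf((np.log(v) - mu) / (sigma * sqrt(2)))) for v in x])

def ks_stat(sample, mu, sigma):
    """Kolmogorov-Smirnov statistic against the fitted CDF."""
    x = np.sort(sample)
    n = len(x)
    cdf = lognorm_cdf(x, mu, sigma)
    return max(np.max(np.arange(1, n + 1) / n - cdf), np.max(cdf - np.arange(n) / n))

def mc_critical_value(n, nsim=800, alpha=0.05):
    """Critical value when parameters are re-estimated from each sample,
    mirroring the study's Monte Carlo procedure."""
    ds = []
    for _ in range(nsim):
        s = rng.lognormal(0.0, 0.5, n)
        logs = np.log(s)
        ds.append(ks_stat(s, logs.mean(), logs.std(ddof=1)))  # re-fit, then test
    return float(np.quantile(ds, 1 - alpha))

# markedly smaller than the all-parameters-known KS value 1.36/sqrt(n) ~ 0.19
cv = mc_critical_value(n=50)
```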
Mixing at double-Tee junctions with unequal pipe sizes in ...
Pipe flow mixing with various solute concentrations and flow rates at pipe junctions is investigated. The degree of mixing affects the spread of contaminants in a water distribution system. Many studies have been conducted on mixing at cross junctions, yet few have focused on double-Tee junctions of unequal pipe sizes. To investigate solute mixing at double-Tee junctions with unequal pipe sizes, a series of experiments was conducted in a turbulent regime (Re = 12500–50000) with different Reynolds number ratios and connecting pipe lengths. It is shown that dimensionless outlet concentrations depend on the mixing mechanism at the impinging interface of the junctions. Junctions with a larger pipe size ratio are associated with more complete mixing. The inlet Reynolds number ratio affects mixing more strongly than the outlet Reynolds number ratio. Furthermore, the dimensionless connecting pipe length in a double-Tee plays an important and complicated role in the flow mixing. Based on these results, two-dimensional isopleth maps were developed for the calculation of the normalized north outlet concentration. This journal article communicates research results on pipe junction mixing, a widespread and important phenomenon in distribution system water quality analysis. The research outcome improves EPANET modeling capability for safe water supplies. In addition, the research is one of the outputs from the EPA-MOST bilateral cooperative research Project #1
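The baseline against which junction mixing is usually judged, and which network solvers such as EPANET assume by default, is complete mixing: a flow-weighted mass balance at the junction. A minimal sketch with hypothetical flows and concentrations:

```python
def mixed_outlet_concentration(flows, concentrations):
    """Flow-weighted (complete-mixing) concentration leaving a junction."""
    total_flow = sum(flows)
    return sum(q * c for q, c in zip(flows, concentrations)) / total_flow

def dimensionless(c, c_min, c_max):
    """Normalized concentration of the kind used in junction-mixing studies."""
    return (c - c_min) / (c_max - c_min)

# two inlets at equal flow: tracer-free water and tracer-laden water
c_out = mixed_outlet_concentration([1.0, 1.0], [0.0, 1.0])
c_star = dimensionless(c_out, 0.0, 1.0)
```

Measured dimensionless outlet concentrations that deviate from this value quantify incomplete mixing at the impinging interface.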
The use of nano-sized eggshell powder for calcium fortification of cow's and buffalo's milk yogurts.
El-Shibiny, Safinaze; El-Gawad, Mona Abd El-Kader Mohamed Abd; Assem, Fayza Mohamed; El-Sayed, Samah Mosbah
2018-01-01
Calcium is an essential element for the growth, activity, and maintenance of the human body. Eggshells are a waste product which has received growing interest as a cheap and effective source of dietary calcium. Yogurt is a food which can be fortified with functional additives, including calcium. The aim of this study was to produce yogurt with a high calcium content by fortification with nano-sized eggshell powder (nano-ESP). Nano-sized ESP was prepared from pre-boiled and dried eggshell, using a ball mill. Yogurt was prepared from cow's milk supplemented with 3% skimmed milk powder, and from buffalo's milk, fortified with 0.1, 0.2 and 0.3% and 0.1, 0.3 and 0.5% nano-ESP, respectively. Transmission electron microscopy showed that the powder consisted of nano-sized crystalline structures (~10 nm). Laser scattering showed that the particles followed a normal distribution pattern with a z-average of 590.5 nm, and had a negative zeta-potential of -9.33 ±4.2 mV. Results regarding changes in yogurt composition, acid development, calcium distribution, biochemical changes, textural parameters and sensory attributes are presented and discussed. The addition of up to 0.3% nano-ESP yielded cow's and buffalo's high-calcium yogurts with an acceptable composition and quality. High-calcium yogurt may offer better health benefits, such as combating osteoporosis.
Huang, Jing; Bu, Lihong; Xie, Jin; Chen, Kai; Cheng, Zhen; Li, Xingguo; Chen, Xiaoyuan
2010-01-01
The effect of nanoparticle size (30–120 nm) on magnetic resonance imaging (MRI) of hepatic lesions in vivo has been systematically examined using polyvinylpyrrolidone (PVP)-coated iron oxide nanoparticles (PVP-IOs). Such biocompatible PVP-IOs with different sizes were synthesized by a simple one-pot pyrolysis method. These PVP-IOs exhibited good crystallinity and high T2 relaxivities, and the relaxivity increased with the size of the magnetic nanoparticles. It was found that cellular uptake changed with both size and surface physiochemical properties, and that PVP-IO-37 with a core size of 37 nm and hydrodynamic particle size of 100 nm exhibited higher cellular uptake rate and greater distribution than other PVP-IOs and Feridex. We systematically investigated the effect of nanoparticle size on MRI of normal liver and hepatic lesions in vivo. The physical and chemical properties of the nanoparticles influenced their pharmacokinetic behavior, which ultimately determined their ability to accumulate in the liver. The contrast enhancement of PVP-IOs within the liver was highly dependent on the overall size of the nanoparticles, and the 100 nm PVP-IO-37 nanoparticles exhibited the greatest enhancement. These results will have implications in designing engineered nanoparticles that are optimized as MR contrast agents or for use in therapeutics. PMID:21043459
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
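A log-normal probability-of-detection curve of the kind described is simply the normal CDF applied to the logarithm of target size; a minimal sketch (the 50%-detection size and spread below are placeholder values, not figures from the paper):

```python
from math import erf, log, sqrt

def p_detect(size, size50, sigma):
    """Log-normal detection probability: Phi((ln size - ln size50) / sigma)."""
    return 0.5 * (1 + erf((log(size) - log(size50)) / (sigma * sqrt(2))))

# probability is 0.5 at the median size and rises monotonically with size
p_mid = p_detect(2.0, 2.0, 0.4)
p_big = p_detect(4.0, 2.0, 0.4)
```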
Preparation of SiO2@Ag Composite Nanoparticles and Their Antimicrobial Activity.
Qin, Rui; Li, Guian; Pan, Liping; Han, Qingyan; Sun, Yan; He, Qiao
2017-04-01
At ambient temperature, the modified sol-gel method was employed to synthesize SiO2 nanospheres (SiO2 NSs) with an average size of about 352 nm. Silver nanoparticles (Ag NPs) were uniformly distributed on the surface of the SiO2 NSs by a chemical reduction method at 95 °C, and the size of the Ag NPs could be controlled by simply tuning the reaction time and the concentration of sodium citrate. Besides, the size, morphology, structure and optical absorption properties of the SiO2@Ag composite nanoparticles were measured and characterized by laser particle size analysis (LPSA), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD) and ultraviolet-visible absorption spectrometry (UV-Vis), respectively. Furthermore, antimicrobial experiments against gram-negative bacteria (E. coli) and gram-positive bacteria (S. aureus) were carried out to characterize the antibacterial activity of the synthesized SiO2@Ag composite nanoparticles. The results show that the prepared SiO2@Ag composite nanoparticles have strong antimicrobial activity, which is associated with the size of the silver nanoparticles.
Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N
2017-06-21
Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures, mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design.
We report that in the most realistic simulation studies, which incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA, and k-NN. When non-normal error distributions were introduced, the performance of the PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance by the SVM and Random Forest classifiers was observed.
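The sensitivity of k-NN to heavy-tailed errors can be reproduced with a small numpy-only sketch. The dimensions, effect sizes, and noise model below are illustrative choices, with a centred log-normal noise term standing in for the paper's simulated non-normal error distributions.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Plain k-nearest-neighbours majority vote (Euclidean distance)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

def simulate(n, p, shift, heavy_tailed):
    """Two classes separated on the first 5 features; optionally log-normal noise."""
    y = rng.integers(0, 2, n)
    if heavy_tailed:
        X = rng.lognormal(0.0, 1.0, (n, p)) - np.exp(0.5)  # centred, heavy-tailed
    else:
        X = rng.standard_normal((n, p))
    X[:, :5] += shift * y[:, None]
    return X, y

def mean_error(heavy_tailed, reps=10):
    """Average k-NN test error over independent train/test draws."""
    errs = []
    for _ in range(reps):
        Xtr, ytr = simulate(200, 20, 1.5, heavy_tailed)
        Xte, yte = simulate(200, 20, 1.5, heavy_tailed)
        errs.append(np.mean(knn_predict(Xtr, ytr, Xte) != yte))
    return float(np.mean(errs))

err_normal = mean_error(False)
err_heavy = mean_error(True)   # heavier-tailed errors degrade k-NN accuracy
```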
Reconstruction of doses and deposition in the western trace from the Chernobyl accident.
Sikkeland, T; Skuterud, L; Goltsova, N I; Lindmo, T
1997-05-01
A model is presented for the explosive cloud of particulates that produced the western trace of high radioactive ground contamination in the Chernobyl accident on 26 April 1986. The model was developed to reproduce measured dose rates and nuclide contamination and to relate estimated doses to observed changes in: (1) infrared emission from the foliage and (2) morphological and histological structures of individual pines. Dominant factors involved in ground contamination were the initial cloud shape, the particle size distribution, and the rate of particle fallout. At the time of formation, the cloud was assumed to be parabolic and to contain a homogeneous distribution of spherically shaped fuel particulates having a log-normal size distribution. The particulates were dispersed by steady winds and diffusion that produced a straight-line deposition path. The analysis indicates that two clouds, denoted Cloud I and Cloud II, were involved. Fallout from the former dominated the far-field region and fallout from the latter the region near the reactor. At formation they had a full width at half maximum of 1800 m and 500 m, respectively. For wind velocities of 5-10 m s⁻¹, the particulates' radial distribution at formation had a standard deviation and mode of 1.8 µm and 0.5 µm, respectively. This distribution corresponds to a release of 390 GJ in the runaway explosion. The clouds' height and mass are not uniquely determined but are coupled together. For an initial height of 3,600 m, Cloud I contained about 400 kg of fuel. For Cloud II the values were, respectively, 1,500 m and 850 kg. Loss of activities from the clouds is found to be small. Values are obtained for the rate of radionuclide migration from the deposit. Various types of biological damage to pines, as reported in the literature, are shown to be mainly due to ionizing radiation from the deposit by Cloud II. A formula is presented for the particulate size distribution in the trace area.
Lingard, Justin J N; Agus, Emily L; Young, David T; Andrews, Gordon E; Tomlin, Alison S
2006-12-01
A summertime study of the number concentration and size distribution of combustion-derived nanometre-sized particles (termed nanoparticles) from diesel and spark-ignition (SI) engine emissions was made under rush-hour and free-flow traffic conditions at an urban roadside location in Leeds, UK, in July 2003. The measured total particle number concentrations (N(TOTAL)) were of the order of 1.8 × 10⁴ to 3.4 × 10⁴ cm⁻³, and tended to follow the diurnal traffic flow patterns. N(TOTAL) was dominated by particles ≤ 100 nm in diameter, which accounted for between 89 and 93% of the measured particle number. By use of a log-normal fitting procedure, the modal parameters of the number-based particle size distribution of urban airborne particulates were derived from the roadside measurements. Four component modes were identified. Two nucleation modes were found, with a smaller, more minor mode composed principally of sub-11 nm particles, believed to be derived from particles formed from the nucleation of gaseous species in the atmosphere. A second mode, much larger in terms of number, was composed of particles within the size range of 10-20 nm. This second mode was believed to be principally derived from the condensation of unburned fuel and lube oil (the soluble organic fraction, SOF) as it cooled on leaving the engine exhaust. Third and fourth modes were noted within the size ranges of 28-65 nm and 100-160 nm, respectively. The third mode was believed to be representative of internally mixed Aitken-mode particles composed of a soot/ash core with an adsorbed layer of readily volatilisable material. The fourth mode was believed to be composed of chemically aged, secondary particles. The larger nucleation and Aitken modes accounted for between 80 and 90% of the measured N(TOTAL), and the particles in these modes were believed to be derived from SI and diesel engine emissions.
The overall size distribution, particularly in modes II-IV, was observed to be strongly related to the number of primary particle emissions, with larger count median diameters observed under conditions where low numbers of primary soot based particles were present.
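A log-normal fitting procedure of the kind described decomposes the number size distribution into a sum of modes. In the compact sketch below the modal parameters are hypothetical, loosely echoing the four modes above; with the mode positions and widths held fixed, the number concentrations enter linearly and can be recovered by least squares.

```python
import numpy as np

def lognormal_mode(d, n_tot, d_g, sigma_g):
    """One mode of dN/dlogD with geometric mean d_g and geometric SD sigma_g."""
    ln_sig = np.log(sigma_g)
    return n_tot / (np.sqrt(2 * np.pi) * ln_sig) * np.exp(
        -(np.log(d / d_g) ** 2) / (2 * ln_sig ** 2))

# hypothetical (geometric mean nm, geometric SD) for four modes
modes = [(9, 1.3), (15, 1.4), (45, 1.6), (130, 1.5)]

d = np.logspace(np.log10(5), np.log10(400), 60)   # diameter grid, nm

# build a synthetic 'measured' distribution, then solve for the concentrations
true_n = np.array([2e3, 2e4, 8e3, 1e3])           # cm^-3 per mode (illustrative)
measured = sum(lognormal_mode(d, n, dg, sg) for n, (dg, sg) in zip(true_n, modes))
A = np.column_stack([lognormal_mode(d, 1.0, dg, sg) for dg, sg in modes])
n_fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
```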
A novel generalized normal distribution for human longevity and other negatively skewed data.
Robertson, Henry T; Allison, David B
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.
Canopy architecture of a walnut orchard
NASA Technical Reports Server (NTRS)
Ustin, Susan L.; Martens, Scott N.; Vanderbilt, Vern C.
1991-01-01
A detailed dataset describing the canopy geometry of a walnut orchard was acquired to support testing and comparison of the predictions of canopy microwave and optical inversion models. Measured canopy properties included the quantity, size, and orientation of stems, leaves, and fruit. Eight trees receiving 100 percent of estimated potential evapotranspiration water use and eight trees receiving 33 percent of potential water use were measured. The vertical distributions of stem, leaf, and fruit properties are presented with respect to irrigation treatment. Zenith and probability distributions for stems and leaf normals are presented. These data show that, after two years of reduced irrigation, the trees receiving only 33 percent of their potential water requirement had reduced fruit yields, lower leaf area index, and altered allocation of biomass within the canopy.
In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus
NASA Astrophysics Data System (ADS)
Kuhn, Thomas; Heymsfield, Andrew J.
2016-09-01
Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68°N, 21°E). The method used here avoids these issues experienced with the aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascending rate of the balloon, 4 m s⁻¹. The collecting film is advanced uniformly inside the instrument so that an always unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixel) in size or larger, the shape can be recognized. The sampling volume, 130 cm³ s⁻¹, is well defined and independent of particle size.
With the encountered number concentrations of between 4 and 400 L⁻¹, sampling times of about 4 to 90 s were required to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper-tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of around 50 µm and about 80% of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions, achieving very good correlation, with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude over the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, determined directly from the cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC varied only between 1 × 10⁻³ and 5 × 10⁻³ g m⁻³ at temperatures below -40 °C and did not show a clear temperature trend. These measurements are part of an ongoing study.
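A log-normal fit to a binned size distribution, with the correlation coefficient R used as the goodness measure, can be sketched with numpy alone. The synthetic sizes below are illustrative, chosen to echo the ~50 µm median reported above.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic 'measured' particle sizes (um), median near 50 um
sizes = rng.lognormal(mean=np.log(50), sigma=0.6, size=2000)

# bin into a size distribution on a logarithmic grid
edges = np.logspace(np.log10(10), np.log10(400), 25)
counts, _ = np.histogram(sizes, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centres

# log-normal fit from the log-moments of the raw sizes
mu, sig = np.log(sizes).mean(), np.log(sizes).std(ddof=1)
widths = np.diff(np.log(edges))               # bin widths in ln-space
model = len(sizes) * widths / (np.sqrt(2 * np.pi) * sig) * np.exp(
    -(np.log(centers) - mu) ** 2 / (2 * sig ** 2))

# correlation between measured and fitted distributions
r = np.corrcoef(counts, model)[0, 1]
```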
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
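The core idea, making the error distribution an explicit modelling choice, can be sketched with a minimal random-walk Metropolis sampler for a location parameter. This is an illustrative toy (flat prior, fixed scale, plain Python), not the SAS MCMC setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis_mean(data, log_err_pdf, n_iter=4000, step=0.2):
    """Random-walk Metropolis for a location parameter under a chosen error density."""
    mu = data.mean()
    logp = lambda m: np.sum(log_err_pdf(data - m))
    lp, chain = logp(mu), []
    for _ in range(n_iter):
        prop = mu + step * rng.standard_normal()
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            mu, lp = prop, lp_prop
        chain.append(mu)
    return np.array(chain[1000:])                 # drop burn-in

log_normal_pdf = lambda e: -0.5 * e**2                       # normal errors (up to a constant)
nu = 3
log_t_pdf = lambda e: -(nu + 1) / 2 * np.log1p(e**2 / nu)    # heavy-tailed t errors

data = rng.standard_t(3, size=100) + 5.0   # non-normal data with true location 5
post_t = metropolis_mean(data, log_t_pdf)      # error distribution correctly specified
post_n = metropolis_mean(data, log_normal_pdf) # normality blindly assumed
```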
Distribution and determinants of QRS rotation of black and white persons in the general population.
Prineas, Ronald J; Zhang, Zhu-Ming; Stevens, Cladd E; Soliman, Elsayed Z
The prevalence and determinants of QRS transition zones are not well established. We examined the distributions of Normal, clockwise (CW), and counterclockwise (CCW) QRS transition zones and their relations to disease, body size, and demographics in 4624 black and white men and women free of cardiovascular disease and major ECG abnormalities enrolled in the NHANES-III survey. CW transition zones were least observed (6.2%) and CCW were most prevalent (60.1%), with Normal in an intermediate position (33.7%). In multivariable logistic regression analysis, the adjusted significant predictors for CCW compared to Normal were a greater proportion of blacks and women, fewer thin people (BMI < 20), a greater ratio of chest depth to chest width, and an LV mass index < 80 g. By contrast, CW persons were older, had larger QRS/T angles, a smaller ratio of chest depth to chest width, a greater proportion of subjects with low-voltage QRS, more pulmonary disease, a greater proportion with high heart rates, shorter QRS duration, and were more obese (BMI ≥ 30). Normal, rather than being the most prevalent transition zone, was intermediate in frequency between the most frequently encountered CCW and the least frequently encountered CW. Differences in the predictors of CW and CCW exist; further investigation is required to examine how far these differences explain the published prognostic differences between CW and CCW. Copyright © 2017 Elsevier Inc. All rights reserved.
Spatial distribution of soil water repellency in a grassland located in Lithuania
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Novara, Agata
2014-05-01
Soil water repellency (SWR) is recognized to be very heterogeneous in time and space, and depends on soil type, climate, land use, vegetation and season (Doerr et al., 2002). It prevents or reduces water infiltration, with important impacts on soil hydrology, influencing the mobilization and transport of substances into the soil profile. The reduced infiltration increases surface runoff and soil erosion. SWR also reduces seed emergence and plant growth due to the reduced amount of water in the root zone. Positive aspects of SWR are the increase of soil aggregate stability, organic carbon sequestration and reduction of water evaporation (Mataix-Solera and Doerr, 2004; Diehl, 2013). SWR depends on the soil aggregate size. In fire-affected areas it was found that SWR was more persistent in small-size aggregates (Mataix-Solera and Doerr, 2004; Jordan et al., 2011). However, little information is available about the spatial distribution of SWR according to soil aggregate size. The aim of this work is to study the spatial distribution of SWR in fine earth (<2 mm) and in different aggregate sizes: 2-1 mm, 1-0.5 mm, 0.5-0.25 mm and <0.25 mm. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08 E, 158 masl. A 400 m2 plot was established (20 x 20 m, with 5 m spacing between sampling points), and 25 soil samples were collected from the topsoil (0-5 cm) and taken to the laboratory. Prior to SWR assessment, the samples were air dried. The persistence of SWR was analysed according to the Water Drop Penetration Time method, which involves placing three drops of distilled water onto the soil surface and registering the time in seconds (s) required for complete drop penetration (Wessel, 1988). The data did not respect a Gaussian distribution; thus, in order to meet normality requirements, they were log-transformed. Spatial interpolations were carried out using ordinary kriging.
The results showed that the average SWR was 2.88 s in fine earth (coefficient of variation (CV%)=44.62), 1.73 s in 2-1 mm (CV%=45.10), 2.02 s in 1-0.5 mm (CV%=93.75), 3.12 s in 0.5-0.25 mm (CV%=233.68) and 15.54 s in <0.25 mm (CV%=240.74). This suggests that SWR persistence and CV% are higher in the small-size aggregates than in the coarser aggregate sizes. The interpolated maps showed that in fine earth SWR was higher in the western part of the studied plot and lower in the central area. In the 2-1 mm aggregate size it was higher in the southwest and lower in the north and northwest. In the 1-0.5 mm aggregate size it was lower in the central area and higher in the southwest. In the 0.5-0.25 mm aggregate size it was higher in the western part and lower in the north of the plot. In the <0.25 mm fraction no specific pattern was identified and SWR was heterogeneously distributed. This suggests that the spatial distribution of SWR differs considerably with aggregate size. Future studies are needed to identify the causes and consequences of these dynamics. Acknowledgements: The authors appreciate the support of the project "Litfire", Fire effects in Lithuanian soils and ecosystems (MIP-048/2011), funded by the Lithuanian Research Council. References: Diehl, D. (2013) Soil water repellency: Dynamics of heterogeneous surfaces, Colloids and Surfaces A: Physicochem. Eng. Aspects, 432, 8-18. Doerr, S.H., Shakesby, R.A., and Walsh, R.P.D. (2000) Soil water repellency: its causes, characteristics and hydro-geomorphological significance, Earth-Science Reviews, 51, 33-65. Jordan, A., Zavala, L., Mataix-Solera, J., Nava, A.L., Alanis, N. (2011) Effects of fire severity on water repellency and aggregate stability on Mexican volcanic soils, Catena, 84, 136-147. Mataix-Solera, J., Doerr, S. (2004) Hydrophobicity and aggregate stability in calcareous topsoils from fire-affected pine forests in south-eastern Spain, Geoderma, 118, 77-88. Wessel, A.T. (1988) On using the effective contact angle and the water drop penetration time for classification of water repellency in dune soils, Earth Surface Processes and Landforms, 13, 555-562.
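The normalisation step in this abstract can be sketched in a few lines. This is a hedged illustration with synthetic WDPT values (not the study's measurements), checking normality before and after the log transform:

```python
# Hypothetical illustration of the normalisation step: WDPT times are
# positively skewed, so they are log-transformed before ordinary kriging
# (the kriging itself would need an external package such as pykrige).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wdpt = rng.lognormal(mean=1.0, sigma=0.8, size=25)   # 25 sampling points, in s

p_raw = stats.shapiro(wdpt).pvalue          # Shapiro-Wilk on raw times
p_log = stats.shapiro(np.log(wdpt)).pvalue  # and on log-transformed times

# Typically the log-transformed values are far more consistent with normality.
print(round(p_raw, 4), round(p_log, 4))
```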
A Bayesian approach to meta-analysis of plant pathology studies.
Mila, A L; Ngugi, H K
2011-01-01
Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard), which was evaluated in only seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework.
Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allows for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
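For orientation, the classical random-effects counterpart referred to in the abstract can be sketched directly. The effect sizes (log response ratios) and variances below are invented, and the method shown is the DerSimonian-Laird estimator, not the authors' analysis:

```python
# Hedged sketch: classical random-effects meta-analysis (DerSimonian-Laird).
# yi are hypothetical log response ratios from 7 studies; vi their variances.
import numpy as np

yi = np.array([-0.40, -0.25, -0.55, -0.10, -0.30, -0.45, -0.20])
vi = np.array([0.04, 0.06, 0.05, 0.08, 0.03, 0.07, 0.05])

wi = 1.0 / vi
ybar_fixed = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - ybar_fixed) ** 2)          # heterogeneity statistic
df = len(yi) - 1
C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / C)                    # between-study variance

wi_star = 1.0 / (vi + tau2)
mu = np.sum(wi_star * yi) / np.sum(wi_star)      # pooled effect
se = np.sqrt(1.0 / np.sum(wi_star))
ci = (mu - 1.96 * se, mu + 1.96 * se)            # 95% confidence interval
print(round(mu, 3), [round(c, 3) for c in ci])
```

A Bayesian version would replace the plug-in tau2 with a prior and report a credibility interval from the posterior instead.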
A plume capture technique for the remote characterization of aircraft engine emissions.
Johnson, G R; Mazaheri, M; Ristovski, Z D; Morawska, L
2008-07-01
A technique for capturing and analyzing plumes from unmodified aircraft or other combustion sources under real world conditions is described and applied to the task of characterizing plumes from commercial aircraft during the taxiing phase of the Landing/Take-Off (LTO) cycle. The method utilizes a Plume Capture and Analysis System (PCAS) mounted in a four-wheel drive vehicle which is positioned in the airfield 60 to 180 m downwind of aircraft operations. The approach offers low test turnaround times with the ability to complete careful measurements of particle and gaseous emission factors and sequentially scanned particle size distributions without distortion due to plume concentration fluctuations. These measurements can be performed for individual aircraft movements at five minute intervals. A Plume Capture Device (PCD) collected samples of the naturally diluted plume in a 200 L conductive membrane conforming to a defined shape. Samples from over 60 aircraft movements were collected and analyzed in situ for particulate and gaseous concentrations and for particle size distribution using a Scanning Particle Mobility Sizer (SMPS). Emission factors are derived for particle number, NO(x), and PM2.5 for a widely used commercial aircraft type, Boeing 737 airframes with predominantly CFM56 class engines, during taxiing. The practical advantages of the PCAS include the capacity to perform well targeted and controlled emission factor and size distribution measurements using instrumentation with varying response times within an airport facility, in close proximity to aircraft during their normal operations.
Collective motion of groups of self-propelled particles following interacting leaders
NASA Astrophysics Data System (ADS)
Ferdinandy, B.; Ozogány, K.; Vicsek, T.
2017-08-01
In order to keep their cohesiveness during locomotion, gregarious animals must make collective decisions. Many species boast complex societies with multiple levels of communities. A common case is when two dominant levels exist, one corresponding to leaders and the other consisting of followers. In this paper we study the collective motion of such two-level assemblies of self-propelled particles. We present a model adapted from one originally proposed to describe the movement of cells, resulting in a smoothly varying coherent motion. We use the terminology corresponding to large groups of some mammals, where a leader and its followers form a group called a harem. We study the emergence (self-organization) of sub-groups within a herd during locomotion by computer simulations. The resulting processes are compared with our prior observations of a Przewalski horse herd (Hortobágy, Hungary), which we use as a published case study. We find that the model reproduces key features of a herd composed of harems moving on open ground, including fights for followers between leaders and bachelor groups (groups of leaders without followers). One of our findings, however, does not agree with the observations: while in our model the emerging group size distribution is normal, the group size distribution of the observed herd, based on historical data, has been found to follow a lognormal distribution. We argue that this indicates that the formation (and the size) of the harems must involve a more complex social topology than simple spatial-distance-based interactions.
2015-01-01
Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm. PMID:25830773
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2011-01-01
This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…
Normal dimensions of the posterior pituitary bright spot on magnetic resonance imaging.
Côté, Martin; Salzman, Karen L; Sorour, Mohammad; Couldwell, William T
2014-02-01
The normal pituitary bright spot seen on unenhanced T1-weighted MRI is thought to result from the T1-shortening effect of the vasopressin stored in the posterior pituitary. Individual variations in its size may be difficult to differentiate from pathological conditions resulting in either absence of the pituitary bright spot or in T1-hyperintense lesions of the sella. The objective of this paper was to define a range of normal dimensions of the pituitary bright spot and to illustrate some of the most commonly encountered pathologies that result in absence or enlargement of the pituitary bright spot. The authors selected normal pituitary MRI studies from 106 patients with no pituitary abnormality. The size of each pituitary bright spot was measured in the longest axis and in the dimension perpendicular to this axis to describe the typical dimensions. The authors also present cases of patients with pituitary abnormalities to highlight the differences and potential overlap between normal and pathological pituitary imaging. All of the studies evaluated were found to have pituitary bright spots, and the mean dimensions were 4.8 mm in the long axis and 2.4 mm in the short axis. The dimension of the pituitary bright spot in the long axis decreased with patient age. The distribution of dimensions of the pituitary bright spot was normal, indicating that 99.7% of patients should have a pituitary bright spot measuring between 1.2 and 8.5 mm in its long axis and between 0.4 and 4.4 mm in its short axis, an interval corresponding to 3 standard deviations below and above the mean. In cases where the dimension of the pituitary bright spot is outside this range, pathological conditions should be considered. The pituitary bright spot should always be demonstrated on T1-weighted MRI, and its dimensions should be within the identified normal range in most patients. Outside of this range, pathological conditions affecting the pituitary bright spot should be considered.
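The "99.7%" claim above follows from the normal-distribution rule that mean ± 3 SD covers about 99.7% of values. A quick check, with the SD back-computed from the reported interval (the abstract does not state the SDs directly):

```python
# Sanity check of the abstract's 3-SD interval for the long axis.
# sd is inferred from the reported range, assuming the interval is mean +/- 3 SD.
from scipy import stats

mean_long, lo_long, hi_long = 4.8, 1.2, 8.5
sd_long = (hi_long - lo_long) / 6          # span of +/- 3 SD is 6 SDs wide

coverage = stats.norm.cdf(3) - stats.norm.cdf(-3)
print(round(sd_long, 2), round(coverage, 4))  # → 1.22 0.9973
```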
Mohammadi, M; Chen, P
2015-09-01
Solid tumors with different microvascular densities (MVD) have been shown to have different outcomes in clinical studies. Other studies have demonstrated a significant correlation between high MVD, elevated interstitial fluid pressure (IFP) and metastasis in cancers. Elevated IFP in solid tumors prevents drug macromolecules from reaching most cancerous cells. To overcome this barrier, antiangiogenesis drugs can reduce MVD within the tumor and lower IFP. A quantitative approach is essential to compute how much reduction in MVD is required for a specific tumor to reach a desired IFP for drug delivery purposes. Here we provide a computational framework to investigate how IFP is affected by tumor size, MVD, and the location of vessels within the tumor. A general physiologically relevant tumor type with a heterogeneous vascular structure surrounded by normal tissue is utilized. The continuity equation, Darcy's law, and Starling's equation are then applied in the continuum mechanics model, which can calculate IFP for different cases of solid tumors. High MVD causes IFP elevation in solid tumors, and the IFP distribution correlates with the microvascular distribution within tumor tissue. However, for tumors with constant MVD but different microvascular structures, the average values of IFP were found to be the same. Moreover, for a constant MVD and vascular distribution, an increase in tumor size leads to increased IFP. Copyright © 2015 Elsevier Inc. All rights reserved.
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in HE tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three HE tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-square distribution with 2 degrees of freedom), the rate ratios of empirical type I error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original HE test, LRT and Wald test, respectively. For the same alpha level=0.01, these ratios were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8).
Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi-square), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
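Both transformations compared in this abstract are available in SciPy. A hedged sketch on a synthetic chi-square phenotype (df=2, as in the abstract), not the authors' simulation code:

```python
# Hypothetical illustration: Box-Cox vs. Winsorizing on a skewed phenotype.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(42)
pheno = stats.chi2.rvs(df=2, size=500, random_state=rng)  # skewed, positive

bc, lam = stats.boxcox(pheno)                 # lambda chosen by max likelihood
win = np.asarray(winsorize(pheno, limits=[0.05, 0.05]))  # clamp 5% per tail

print(round(lam, 2),
      round(stats.skew(pheno), 2),   # strongly positive skew
      round(stats.skew(bc), 2),      # near zero after Box-Cox
      round(stats.skew(win), 2))     # reduced after Winsorizing
```

Note that Box-Cox requires strictly positive data, which holds here; for phenotypes that can be negative a shift parameter would be needed.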
Wang, Yabo; Wang, Xingyu; Le Bitoux, Marie-Aude; Wagnieres, Georges; Vandenbergh, Hubert; Gonzalez, Michel; Ris, Hans-Beat; Perentes, Jean Y; Krueger, Thorsten
2015-04-01
Pre-conditioning of tumor vessels by low-dose photodynamic therapy (L-PDT) was shown to enhance the distribution of chemotherapy in different tumor types. However, how light dose affects drug distribution and tumor response is unknown. Here we determined the effect of L-PDT fluence on vascular transport in human mesothelioma xenografts. The best L-PDT conditions regarding drug transport were then combined with Lipoplatin(®) to determine tumor response. Nude mice bearing dorsal skinfold chambers were implanted with H-Meso1 cells. Tumors were treated by Visudyne(®)-mediated photodynamic therapy with a fluence rate of 100 mW/cm(2) and a variable fluence (5, 10, 30, and 50 J/cm(2)). FITC-dextran (FITC-D) distribution was assessed in real time in tumor and normal tissues. Tumor response was then determined with the best L-PDT conditions combined with Lipoplatin(®) and compared to controls in luciferase-expressing H-Meso1 tumors by size and whole-body bioluminescence assessment (n = 7/group). Tumor uptake of FITC-D following L-PDT was significantly enhanced by 10-fold in the 10 J/cm(2) group, but not in the 5, 30, and 50 J/cm(2) groups, compared to controls. Uptake of FITC-D in the normal surrounding tissue following L-PDT was significantly enhanced in the 30 J/cm(2) and 50 J/cm(2) groups compared to controls. Altogether, the FITC-D tumor-to-normal-tissue ratio was significantly higher in the 10 J/cm(2) group compared to the others. Tumor growth was significantly delayed in animals treated by 10 J/cm(2) L-PDT combined with Lipoplatin(®) compared to controls. Thus, the fluence of L-PDT is critical for the optimal distribution and effect of subsequently administered chemotherapy. These findings are important for the clinical translation of the vascular L-PDT concept to the clinic. © 2015 Wiley Periodicals, Inc.
A Model for Hydraulic Properties Based on Angular Pores with Lognormal Size Distribution
NASA Astrophysics Data System (ADS)
Durner, W.; Diamantopoulos, E.
2014-12-01
Soil water retention and unsaturated hydraulic conductivity curves are mandatory for modeling water flow in soils. It is a common approach to measure a few points of the water retention curve and to calculate the hydraulic conductivity curve by assuming that the soil can be represented as a bundle of capillary tubes. Both curves are then used to predict water flow at larger spatial scales. However, the predictive power of these curves is often very limited. This can be easily illustrated if we measure the soil hydraulic properties (SHPs) in a drainage experiment and then use these properties to predict water flow in the case of imbibition. Further complications arise from the incomplete wetting of water at the solid matrix, which results in finite values of the contact angles at the solid-water-air interfaces. To address these problems we present a physically based model for hysteretic SHPs. The model is based on bundles of angular pores. Hysteresis for individual pores is caused (i) by different snap-off pressures during filling and emptying of single angular pores and (ii) by different advancing and receding contact angles for fluids that are not perfectly wetting. We derive a model of hydraulic conductivity as a function of contact angle by assuming flow perpendicular to pore cross sections, and present closed-form expressions for both the sample-scale water retention and hydraulic conductivity functions by assuming a log-normal statistical distribution of pore size. We tested the new model against drainage and imbibition experiments on various sandy materials conducted with liquids of differing wettability. The model described both imbibition and drainage experiments very well by assuming a unique pore size distribution of the sample and a zero contact angle for the perfectly wetting liquid. Eventually, we see the possibility of relating the particle size distribution to a model that describes the SHPs.
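The closed-form idea, a retention curve derived from a log-normal pore-size distribution, can be sketched with Kosugi's well-known log-normal retention function. This is an illustration in that spirit, not the authors' angular-pore model, and all parameter values are invented:

```python
# Hedged sketch: effective saturation vs. suction for a log-normal pore-size
# distribution (Kosugi-type model). hm and sigma below are illustrative only.
import numpy as np
from scipy.special import erfc

def kosugi_Se(h, hm=100.0, sigma=1.5):
    """Effective saturation at suction h (cm).

    Se(h) = 0.5 * erfc(ln(h/hm) / (sigma * sqrt(2)))
    hm: suction at the median pore radius; sigma: log-std of pore sizes.
    """
    h = np.asarray(h, dtype=float)
    return 0.5 * erfc(np.log(h / hm) / (sigma * np.sqrt(2.0)))

h = np.logspace(0, 4, 5)            # suctions from 1 to 10^4 cm
print(np.round(kosugi_Se(h), 3))    # saturation falls monotonically with h
```

Hysteresis would enter this picture through different effective hm values (or contact angles) for drainage and imbibition branches.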
Bagri, Akbar; Hanson, John P.; Lind, J. P.; ...
2016-10-25
We use high-energy X-ray diffraction microscopy (HEDM) to characterize the microstructure of Ni-base alloy 725. HEDM is a non-destructive technique capable of providing three-dimensional reconstructions of grain shapes and orientations in polycrystals. The present analysis yields the grain size distribution in alloy 725 as well as the grain boundary character distribution (GBCD) as a function of lattice misorientation and boundary plane normal orientation. We find that the GBCD of Ni-base alloy 725 is similar to that previously determined in pure Ni and other fcc-base metals. We find an elevated density of Σ9 and Σ3 grain boundaries. We also observe a preponderance of grain boundaries along low-index planes, with those along (1 1 1) planes being the most common, even after Σ3 twins have been excluded from the analysis.
Resistance distribution in the hopping percolation model.
Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad
2005-07-01
We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(−κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion b κ^ν / L, where b is a coefficient which depends on the type of lattice and ν is the correlation critical exponent.
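A hedged numerical sketch of the model just described: bond conductances are drawn as σ ∝ exp(−κr), and the two-terminal resistance of a small 2D lattice is computed from the graph Laplacian. Lattice size, κ, and sample count are illustrative choices, not the paper's:

```python
# Hedged sketch: two-terminal resistance of a 2D hopping-percolation network,
# solved exactly via the pseudo-inverse of the weighted graph Laplacian.
import numpy as np

def effective_resistance(L=8, kappa=5.0, seed=0):
    rng = np.random.default_rng(seed)
    n = L * L
    idx = lambda i, j: i * L + j
    G = np.zeros((n, n))                       # weighted Laplacian
    for i in range(L):
        for j in range(L):
            for di, dj in ((0, 1), (1, 0)):    # right and down bonds
                ii, jj = i + di, j + dj
                if ii < L and jj < L:
                    g = np.exp(-kappa * rng.random())   # bond conductance
                    a, b = idx(i, j), idx(ii, jj)
                    G[a, a] += g; G[b, b] += g
                    G[a, b] -= g; G[b, a] -= g
    # resistance between opposite corners from the Laplacian pseudo-inverse
    Gp = np.linalg.pinv(G)
    s, t = idx(0, 0), idx(L - 1, L - 1)
    return Gp[s, s] + Gp[t, t] - 2 * Gp[s, t]

rhos = [effective_resistance(seed=k) for k in range(200)]
log_rho = np.log(rhos)
print(round(float(np.mean(log_rho)), 2), round(float(np.std(log_rho)), 2))
```

With many such realizations, a histogram of log ρ can be compared against a normal fit to probe the log-normal approximation the abstract reports.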
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
Prediction of Mean and Design Fatigue Lives of Self Compacting Concrete Beams in Flexure
NASA Astrophysics Data System (ADS)
Goel, S.; Singh, S. P.; Singh, P.; Kaushik, S. K.
2012-02-01
In this paper, the results of an investigation conducted to study the flexural fatigue characteristics of self compacting concrete (SCC) beams are presented. An experimental programme was planned in which approximately 60 SCC beam specimens of size 100 × 100 × 500 mm were tested under flexural fatigue loading. Approximately 45 static flexural tests were also conducted to facilitate fatigue testing. The flexural fatigue and static flexural strength tests were conducted on a 100 kN servo-controlled actuator. The fatigue life data thus obtained have been used to establish the probability distributions of fatigue life of SCC using the two-parameter Weibull distribution. The parameters of the Weibull distribution have been obtained by different methods of analysis. Using the distribution parameters, the mean and design fatigue lives of SCC have been estimated and compared with those of normally vibrated concrete (NVC), the data for which have been taken from the literature. It has been observed that SCC exhibits higher mean and design fatigue lives compared to NVC.
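The two-parameter Weibull analysis described above can be sketched with SciPy. The cycle counts below are synthetic, not the SCC test results, and the reliability level chosen for the design life is an assumption:

```python
# Hypothetical illustration: two-parameter Weibull fit of fatigue-life data,
# then mean life and a design life at an assumed 90% reliability level.
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lives = stats.weibull_min.rvs(c=1.8, scale=2.0e5, size=60, random_state=rng)

# Two-parameter fit: location fixed at zero.
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)

mean_life = scale * math.gamma(1.0 + 1.0 / shape)        # mean fatigue life
design_life = scale * (-math.log(0.9)) ** (1.0 / shape)  # 10% failure quantile

print(round(shape, 2), round(mean_life), round(design_life))
```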
Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.
Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S
2017-01-05
The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data and significantly reduce the workload to simulate experimentally relevant virtual samples.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1978-01-01
Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on raindrop size distribution, provided these frequencies are not widely separated.
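Given a log-normal fit of the kind described above, exceedance statistics follow directly from the survival function. A hedged sketch with invented fit parameters (the median attenuation and log-std below are not the paper's values):

```python
# Hypothetical illustration: fraction of time a fade depth is exceeded,
# assuming a log-normal attenuation distribution with invented parameters.
from scipy import stats

median_db = 2.0      # assumed median attenuation, dB (illustrative)
sigma_ln = 1.1       # assumed std of ln(attenuation) (illustrative)

fade = stats.lognorm(s=sigma_ln, scale=median_db)  # scale = exp(mu) = median

p_exceed_12db = fade.sf(12.0)   # fraction of time fades exceed 12 dB
print(round(p_exceed_12db * 100, 2), "% of the time")
```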
Krishnamoorthi, R; Anna Poorani, G
2016-01-01
Iris normalization is an important stage in any iris biometric system, as it tends to reduce the consequences of iris distortion. To compensate for variation in the size of the iris, owing to stretching or enlargement of the pupil during the iris acquisition process and to camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.
Particle Shape and Composition of NU-LHT-2M
NASA Technical Reports Server (NTRS)
Rickman, D. L.; Lowers, H.
2012-01-01
Particle shapes of the lunar regolith simulant NU-LHT-2M were analyzed by scanning electron microscopy of polished sections. These data provide shape, size, and composition information on a particle-by-particle basis. 5,193 particles were measured, divided into four size fractions: less than 200 mesh, 200-100 mesh, 100-35 mesh, and greater than 35 mesh. 99.2% of all particles were monominerallic. Minor size-versus-composition effects were noted in minor and trace mineralogy. The two metrics used are aspect ratio and Heywood factor, plotted as normalized frequency distributions. Shape-versus-composition effects were noted for glass and possibly chlorite. To aid in analysis, the measured shape distributions are compared to data for ellipses and rectangles. Several other simple geometric shapes are also investigated as to how they plot in aspect ratio versus Heywood factor space. The bulk of the data previously reported, which were acquired in a plane of projection, fall between the ellipse and rectangle lines. In contrast, these data, which were acquired in a plane of section, clearly show that a significant number of particles have concave hulls in this view. Appendices cover details of measurement error, use of geometric shapes for comparative analysis, and a logic for comparing data from plane-of-projection and plane-of-section measurements.
Particle impactor assembly for size selective high volume air sampler
Langer, Gerhard
1988-08-16
Air containing entrained particulate matter is directed through a plurality of parallel, narrow, vertically oriented impactor slots of an inlet element toward an adjacently located, relatively large, dust impaction surface preferably covered with an adhesive material. The air flow turns over the impaction surface, leaving behind the relatively larger particles according to the human thoracic separation system and passes through two elongate exhaust apertures defining the outer bounds of the impaction collection surface to pass through divergent passages which slow down and distribute the air flow, with entrained smaller particles, over a fine filter element that separates the fine particles from the air. The elongate exhaust apertures defining the impaction collection surface are spaced apart by a distance greater than the lengths of elongate impactor slots in the inlet element and are oriented to be normal thereto. By appropriate selection of dimensions and the number of impactor slots air flow through the inlet element is provided a nonuniform velocity distribution with the lower velocities being obtained near the center of the impactor slots, in order to separate out particles larger than a certain predetermined size on the impaction collection surface. The impaction collection surface, even in a moderately sized apparatus, is thus relatively large and permits the prolonged sampling of air for periods extending to four weeks.
Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.
Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi
2007-09-01
A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.